Jiang, Jie; Yu, Wenbo; Zhang, Guangjun
2017-01-01
Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Accuracy assessment of an INS in its real working environment is urgently needed because of the large differences between real working and laboratory test environments. Attitude accuracy assessment of an INS based on an intensified high dynamic star tracker (IHDST) is particularly suitable for a real, complex dynamic environment. However, the coupled systematic coordinate errors of the INS and the IHDST severely degrade the attitude assessment accuracy. Given that, a high-accuracy decoupling estimation method for these systematic coordinate errors, based on the constrained least squares (CLS) method, is proposed in this paper. The reference frame of the IHDST is first converted to be consistent with that of the INS, because the two reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established, and the CLS-based optimization method is used to estimate the errors accurately. After compensating for these errors, the attitude accuracy of the INS can be accurately assessed based on the IHDST. Both simulated experiments and real aircraft flight experiments were conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for attitude accuracy assessment of an INS in a real working environment. PMID:28991179
A fast RCS accuracy assessment method for passive radar calibrators
NASA Astrophysics Data System (ADS)
Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI
2016-10-01
In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is often deformed during transportation and installation, or by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation was proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm was selected and its simulation accuracy was assessed. In the second step, a 3-D measuring instrument was selected and its measuring accuracy was evaluated. Once the accuracies of the selected RCS simulation algorithm and 3-D measuring instrument were satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector was obtained with the 3-D measuring instrument, and the RCSs of the measured 3-D structure and of the corresponding ideal structure were then calculated with the selected RCS simulation algorithm. The final RCS accuracy was the absolute difference of the two RCS calculation results. The advantage of the proposed method was that it could easily be applied outdoors, avoiding the correlation among the plate edge length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using the distortion equation. At the end of the paper, a measurement example was presented to show the performance of the proposed method.
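The final accuracy figure described above reduces to an absolute difference between two simulated RCS values. A minimal sketch of that comparison step; the function name and the dBsm values are illustrative assumptions, not data from the paper:

```python
def rcs_accuracy_db(rcs_deformed, rcs_ideal):
    """Per-angle absolute difference (dB) between the RCS simulated for the
    measured (deformed) corner-reflector structure and for the ideal
    structure, plus the worst-case difference over all aspect angles."""
    diffs = [abs(d - i) for d, i in zip(rcs_deformed, rcs_ideal)]
    return diffs, max(diffs)

# hypothetical simulated RCS values (dBsm) at two aspect angles
diffs, worst = rcs_accuracy_db([24.0, 23.5], [24.2, 24.0])
```

In practice each list would hold simulated RCS over a sweep of aspect angles, so the worst-case difference bounds the calibration error introduced by the deformation.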
Accuracy of remotely sensed data: Sampling and analysis procedures
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Oderwald, R. G.; Mead, R. A.
1982-01-01
A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given, along with a listing of the computer program written to implement them. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is presented, together with the resulting matrices from the mapping effort of the San Juan National Forest. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is described, as is a proposed method for determining the reliability of change detection between two maps of the same area produced at different times.
Ground Truth Sampling and LANDSAT Accuracy Assessment
NASA Technical Reports Server (NTRS)
Robinson, J. W.; Gunther, F. J.; Campbell, W. J.
1982-01-01
It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of those classifications and of the other procedures used. The purpose of the accuracy assessment was to allow comparison of the cost and accuracy of various classification procedures as applied to various data types.
Data accuracy assessment using enterprise architecture
NASA Astrophysics Data System (ADS)
Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias
2011-02-01
Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.
Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori
2011-01-01
In this study, we proposed and evaluated a positional accuracy assessment method using two high-resolution digital cameras for add-on six-degrees-of-freedom (6D) radiotherapy couches. Two high-resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. The cameras were placed on two orthogonal axes of the linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle fixed on the 6D couch were taken by the cameras during translational and rotational couch motions about each axis. The coordinates of the needle in the pictures were obtained by manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. The proposed method was able to evaluate the accuracy of the motion of the 6D couch alone and revealed the deviation of the origin of couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.
Gaber, Ramy M; Shaheen, Eman; Falter, Bart; Araya, Sebastian; Politis, Constantinus; Swennen, Gwen R J; Jacobs, Reinhilde
2017-11-01
The aim of this study was to systematically review methods used for assessing the accuracy of 3-dimensional virtually planned orthognathic surgery in an attempt to reach an objective assessment protocol that could be universally used. A systematic review of the currently available literature, published until September 12, 2016, was conducted using PubMed as the primary search engine. We performed secondary searches using the Cochrane Database, clinical trial registries, Google Scholar, and Embase, as well as a bibliography search. Included articles were required to have stated clearly that 3-dimensional virtual planning was used and accuracy assessment performed, along with validation of the planning and/or assessment method. Descriptive statistics and quality assessment of included articles were performed. The initial search yielded 1,461 studies. Only 7 studies were included in our review. Considerable variability was found in the methods used for 1) accuracy assessment of virtually planned orthognathic surgery and 2) validation of the tools used. Included studies were of moderate quality; reviewers' agreement regarding quality was calculated to be 0.5 using the Cohen κ test. On the basis of the findings of this review, it is evident that the literature lacks consensus regarding accuracy assessment. Hence, a protocol is suggested for accuracy assessment of virtually planned orthognathic surgery with the lowest margin of error. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Soil quality assessment using weighted fuzzy association rules
Xue, Yue-Ju; Liu, Shu-Guang; Hu, Yue-Ming; Yang, Jing-Feng
2010-01-01
Fuzzy association rules (FARs) can be powerful in assessing regional soil quality, a critical step prior to land planning and utilization; however, traditional FARs mined from a soil quality database, which ignore the variable importance of the rules, can be redundant and far from optimal. In this study, we developed a method that applies different weights to traditional FARs to improve the accuracy of soil quality assessment. After the FARs for soil quality assessment were mined, redundant rules were eliminated according to their significance, reducing the complexity of the soil quality assessment models and improving the comprehensibility of the FARs. Global weights, each representing the importance of a FAR in soil quality assessment, were then introduced and refined using a gradient descent optimization method. This method was applied to the assessment of soil resource conditions in Guangdong Province, China. The new approach had an accuracy of 87% when 15 rules were mined, compared with 76% for the traditional approach. The accuracy increased to 96% when 32 rules were mined, in contrast to 88% for the traditional approach. These results demonstrate the improved comprehensibility of FARs and the high accuracy of the proposed method.
Assessment of the Thematic Accuracy of Land Cover Maps
NASA Astrophysics Data System (ADS)
Höhle, J.
2015-08-01
Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index), both of which are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy and the kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the experience gained. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of the six classes was 14% of the user's accuracy.
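The accuracy measures named above (overall, user's, and producer's accuracy, and the kappa coefficient) all derive from the error (confusion) matrix in the standard way. A minimal sketch under those usual definitions; the example matrix is invented, not the paper's data:

```python
def accuracy_measures(cm):
    """Thematic accuracy measures from a square confusion matrix, where
    cm[i][j] counts samples mapped as class i whose reference class is j.
    Returns overall accuracy, per-class user's and producer's accuracy,
    and Cohen's kappa."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(k))
    row_tot = [sum(row) for row in cm]                      # mapped totals
    col_tot = [sum(cm[i][j] for i in range(k)) for j in range(k)]  # reference totals
    overall = diag / n
    users = [cm[i][i] / row_tot[i] for i in range(k)]       # commission view
    producers = [cm[j][j] / col_tot[j] for j in range(k)]   # omission view
    expected = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2
    kappa = (overall - expected) / (1 - expected)
    return overall, users, producers, kappa
```

For example, `accuracy_measures([[45, 5], [5, 45]])` gives an overall accuracy of 0.9 and a kappa of 0.8.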
Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar
2015-01-01
For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single-frequency Sun prediction equations at the population level was close to that of both BIS methods; however, when comparing the Mean Absolute Percentage Error between the single-frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both the population and individual levels, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status in dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
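The Mean Absolute Percentage Error comparison used above is simple to reproduce. A small sketch of the metric; the TBW values are invented, not from the study:

```python
def mape(predicted, reference):
    """Mean absolute percentage error of predicted values (e.g. TBW from
    an EBI method) against reference values (e.g. dilution-based TBW)."""
    return 100.0 * sum(abs(p - r) / r
                       for p, r in zip(predicted, reference)) / len(reference)

# hypothetical TBW estimates (litres) vs. reference values
error_pct = mape([42.0, 38.0], [40.0, 40.0])  # 5.0 % on this toy data
```

Comparing this per-method figure, rather than only correlation coefficients, is what exposed the small BIS advantage reported above.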
Faure, Elodie; Danjou, Aurélie M N; Clavel-Chapelon, Françoise; Boutron-Ruault, Marie-Christine; Dossus, Laure; Fervers, Béatrice
2017-02-24
Environmental exposure assessment based on Geographic Information Systems (GIS) and study participants' residential proximity to environmental exposure sources relies on the positional accuracy of subjects' residences to avoid misclassification bias. Our study compared the positional accuracy of two automatic geocoding methods to a manual reference method. We geocoded 4,247 address records representing the residential history (1990-2008) of 1,685 women from the French national E3N cohort living in the Rhône-Alpes region. We compared two automatic geocoding methods, a free online geocoding service (method A) and an in-house geocoder (method B), to a reference layer created by manually relocating addresses from method A (method R). For each automatic geocoding method, positional accuracy levels were compared according to the urban/rural status of addresses and time periods (1990-2000, 2001-2008), using chi-square tests. Kappa statistics were computed to assess agreement of the positional accuracy of methods A and B with the reference method, overall, by time period, and by urban/rural status of addresses. With methods A and B respectively, 81.4% and 84.4% of addresses were geocoded to the exact address (65.1% and 61.4%) or to the street segment (16.3% and 23.0%). In the reference layer, geocoding accuracy was higher in urban areas than in rural areas (74.4% vs. 10.5% of addresses geocoded to the address or interpolated address level, p < 0.0001); no difference was observed according to the period of residence. Compared to the reference method, median positional errors were 0.0 m (IQR = 0.0-37.2 m) and 26.5 m (8.0-134.8 m), with positional errors <100 m for 82.5% and 71.3% of addresses, for methods A and B respectively. Positional agreement of methods A and B with method R was 'substantial' for both methods, with kappa coefficients of 0.60 and 0.61, respectively.
Our study demonstrates the feasibility of geocoding residential addresses in epidemiological studies not initially recorded for environmental exposure assessment, for both recent addresses and residence locations more than 20 years ago. Accuracy of the two automatic geocoding methods was comparable. The in-house method (B) allowed a better control of the geocoding process and was less time consuming.
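The positional errors reported above are distances between an automatically geocoded point and its manually relocated reference. A minimal great-circle (haversine) sketch; the coordinates and the mean Earth radius are generic assumptions, not values from the study:

```python
from math import radians, sin, cos, asin, sqrt

def positional_error_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres between an automatically geocoded
    point and its manual reference, via the haversine formula."""
    R = 6371000.0  # mean Earth radius in metres
    dlon, dlat = radians(lon2 - lon1), radians(lat2 - lat1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))
```

Applying this to every address pair and taking the median and IQR yields summary figures of the kind quoted above (0.0 m and 26.5 m for methods A and B).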
Evaluating Rater Accuracy in Rater-Mediated Assessments Using an Unfolding Model
ERIC Educational Resources Information Center
Wang, Jue; Engelhard, George, Jr.; Wolfe, Edward W.
2016-01-01
The number of performance assessments continues to increase around the world, and it is important to explore new methods for evaluating the quality of ratings obtained from raters. This study describes an unfolding model for examining rater accuracy. Accuracy is defined as the difference between observed and expert ratings. Dichotomous accuracy…
Lessons in molecular recognition. 2. Assessing and improving cross-docking accuracy.
Sutherland, Jeffrey J; Nandigam, Ravi K; Erickson, Jon A; Vieth, Michal
2007-01-01
Docking methods are used to predict the manner in which a ligand binds to a protein receptor. Many studies have assessed the success rate of programs in self-docking tests, whereby a ligand is docked into the protein structure from which it was extracted. Cross-docking, or using a protein structure from a complex containing a different ligand, provides a more realistic assessment of a docking program's ability to reproduce X-ray results. In this work, cross-docking was performed with CDocker, Fred, and Rocs using multiple X-ray structures for eight proteins (two kinases, one nuclear hormone receptor, one serine protease, two metalloproteases, and two phosphodiesterases). While average cross-docking accuracy is not encouraging, it is shown that using the protein structure from the complex that contains the bound ligand most similar to the docked ligand increases docking accuracy for all methods ("similarity selection"). Identifying the most successful protein conformer ("best selection") and similarity selection substantially reduce the difference between self-docking and average cross-docking accuracy. We identify universal predictors of docking accuracy (i.e., showing consistent behavior across most protein-method combinations), and show that models for predicting docking accuracy built using these parameters can be used to select the most appropriate docking method.
Use of refractometry and colorimetry as field methods to rapidly assess antimalarial drug quality.
Green, Michael D; Nettey, Henry; Villalva Rojas, Ofelia; Pamanivong, Chansapha; Khounsaknalath, Lamphet; Grande Ortiz, Miguel; Newton, Paul N; Fernández, Facundo M; Vongsack, Latsamy; Manolin, Ot
2007-01-04
The proliferation of counterfeit and poor-quality drugs is a major public health problem, especially in developing countries lacking adequate resources to effectively monitor their prevalence. Simple and affordable field methods provide a practical means of rapidly monitoring drug quality in circumstances where more advanced techniques are not available. Therefore, we have evaluated refractometry, colorimetry and a technique combining both processes as simple and accurate field assays to rapidly test the quality of the commonly available antimalarial drugs: artesunate, chloroquine, quinine, and sulfadoxine. Method bias, sensitivity, specificity and accuracy relative to high-performance liquid chromatographic (HPLC) analysis of drugs collected in the Lao PDR were assessed for each technique. The HPLC method for each drug was evaluated in terms of assay variability and accuracy. The accuracy of the combined method ranged from 0.96 to 1.00 for artesunate tablets, chloroquine injectables, quinine capsules, and sulfadoxine tablets, while the accuracy was 0.78 for enterically coated chloroquine tablets. These techniques provide a generally accurate, yet simple and affordable means to assess drug quality in resource-poor settings.
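The sensitivity, specificity, and accuracy figures above are standard binary-classification measures against HPLC as the reference. A minimal sketch; the pass/fail vectors are invented, not the Lao PDR data:

```python
def diagnostic_performance(field_results, hplc_results):
    """Sensitivity, specificity and accuracy of a field assay against an
    HPLC reference. Each entry is True for a failing (substandard) sample,
    False for a passing one."""
    pairs = list(zip(field_results, hplc_results))
    tp = sum(f and h for f, h in pairs)            # both flag the sample
    tn = sum(not f and not h for f, h in pairs)    # both pass the sample
    fp = sum(f and not h for f, h in pairs)        # field assay over-flags
    fn = sum(not f and h for f, h in pairs)        # field assay misses
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(pairs)
    return sensitivity, specificity, accuracy
```

Running the same comparison per drug and dosage form is what produces the 0.78-1.00 accuracy range reported above.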
Rapid condition assessment of structural condition after a blast using state-space identification
NASA Astrophysics Data System (ADS)
Eskew, Edward; Jang, Shinae
2015-04-01
After a blast event, it is important to quickly quantify the structural damage for emergency operations. In order to improve the speed, accuracy, and efficiency of condition assessments after a blast, the authors have previously developed a methodology for rapid assessment of the structural condition of a building after a blast. The method involved determining a post-event equivalent stiffness matrix using vibration measurements and a finite element (FE) model. A structural model was built for the damaged structure based on the equivalent stiffness, and inter-story drifts from the blast were determined using numerical simulations, with forces determined from the blast parameters. The inter-story drifts were then compared to blast design conditions to assess the structure's damage. This method still involved engineering judgment in determining significant frequencies, which can lead to error, especially with noisy measurements. In an effort to improve accuracy and automate the process, this paper looks into a similar method of rapid condition assessment using subspace state-space identification. The accuracy of the method is tested using a benchmark structural model, as well as experimental testing. The blast damage assessments are validated using pressure-impulse (P-I) diagrams, which present the condition limits across blast parameters. Comparisons between P-I diagrams generated using the true system parameters and the equivalent parameters show the accuracy of the rapid condition-based blast assessments.
GEOSPATIAL DATA ACCURACY ASSESSMENT
The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...
New technology in dietary assessment: a review of digital methods in improving food record accuracy.
Stumbo, Phyllis J
2013-02-01
Methods for conducting dietary assessment in the United States date back to the early twentieth century. Methods of assessment encompassed dietary records, written and spoken dietary recalls, FFQ using pencil and paper and more recently computer and internet applications. Emerging innovations involve camera and mobile telephone technology to capture food and meal images. This paper describes six projects sponsored by the United States National Institutes of Health that use digital methods to improve food records and two mobile phone applications using crowdsourcing. The techniques under development show promise for improving accuracy of food records.
NASA Astrophysics Data System (ADS)
Zhang, Wangfei; Chen, Erxue; Li, Zengyuan; Feng, Qi; Zhao, Lei
2016-08-01
The DEM differential method is an effective and efficient way to assess forest tree height with polarimetric and interferometric technology; however, its assessment accuracy depends on the accuracy of the interferometric results and of the DEM. TerraSAR-X/TanDEM-X, which established the first spaceborne bistatic interferometer, can provide highly accurate cross-track interferometric images globally, without inherent accuracy limitations such as temporal decorrelation and atmospheric disturbance. These characteristics give TerraSAR-X/TanDEM-X great potential for global or regional tree height assessment, which has been constrained by temporal decorrelation in traditional repeat-pass interferometry. Currently, in China, it would be costly to collect highly accurate DEMs with Lidar. At the same time, it is also difficult to obtain truly representative ground survey samples to test and verify the assessment results. In this paper, we analyzed the feasibility of using TerraSAR-X/TanDEM-X data to assess forest tree height with currently free DEM data such as the ASTER GDEM and archived ground in-situ data such as forest management inventory (FMI) data. First, the accuracies of the ASTER GDEM and the FMI data were assessed against the DEM and canopy height model (CHM) extracted from Lidar data. The results show that the average elevation RMSE between the ASTER GDEM and the Lidar DEM is about 13 m, but the two are highly correlated, with a correlation coefficient of 0.96. With a linear regression model, we can compensate the ASTER GDEM and improve its accuracy to nearly that of the Lidar DEM at the same scale. The correlation coefficient between the FMI data and the CHM is 0.40; its accuracy can likewise be improved by a linear regression model within confidence intervals of 95%. After compensation of the ASTER GDEM and the FMI data, we calculated the tree height in the Mengla test site with the DEM differential method. The results showed that the corrected ASTER GDEM can effectively improve the assessment accuracy: the average assessment accuracies before and after correction are 0.73 and 0.76, and the RMSEs are 5.5 m and 4.4 m, respectively.
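The linear-regression compensation step described in this record is an ordinary least squares fit of the coarse DEM against the Lidar reference, after which the fitted line corrects the coarse elevations. A minimal sketch with invented elevations (not the Mengla data):

```python
def fit_line(x, y):
    """Ordinary least squares fit y = a*x + b, used here to compensate a
    coarse DEM against Lidar reference elevations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def rmse(pred, ref):
    """Root mean square error between predicted and reference values."""
    return (sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)) ** 0.5

# hypothetical elevations (m): coarse DEM vs. Lidar reference
aster = [510.0, 620.0, 700.0, 850.0]
lidar = [500.0, 612.0, 695.0, 846.0]
a, b = fit_line(aster, lidar)
corrected = [a * z + b for z in aster]  # compensated DEM heights
```

On this toy data the RMSE against the Lidar reference drops sharply after compensation, mirroring the 5.5 m to 4.4 m improvement reported above.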
ERIC Educational Resources Information Center
Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.
2016-01-01
Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…
A note on the accuracy of spectral method applied to nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang; Wong, Peter S.
1994-01-01
The Fourier spectral method can achieve exponential accuracy both at the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.
Teacher Compliance and Accuracy in State Assessment of Student Motor Skill Performance
ERIC Educational Resources Information Center
Hall, Tina J.; Hicklin, Lori K.; French, Karen E.
2015-01-01
Purpose: The purpose of this study was to investigate teacher compliance with state mandated assessment protocols and teacher accuracy in assessing student motor skill performance. Method: Middle school teachers (N = 116) submitted eighth grade student motor skill performance data from 318 physical education classes to a trained monitoring…
The Accuracy of Shock Capturing in Two Spatial Dimensions
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Casper, Jay H.
1997-01-01
An assessment of the accuracy of shock capturing schemes is made for two-dimensional steady flow around a cylindrical projectile. Both a linear fourth-order method and a nonlinear third-order method are used in this study. It is shown, contrary to conventional wisdom, that captured two-dimensional shocks are asymptotically first-order, regardless of the design accuracy of the numerical method. The practical implications of this finding are discussed in the context of the efficacy of high-order numerical methods for discontinuous flows.
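The first-order behaviour of captured shocks can be checked numerically by estimating the observed convergence order from errors measured on two grid resolutions. A small sketch of that standard estimate; the error values below are illustrative, not from the study:

```python
from math import log

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed convergence order from errors on two grids related by a
    refinement ratio. Near a captured shock this tends toward p ≈ 1,
    regardless of the scheme's design order."""
    return log(err_coarse / err_fine) / log(refinement)

# smooth-region errors halving grid spacing: fourth-order behaviour
p_smooth = observed_order(1.6e-4, 1.0e-5)
# errors near a captured shock: only first-order behaviour
p_shock = observed_order(2.0e-2, 1.0e-2)
```

This is the kind of diagnostic that reveals the gap, noted above, between a scheme's design accuracy and its asymptotic behaviour at two-dimensional captured shocks.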
Ohno, Yoshiharu; Nishio, Mizuho; Koyama, Hisanobu; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Seki, Shinichiro; Sugimura, Kazuro
2014-03-01
The purpose of this article is to prospectively and directly compare the capabilities of non-contrast-enhanced MR angiography (MRA), 4D contrast-enhanced MRA, and contrast-enhanced MDCT for assessing pulmonary vasculature in patients with non-small cell lung cancer (NSCLC) before surgical treatment. A total of 77 consecutive patients (41 men and 36 women; mean age, 71 years) with pathologically proven and clinically assessed stage I NSCLC underwent thin-section contrast-enhanced MDCT, non-contrast-enhanced and contrast-enhanced MRA, and surgical treatment. The capability for anomaly assessment of the three methods was independently evaluated by two reviewers using a 5-point visual scoring system, and final assessment for each patient was made by consensus of the two readers. Interobserver agreement for pulmonary arterial and venous assessment was evaluated with the kappa statistic. Then, sensitivity, specificity, and accuracy for the detection of anomalies were directly compared among the three methods by use of the McNemar test. Interobserver agreement for pulmonary artery and vein assessment was substantial or almost perfect (κ=0.72-0.86). For pulmonary arterial and venous variation assessment, there were no significant differences in sensitivity, specificity, and accuracy among non-contrast-enhanced MRA (pulmonary arteries: sensitivity, 77.1%; specificity, 97.4%; accuracy, 87.7%; pulmonary veins: sensitivity, 50%; specificity, 98.5%; accuracy, 93.2%), 4D contrast-enhanced MRA (pulmonary arteries: sensitivity, 77.1%; specificity, 97.4%; accuracy, 87.7%; pulmonary veins: sensitivity, 62.5%; specificity, 100.0%; accuracy, 95.9%), and thin-section contrast-enhanced MDCT (pulmonary arteries: sensitivity, 91.4%; specificity, 89.5%; accuracy, 90.4%; pulmonary veins: sensitivity, 50%; specificity, 100.0%; accuracy, 95.9%) (p>0.05). 
Pulmonary vascular assessment of patients with NSCLC before surgical resection by non-contrast-enhanced MRA can be considered equivalent to that by 4D contrast-enhanced MRA and contrast-enhanced MDCT.
Hegde, Rahul J; Khare, Sumedh Suhas; Saraf, Tanvi A; Trivedi, Sonal; Naidu, Sonal
2015-01-01
Dental formation is superior to eruption as a method of dental age (DA) assessment. Eruption is only a brief occurrence, whereas formation can be related to different chronologic age levels, thereby providing a precise index for determining DA. The study was designed to determine the nature of the inter-relationship between chronologic age and DA. Age estimation based on tooth formation was carried out by the Demirjian method, and the accuracy of the Demirjian method was also evaluated. The sample for the study consisted of 197 children from Navi Mumbai. A significant positive correlation was found between chronologic age and DA, that is, (r = 0.995), (P < 0.0001) for boys and (r = 0.995), (P < 0.0001) for girls. When age was estimated by the Demirjian method, the mean difference between true (chronologic) age and assessed DA was 2 days for boys and 37 days for girls. The Demirjian method showed high accuracy when applied to the Navi Mumbai (Maharashtra, India) population.
Evaluation of diagnostic accuracy in detecting ordered symptom statuses without a gold standard
Wang, Zheyu; Zhou, Xiao-Hua; Wang, Miqu
2011-01-01
Our research is motivated by 2 methodological problems in assessing the diagnostic accuracy of traditional Chinese medicine (TCM) doctors in detecting a particular symptom whose true status has an ordinal scale and is unknown: imperfect gold standard bias and ordinal-scale symptom status. In this paper, we proposed a nonparametric maximum likelihood method for estimating and comparing the accuracy of different doctors in detecting a particular symptom without a gold standard when the true symptom status has ordered multiple classes. In addition, we extended the concept of the area under the receiver operating characteristic curve to a hyper-dimensional overall accuracy for diagnostic accuracy, with alternative graphs for displaying a visual result. The simulation studies showed that the proposed method had good performance in terms of bias and mean squared error. Finally, we applied our method to our motivating example of assessing the diagnostic abilities of 5 TCM doctors in detecting symptoms related to Chills disease. PMID:21209155
EXhype: A tool for mineral classification using hyperspectral data
NASA Astrophysics Data System (ADS)
Adep, Ramesh Nityanand; shetty, Amba; Ramesh, H.
2017-02-01
Various supervised classification algorithms have been developed to classify earth surface features using hyperspectral data. Each algorithm is modelled on different human expertise. However, the performance of conventional algorithms is not satisfactory for mapping minerals in particular, in view of their distinctive spectral responses. This study introduces a new expert system named EXhype (Expert system for hyperspectral data classification) to map minerals. The system incorporates human expertise at several stages of its implementation: (i) to deal with intra-class variation; (ii) to identify absorption features; (iii) to discriminate spectra by considering absorption features, non-absorption features and full-spectrum comparison; and (iv) finally to take a decision based on learning and by emphasizing the most important features. It is developed using a knowledge base consisting of an Optimal Spectral Library, the Segmented Upper Hull method, the Spectral Angle Mapper (SAM) and an Artificial Neural Network. The performance of EXhype is compared with the traditional, most commonly used SAM algorithm using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data acquired over Cuprite, Nevada, USA. A virtual verification method is used to collect sample information for accuracy assessment. Further, a modified accuracy assessment method is used to obtain real users' accuracies in cases where only limited or desired classes are considered for classification. With the modified accuracy assessment method, SAM and EXhype yield overall accuracies of 60.35% and 90.75% and kappa coefficients of 0.51 and 0.89, respectively. It was also found that the virtual verification method allows the use of the often-preferred stratified random sampling method and eliminates the difficulties associated with it. The experimental results show that EXhype not only produces better accuracy than traditional SAM but can also correctly classify the minerals, and it is proficient in avoiding misclassification between target classes when applied to minerals.
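The Spectral Angle Mapper in EXhype's knowledge base can be sketched minimally as the angle between a pixel spectrum and a reference spectrum treated as vectors, where a small angle means a close match; the spectra below are illustrative values, not AVIRIS data.

```python
# Minimal sketch of the Spectral Angle Mapper (SAM) comparison.
# Reflectance values are hypothetical, for illustration only.
import math

def spectral_angle(pixel, reference):
    """Angle in radians between two spectra treated as vectors."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm = math.sqrt(sum(p * p for p in pixel)) * math.sqrt(sum(r * r for r in reference))
    # Clamp to guard against floating-point values just outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / norm)))

ref = [0.12, 0.25, 0.40, 0.33, 0.20]   # reference mineral spectrum
pix = [0.11, 0.26, 0.41, 0.32, 0.19]   # observed pixel spectrum

angle = spectral_angle(pix, ref)       # small angle -> pixel matches reference
```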
Adjusted Clinical Groups: Predictive Accuracy for Medicaid Enrollees in Three States
Adams, E. Kathleen; Bronstein, Janet M.; Raskind-Hood, Cheryl
2002-01-01
Actuarial split-sample methods were used to assess the predictive accuracy of adjusted clinical groups (ACGs) for Medicaid enrollees in Georgia, Mississippi (lagging in managed care penetration), and California. Accuracy for two non-random groups, the high-cost and those located in urban poor areas, was assessed. Measures for random groups were derived with and without short-term enrollees to assess the effect of turnover on predictive accuracy. ACGs improved predictive accuracy for high-cost conditions in all States, but did so only for those in Georgia's poorest urban areas. The higher and more unpredictable expenses of short-term enrollees moderated the predictive power of ACGs. This limitation was significant in Mississippi due, in part, to that State's very high proportion of short-term enrollees. PMID:12545598
Sutherland, Andrew M; Parrella, Michael P
2011-08-01
Western flower thrips, Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), is a major horticultural pest and an important vector of plant viruses in many parts of the world. Methods for assessing thrips population density for pest management decision support are often inaccurate or imprecise due to thrips' positive thigmotaxis, small size, and naturally aggregated populations. Two established methods, flower tapping and an alcohol wash, were compared with a novel method, plant desiccation coupled with passive trapping, using accuracy, precision and economic efficiency as comparative variables. Observed accuracy was statistically similar and low (37.8-53.6%) for all three methods. Flower tapping was the least expensive method, in terms of person-hours, whereas the alcohol wash method was the most expensive. Precision, expressed by relative variation, depended on location within the greenhouse, location on greenhouse benches, and the sampling week, but it was generally highest for the flower tapping and desiccation methods. Economic efficiency, expressed by relative net precision, was highest for the flower tapping method and lowest for the alcohol wash method. Advantages and disadvantages are discussed for all three methods used. If relative density assessment methods such as these can all be assumed to accurately estimate a constant proportion of absolute density, then high precision becomes the methodological goal in terms of measuring insect population density, decision making for pest management, and pesticide efficacy assessments.
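The precision and efficiency measures named in the abstract can be sketched using definitions common in sampling entomology, assuming relative variation RV = 100·SE/mean and relative net precision RNP = 100/(RV·cost); the counts and cost below are hypothetical.

```python
# Sketch of relative variation (precision) and relative net precision
# (economic efficiency) for a density-sampling method.
# Counts and per-sample cost are hypothetical, for illustration only.
from statistics import mean, stdev
from math import sqrt

counts = [12, 9, 15, 11, 13, 10]   # thrips counted per sample
cost_person_hours = 0.5            # labor cost of taking these samples

se = stdev(counts) / sqrt(len(counts))       # standard error of the mean
rv = 100 * se / mean(counts)                 # relative variation (%): lower = more precise
rnp = 100 / (rv * cost_person_hours)         # relative net precision: higher = more efficient
```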
Increasing accuracy in the assessment of motion sickness: A construct methodology
NASA Technical Reports Server (NTRS)
Stout, Cynthia S.; Cowings, Patricia S.
1993-01-01
The purpose is to introduce a new methodology that should improve the accuracy of the assessment of motion sickness. This construct methodology utilizes both subjective reports of motion sickness and objective measures of physiological correlates to assess motion sickness. Current techniques and methods are inadequate when used in the framework of a construct methodology. Current assessment techniques for diagnosing motion sickness and space motion sickness are reviewed, and attention is called to the problems with the current methods. Further, principles of psychophysiology whose application would probably resolve some of these problems are described in detail.
NASA Astrophysics Data System (ADS)
Ilieva, Tamara; Gekov, Svetoslav
2017-04-01
The Precise Point Positioning (PPP) method gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, owing to the precise satellite orbit and clock corrections developed and maintained by the International GNSS Service (IGS). The aim of our current research is an accuracy assessment of the PPP method applied to surveys and the tracking of moving objects in a GIS environment. The PPP data are collected using a software application we developed previously, which allows different sets of attribute data to be used for the measurements and their accuracy. The results from the PPP measurements are compared directly within the geospatial database to other sets of terrestrial data: measurements obtained by total stations and by real-time kinematic and static GNSS.
Development of Human Posture Simulation Method for Assessing Posture Angles and Spinal Loads
Lu, Ming-Lun; Waters, Thomas; Werren, Dwight
2015-01-01
Video-based posture analysis employing a biomechanical model is gaining popularity for ergonomic assessments. A human posture simulation method for estimating multiple body postural angles and spinal loads from a video record was developed to expedite ergonomic assessments. The method was evaluated in a repeated-measures study design with three trunk flexion levels, two lift asymmetry levels, three viewing angles and three trial repetitions as experimental factors. The study comprised two phases evaluating the accuracy of simulating one's own and other people's lifting postures via a proxy of a computer-generated humanoid. The mean accuracies of simulating self and humanoid postures were 12° and 15°, respectively. The repeatability of the method for the same lifting condition was excellent (~2°). The least simulation error was associated with the side viewing angle. The estimated back compressive force and moment, calculated by a three-dimensional biomechanical model, exhibited a range of 5% underestimation. The posture simulation method enables researchers to simultaneously quantify body posture angles and spinal loading variables with accuracy and precision comparable to on-screen posture-matching methods. PMID:26361435
Hsieh, Chi-Wen; Liu, Tzu-Chiang; Wang, Jui-Kai; Jong, Tai-Lang; Tiu, Chui-Mei
2011-08-01
The Tanner-Whitehouse III (TW3) method is popular for assessing children's bone age, but it is time-consuming in clinical settings; to simplify this, a grouped-TW algorithm (GTA) was developed. A total of 534 left-hand roentgenograms of subjects aged 2-15 years, including 270 training and 264 testing datasets, were evaluated by a senior pediatrician. Next, GTA was used to choose the appropriate candidate of radius, ulna, and short bones and to classify the bones into three groups by data mining. Group 1 was composed of the maturity pattern of the radius and the middle phalange of the third and fifth digits and three weights were obtained by data mining, yielding a result similar to that of TW3. Subsequently, new bone-age assessment tables were constructed for boys and girls by linear regression and fuzzy logic. In addition, the Bland-Altman plot was utilized to compare accuracy between the GTA, the Greulich-Pyle (GP), and the TW3 method. The relative accuracy between the GTA and the TW3 was 96.2% in boys and 95% in girls, with an error of 1 year, while that between the assessment results of the GP and TW3 was about 87%, with an error of 1 year. However, even if the three weights were not optimally processed, GTA yielded a marginal result with an accuracy of 78.2% in boys and 79.6% in girls. GTA can efficiently simplify the complexity of the TW3 method, while maintaining almost the same accuracy. The relative accuracy between the assessment results of GTA and GP can also be marginal. © 2011 The Authors. Pediatrics International © 2011 Japan Pediatric Society.
Ferreira, Ana Paula A; Póvoa, Luciana C; Zanier, José F C; Ferreira, Arthur S
2017-02-01
The aim of this study was to assess the thorax-rib static method (TRSM), a palpation method for locating the seventh cervical spinous process (C7SP), and to report clinical data on the accuracy of this method and that of the neck flexion-extension method (FEM), using radiography as the gold standard. A single-blinded, cross-sectional diagnostic accuracy study was conducted. One hundred and one participants from a primary-to-tertiary health care center (63 men, 56 ± 17 years of age) had their necks palpated using the FEM and the TRSM. A single examiner performed both the FEM and TRSM in a random sequence. Radiopaque markers were placed at each location with the aid of an ultraviolet lamp. Participants underwent chest radiography for assessment of the superimposed inner body structure, which was located by using either the FEM or the TRSM. Accuracy in identifying the C7SP was 18% and 33% (P = .013) with use of the FEM and the TRSM, respectively. The cumulative accuracy considering both caudal and cephalic directions (C7SP ± 1SP) increased to 58% and 81% (P = .001) with use of the FEM and the TRSM, respectively. Age had a significant effect on the accuracy of the FEM (P = .027) but not on the accuracy of the TRSM (P = .939). Sex, body mass, body height, and body mass index had no significant effects on the accuracy of either the FEM (P = .209 or higher) or the TRSM (P = .265 or higher). The TRSM located the C7SP more accurately than the FEM at any given level of anatomic detail, although both still fell short of acceptable accuracy for a clinical setting. Copyright © 2016. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul
2018-07-01
Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office-interpreted remotely sensed data was the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology, such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles using per-polygon approaches rather than per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches on sampling, response design and accuracy analysis, respectively. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
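The two summary statistics most often reported in the reviewed accuracy assessments, overall accuracy and Cohen's kappa, can be sketched from a confusion matrix; the matrix below is hypothetical.

```python
# Sketch: overall accuracy and Cohen's kappa from a confusion matrix
# (rows = mapped class, columns = reference class). Counts are hypothetical.
def overall_accuracy(cm):
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

def kappa(cm):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = sum(sum(row) for row in cm)
    po = overall_accuracy(cm)                          # observed agreement
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)    # chance agreement from marginals
             for i in range(len(cm))) / (n * n)
    return (po - pe) / (1 - pe)

cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 5, 40]]

oa, k = overall_accuracy(cm), kappa(cm)
```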
Blind image quality assessment based on aesthetic and statistical quality-aware features
NASA Astrophysics Data System (ADS)
Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi
2017-07-01
The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation between the objective scores of these methods and human perceptual scores is considered their performance metric. Human judgment of image quality implicitly includes many factors, such as aesthetics, semantics, context, and various types of visual distortions. The main idea of this paper is to use a host of features that are commonly employed in image aesthetics assessment in order to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of aesthetics image features with natural image statistics features derived from multiple domains. The proposed features have been used for augmenting five different state-of-the-art BIQA methods, which use natural scene statistics features. Experiments were performed on seven benchmark image quality databases. The experimental results showed a significant improvement in the accuracy of the methods.
Van Duren, B H; Pandit, H; Beard, D J; Murray, D W; Gill, H S
2009-04-01
The recent development in Oxford lateral unicompartmental knee arthroplasty (UKA) design requires a valid method of assessing its kinematics, in particular the use of single-plane fluoroscopy to reconstruct the 3D kinematics of the implanted knee. The method has been used previously to investigate the kinematics of UKA, but mostly it has been used in conjunction with total knee arthroplasty (TKA); no accuracy assessment of the method when used for UKA has previously been reported. In this study we performed computer simulation tests to investigate the effect that the different geometry of the unicompartmental implant has on the accuracy of the method in comparison with total knee implants. A phantom was built to perform in vitro tests to determine the accuracy of the method for UKA. The computer simulations suggested that the use of the method for UKA would prove less accurate than for TKAs. The rotational degrees of freedom for the femur showed the greatest disparity between the UKA and TKA. The phantom tests showed that the in-plane translations were accurate to <0.5 mm RMS, whereas the out-of-plane translations were less accurate at 4.1 mm RMS. The rotational accuracies were between 0.6° and 2.3°, which are less accurate than those reported in the literature for TKA; however, the method is sufficient for studying overall knee kinematics.
Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment
Guttmann, Aline; Li, Xinran; Feschet, Fabien; Gaudart, Jean; Demongeot, Jacques; Boire, Jean-Yves; Ouchchane, Lemlih
2015-01-01
In cluster detection of disease, the use of local cluster detection tests (CDTs) is common practice. These methods aim both at locating likely clusters and at testing for their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and therefore studies are rarely comparable. A global indicator of performance, which assesses both spatial accuracy and usual power, would facilitate the exploration of CDTs' behaviour and help between-study comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy, but only for one detected cluster. In a simulation study, performance is measured for many tests. From the TC, we here propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDT performance for both usual power and location accuracy. We evidence the properties of these two indicators and the superiority of the cumulated TC for assessing performance. We tested these indicators by conducting a systematic spatial assessment displayed through performance maps. PMID:26086911
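The Tanimoto coefficient underlying both proposed indicators can be sketched for a single detection, assuming clusters are represented as sets of spatial unit IDs (the IDs below are hypothetical); the averaged and cumulated TC then aggregate this quantity over many simulated tests.

```python
# Sketch: Tanimoto coefficient (intersection over union) between a
# detected cluster and the true cluster, each a set of spatial unit IDs.
def tanimoto(detected, true):
    """1.0 = perfect spatial match, 0.0 = no overlap."""
    detected, true = set(detected), set(true)
    union = detected | true
    return len(detected & true) / len(union) if union else 1.0

# Hypothetical example: units 3 and 4 are correctly located,
# units 1 and 2 are false alarms, unit 5 is missed.
tc = tanimoto({1, 2, 3, 4}, {3, 4, 5})
```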
Conceptual Scoring and Classification Accuracy of Vocabulary Testing in Bilingual Children
ERIC Educational Resources Information Center
Anaya, Jissel B.; Peña, Elizabeth D.; Bedore, Lisa M.
2018-01-01
Purpose: This study examined the effects of single-language and conceptual scoring on the vocabulary performance of bilingual children with and without specific language impairment. We assessed classification accuracy across 3 scoring methods. Method: Participants included Spanish-English bilingual children (N = 247) aged 5;1 (years;months) to…
DiLibero, Justin; O'Donoghue, Sharon C; DeSanto-Madeya, Susan; Felix, Janice; Ninobla, Annalyn; Woods, Allison
2016-01-01
Delirium occurs in up to 80% of intensive care unit (ICU) patients. Despite its prevalence in this population, there continue to be inaccuracies in delirium assessments. In the absence of accurate delirium assessments, delirium in critically ill ICU patients will remain unrecognized and will lead to negative clinical and organizational outcomes. The goal of this quality improvement project was to facilitate sustained improvement in the accuracy of delirium assessments among all ICU patients, including those who were sedated or agitated. A pretest-posttest design was used to evaluate the effectiveness of a program to improve the accuracy of delirium screenings among patients admitted to a medical ICU or coronary care unit. Two hundred thirty-six delirium assessment audits were completed during the baseline period and 535 during the postintervention period. Compliance with performing at least 1 delirium assessment every shift was 85% at baseline and improved to 99% during the postintervention period. Baseline assessment accuracy was 70.31% among all patients and 53.49% among sedated and agitated patients. Postintervention assessment accuracy improved to 95.51% for all patients and 89.23% among sedated and agitated patients. The results from this project suggest the effectiveness of the program in improving assessment accuracy among difficult-to-assess patients. Further research is needed to demonstrate the effectiveness of this model across other critical care units, patient populations, and organizations.
Chandran, Deepa T; Jagger, Daryll C; Jagger, Robert G; Barbour, Michele E
2010-01-01
Dental impression materials are used to create an inverse replica of the dental hard and soft tissues, and are used in processes such as the fabrication of crowns and bridges. The accuracy and dimensional stability of impression materials are of paramount importance to the accuracy of fit of the resultant prosthesis. Conventional methods for assessing the dimensional stability of impression materials are two-dimensional (2D), and assess shrinkage or expansion between selected fixed points on the impression. In this study, dimensional changes in four impression materials were assessed using an established 2D and an experimental three-dimensional (3D) technique. The former involved measurement of the distance between reference points on the impression; the latter a contact scanning method for producing a computer map of the impression surface showing localised expansion, contraction and warpage. Dimensional changes were assessed as a function of storage times and moisture contamination comparable to that found in clinical situations. It was evident that dimensional changes observed using the 3D technique were not always apparent using the 2D technique, and that the former offers certain advantages in terms of assessing dimensional accuracy and predictability of impression methods. There are, however, drawbacks associated with 3D techniques such as the more time-consuming nature of the data acquisition and difficulty in statistically analysing the data.
Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach
NASA Astrophysics Data System (ADS)
Xiao, T.
2012-12-01
One of the most important components in urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
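The trade-off the model formalizes can be sketched with the standard sample-size formula for estimating a map-accuracy proportion, paired with a simple linear cost model; the per-sample cost rates below are hypothetical, not from the study.

```python
# Sketch: accuracy-assessment sample size vs. sampling cost.
# Per-sample cost components are hypothetical, for illustration only.
from math import ceil

def sample_size(p=0.85, d=0.05, z=1.96):
    """Samples needed to estimate proportion p within +/- d at ~95% confidence."""
    return ceil(z * z * p * (1 - p) / (d * d))

def total_cost(n, travel=2.0, field=5.0, lab=1.5):
    """Linear cost model: transportation + field collection + lab analysis per sample."""
    return n * (travel + field + lab)

n = sample_size()      # tighter d or higher confidence -> larger n -> higher cost
cost = total_cost(n)
```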
Emotion Recognition Ability: A Multimethod-Multitrait Study.
ERIC Educational Resources Information Center
Gaines, Margie; And Others
A common paradigm in measuring the ability to recognize facial expressions of emotion is to present photographs of facial expressions and to ask subjects to identify the emotion. The Affect Blend Test (ABT) uses this method of assessment and is scored for accuracy on specific affects as well as total accuracy. Another method of measuring affect…
Shermeyer, Jacob S.; Haack, Barry N.
2015-01-01
Two forestry-change detection methods are described, compared, and contrasted for estimating deforestation and growth in threatened forests in southern Peru from 2000 to 2010. The methods used in this study rely on freely available data, including atmospherically corrected Landsat 5 Thematic Mapper and Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation continuous fields (VCF). The two methods include a conventional supervised signature extraction method and a unique self-calibrating method called MODIS VCF guided forest/nonforest (FNF) masking. The process chain for each of these methods includes a threshold classification of MODIS VCF, training data or signature extraction, signature evaluation, k-nearest neighbor classification, analyst-guided reclassification, and postclassification image differencing to generate forest change maps. Comparisons of all methods were based on an accuracy assessment using 500 validation pixels. Results of this accuracy assessment indicate that FNF masking had a 5% higher overall accuracy and was superior to conventional supervised classification when estimating forest change. Both methods succeeded in classifying persistently forested and nonforested areas, and both had limitations when classifying forest change.
Diagnostic validity of methods for assessment of swallowing sounds: a systematic review.
Taveira, Karinna Veríssimo Meira; Santos, Rosane Sampaio; Leão, Bianca Lopes Cavalcante de; Neto, José Stechman; Pernambuco, Leandro; Silva, Letícia Korb da; De Luca Canto, Graziela; Porporatti, André Luís
2018-02-03
Oropharyngeal dysphagia is a highly prevalent comorbidity in neurological patients and presents a serious health threat that may lead to aspiration pneumonia, with outcomes ranging from hospitalization to death. This assessment proposes a non-invasive, acoustic-based method to differentiate between individuals with and without signs of penetration and aspiration. This systematic review evaluated the diagnostic validity of different methods for the assessment of swallowing sounds, compared with the Videofluoroscopic Swallowing Study (VFSS), to detect oropharyngeal dysphagia. Articles whose primary objective was to evaluate the accuracy of swallowing sounds were searched in five electronic databases with no language or time limitations. Accuracy measurements described in the studies were transformed to construct receiver operating characteristic curves and forest plots with the aid of Review Manager v. 5.2 (The Nordic Cochrane Centre, Copenhagen, Denmark). The methodology of the selected studies was evaluated using the Quality Assessment Tool for Diagnostic Accuracy Studies-2. The final electronic search revealed 554 records, but only 3 studies met the inclusion criteria. The accuracy values (area under the curve) were 0.94 for the microphone, 0.80 for Doppler, and 0.60 for the stethoscope. Based on limited evidence of low methodological quality (few included studies, with small sample sizes), among the index tests found in this systematic review, Doppler showed excellent diagnostic accuracy for the discrimination of swallowing sounds, the microphone showed good accuracy in discriminating the swallowing sounds of dysphagic patients, and the stethoscope performed best as a screening test. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Superior accuracy of model-based radiostereometric analysis for measurement of polyethylene wear
Stilling, M.; Kold, S.; de Raedt, S.; Andersen, N. T.; Rahbek, O.; Søballe, K.
2012-01-01
Objectives The accuracy and precision of two new methods of model-based radiostereometric analysis (RSA) were hypothesised to be superior to a plain radiograph method in the assessment of polyethylene (PE) wear. Methods A phantom device was constructed to simulate three-dimensional (3D) PE wear. Images were obtained consecutively for each simulated wear position for each modality. Three commercially available packages were evaluated: model-based RSA using laser-scanned cup models (MB-RSA), model-based RSA using computer-generated elementary geometrical shape models (EGS-RSA), and PolyWare. Precision (95% repeatability limits) and accuracy (Root Mean Square Errors) for two-dimensional (2D) and 3D wear measurements were assessed. Results The precision for 2D wear measures was 0.078 mm, 0.102 mm, and 0.076 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. For the 3D wear measures the precision was 0.185 mm, 0.189 mm, and 0.244 mm for EGS-RSA, MB-RSA, and PolyWare respectively. Repeatability was similar for all methods within the same dimension, when compared between 2D and 3D (all p > 0.28). For the 2D RSA methods, accuracy was below 0.055 mm and at least 0.335 mm for PolyWare. For 3D measurements, accuracy was 0.1 mm, 0.2 mm, and 0.3 mm for EGS-RSA, MB-RSA and PolyWare respectively. PolyWare was less accurate compared with RSA methods (p = 0.036). No difference was observed between the RSA methods (p = 0.10). Conclusions For all methods, precision and accuracy were better in 2D, with RSA methods being superior in accuracy. Although less accurate and precise, 3D RSA defines the clinically relevant wear pattern (multidirectional). PolyWare is a good and low-cost alternative to RSA, despite being less accurate and requiring a larger sample size. PMID:23610688
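The two summary measures used throughout this abstract, accuracy as root-mean-square error against the phantom's known wear and precision as 95% repeatability limits from repeated measurements, can be sketched as follows; all wear values are hypothetical.

```python
# Sketch: accuracy (RMSE vs. known phantom wear) and precision
# (95% repeatability limits). All measurements are hypothetical.
from statistics import mean, stdev
from math import sqrt

truth    = [0.10, 0.20, 0.30, 0.40]   # imposed phantom wear (mm)
measured = [0.12, 0.19, 0.33, 0.38]   # method's readings (mm)

# Accuracy: root-mean-square error against the known truth.
rmse = sqrt(mean([(m - t) ** 2 for m, t in zip(measured, truth)]))

# Precision: 95% repeatability limits from paired repeated measurements.
repeats_a = [0.12, 0.19, 0.33, 0.38]
repeats_b = [0.11, 0.21, 0.31, 0.39]
diffs = [a - b for a, b in zip(repeats_a, repeats_b)]
repeatability_95 = 1.96 * stdev(diffs)   # +/- limits around the mean difference
```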
Sawchuk, Dena; Currie, Kris; Vich, Manuel Lagravere; Palomo, Juan Martin
2016-01-01
Objective To evaluate the accuracy and reliability of the diagnostic tools available for assessing maxillary transverse deficiencies. Methods An electronic search of three databases was performed from their date of establishment to April 2015, with manual searching of reference lists of relevant articles. Articles were considered for inclusion if they reported the accuracy or reliability of a diagnostic method or evaluation technique for maxillary transverse dimensions in mixed or permanent dentitions. Risk of bias was assessed in the included articles, using the Quality Assessment of Diagnostic Accuracy Studies tool-2. Results Nine articles were selected. The studies were heterogeneous, with moderate to low methodological quality, and all had a high risk of bias. Four suggested that the use of arch width prediction indices with dental cast measurements is unreliable for use in diagnosis. Frontal cephalograms derived from cone-beam computed tomography (CBCT) images were reportedly more reliable for assessing intermaxillary transverse discrepancies than posteroanterior cephalograms. Two studies proposed new three-dimensional transverse analyses with CBCT images that were reportedly reliable, but have not been validated for clinical sensitivity or specificity. No studies reported sensitivity, specificity, positive or negative predictive values or likelihood ratios, or ROC curves of the methods for the diagnosis of transverse deficiencies. Conclusions Current evidence does not enable solid conclusions to be drawn, owing to a lack of reliable high quality diagnostic studies evaluating maxillary transverse deficiencies. CBCT images are reportedly more reliable for diagnosis, but further validation is required to confirm CBCT's accuracy and diagnostic superiority. PMID:27668196
English Verb Accuracy of Bilingual Cantonese-English Preschoolers
ERIC Educational Resources Information Center
Rezzonico, Stefano; Goldberg, Ahuva; Milburn, Trelani; Belletti, Adriana; Girolametto, Luigi
2017-01-01
Purpose: Knowledge of verb development in typically developing bilingual preschoolers may inform clinicians about verb accuracy rates during the 1st 2 years of English instruction. This study aimed to investigate tensed verb accuracy in 2 assessment contexts in 4- and 5-year-old Cantonese-English bilingual preschoolers. Method: The sample included…
NASA Technical Reports Server (NTRS)
Smith, Marilyn J.; Lim, Joon W.; vanderWall, Berend G.; Baeder, James D.; Biedron, Robert T.; Boyd, D. Douglas, Jr.; Jayaraman, Buvana; Jung, Sung N.; Min, Byung-Young
2012-01-01
Over the past decade, there have been significant advancements in the accuracy of rotor aeroelastic simulations through the application of computational fluid dynamics methods coupled with computational structural dynamics codes (CFD/CSD). The HART II International Workshop database, which includes descent operating conditions with strong blade-vortex interactions (BVI), provides a unique opportunity to assess the ability of CFD/CSD to capture these physics. In addition to a baseline case with BVI, two additional cases with 3/rev higher harmonic blade root pitch control (HHC) are available for comparison. The collaboration during the workshop permits assessment of structured, unstructured, and hybrid overset CFD/CSD methods from across the globe on the dynamics, aerodynamics, and wake structure. Evaluation of the plethora of CFD/CSD methods indicates that the most important numerical variables associated with most accurately capturing BVI are a two-equation or detached eddy simulation (DES)-based turbulence model and a sufficiently small time step. An appropriate trade-off between grid fidelity and spatial accuracy schemes also appears to be pertinent for capturing BVI on the advancing rotor disk. Overall, the CFD/CSD methods generally fall within the same accuracy; cost-effective hybrid Navier-Stokes/Lagrangian wake methods provide accuracies within 50% of the full CFD/CSD methods for most parameters of interest, except for those highly influenced by torsion. The importance of modeling the fuselage is observed, and other computational requirements are discussed.
Teaching acute care nurses cognitive assessment using LOCFAS: what's the best method?
Flannery, J; Land, K
2001-02-01
The Levels of Cognitive Functioning Assessment Scale (LOCFAS) is a behavioral checklist used by nurses in the acute care setting to assess the level of cognitive functioning in severely brain-injured patients in the early post-trauma period. Previous research studies have supported the reliability and validity of LOCFAS. For LOCFAS to become a more firmly established method of cognitive assessment, nurses must become familiar with and proficient in the use of this instrument. The purpose of this study was to find the most effective method of instruction by comparing three methods: a self-directed manual, a teaching video, and a classroom presentation. Videotaped vignettes of actual brain-injured patients were presented at the end of each training session, and participants were required to categorize these videotaped patients by using LOCFAS. High levels of reliability were observed for both the self-directed manual group and the teaching video group, but an overall lower level of reliability was observed for the classroom presentation group. Examination of the accuracy of overall LOCFAS ratings revealed a significant difference for instructional groups; the accuracy of the classroom presentation group was significantly lower than that of either the self-directed manual group or the teaching video group. The three instructional groups also differed on the average accuracy of ratings of the individual behaviors; the accuracy of the classroom presentation group was significantly lower than that of the teaching video group, whereas the self-directed manual group fell in between. Nurses also rated the instructional methods across a number of evaluative dimensions on a 5-point Likert-type scale. Evaluative statements ranged from average to good, with no significant differences among instructional methods.
Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data
ERIC Educational Resources Information Center
Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy
2016-01-01
Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…
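Traditional PA, as summarized above, retains factors whose observed eigenvalues exceed those obtained from random data of the same dimensions. A minimal sketch follows; the two-factor simulated dataset and all settings are invented for illustration, and this shows only the traditional variant, not the R-PA revision the authors propose.

```python
import numpy as np

def parallel_analysis(data, n_sims=200, quantile=95, seed=0):
    """Traditional PA: retain leading components whose sample
    correlation-matrix eigenvalues exceed the chosen quantile of
    eigenvalues from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sim_eig = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))
        sim_eig[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    threshold = np.percentile(sim_eig, quantile, axis=0)
    # Count leading eigenvalues above the random-data threshold
    keep = 0
    for o, t in zip(obs_eig, threshold):
        if o > t:
            keep += 1
        else:
            break
    return keep

# Simulated two-factor data: variables 0-2 load on one factor, 3-5 on another
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
loadings = np.zeros((2, 6))
loadings[0, :3] = 0.9
loadings[1, 3:] = 0.9
x = f @ loadings + 0.4 * rng.standard_normal((500, 6))
print(parallel_analysis(x))  # retains 2 factors for this strong structure
```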
NASA Astrophysics Data System (ADS)
Krzyżek, Robert; Przewięźlikowska, Anna
2017-12-01
When surveys of corners of building structures are carried out, surveyors frequently use a compilation of two surveying methods. The first involves determining several corners with reference to a geodetic control using classical methods of surveying field details. The second relates to the remaining corner points of the structure, which are determined in sequence by distance-distance intersection, using control linear values of the wall faces of the building, the so-called tie distances. This paper assesses the accuracy of coordinates of corner points of a building structure determined by distance-distance intersection, based on the corners that had previously been determined by surveys tied to a geodetic control. It should be noted, however, that this method of surveying the corners of building structures from linear measures relies on details of the first-order accuracy, while the regulations explicitly allow such measurement only for details of the second- and third-order accuracy. A question therefore arises: is this legal provision unfounded, or are surveyors acting not only against the applicable standards but also without due diligence when performing such surveys? This study provides answers to that question. Its main purpose was to verify whether the method actually used in practice for surveying building structures yields the required accuracy of the coordinates of the points being determined, or whether it should be strictly forbidden. The results of the conducted studies clearly demonstrate that the problem is considerably more complex.
Eventually, however, it may be concluded that the location of building corners determined using a combination of the two surveying methods will meet the requirements of the regulation (MIA, 2011), subject to compliance with the relevant baseline criteria presented in this study. Observance of the proposed boundary conditions would allow surveyors to routinely survey building structures from tie distances while maintaining the applicable accuracy criteria, and would allow the resulting surveying documentation to be included in the national geodetic and cartographic documentation center database pursuant to the legal bases.
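The distance-distance intersection used in the second surveying method is a standard circle-circle intersection from two known corners and two tie distances. A minimal sketch follows; the coordinates and distances are hypothetical, and in practice any error in the two base corners propagates into the intersected point, which is the accuracy concern the study examines.

```python
import math

def distance_distance_intersection(p1, p2, d1, d2):
    """Return the two candidate positions lying at distance d1 from p1 and
    d2 from p2 (circle-circle intersection); the surveyor selects the
    candidate on the correct side of the p1-p2 baseline from the field sketch."""
    x1, y1 = p1
    x2, y2 = p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d > d1 + d2 or d < abs(d1 - d2):
        raise ValueError("tie distances are inconsistent with the control points")
    a = (d1**2 - d2**2 + d**2) / (2 * d)   # offset along the baseline
    h = math.sqrt(max(d1**2 - a**2, 0.0))  # offset perpendicular to the baseline
    xm = x1 + a * (x2 - x1) / d
    ym = y1 + a * (y2 - y1) / d
    return ((xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
            (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d))

# Hypothetical example: known corners at (0, 0) and (4, 0), tie distances 5 m each
cands = distance_distance_intersection((0, 0), (4, 0), 5.0, 5.0)
# candidates are (2, -sqrt(21)) and (2, +sqrt(21))
```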
ERIC Educational Resources Information Center
Zwitserlood, Rob; van Weerdenburg, Marjolijn; Verhoeven, Ludo; Wijnen, Frank
2015-01-01
Purpose: The purpose of this study was to identify the development of morphosyntactic accuracy and grammatical complexity in Dutch school-age children with specific language impairment (SLI). Method: Morphosyntactic accuracy, the use of dummy auxiliaries, and complex syntax were assessed using a narrative task that was administered at three points…
Real-time teleophthalmology versus face-to-face consultation: A systematic review.
Tan, Irene J; Dobson, Lucy P; Bartnik, Stephen; Muir, Josephine; Turner, Angus W
2017-08-01
Introduction Advances in imaging capabilities and the evolution of real-time teleophthalmology have the potential to provide increased coverage to areas with limited ophthalmology services. However, there is limited research assessing the diagnostic accuracy of real-time teleophthalmology against face-to-face consultation. This systematic review aims to determine whether real-time teleophthalmology provides accuracy comparable to face-to-face consultation for the diagnosis of common eye health conditions. Methods A search of PubMed, Embase, Medline and Cochrane databases and manual citation review was conducted on 6 February and 7 April 2016. Included studies involved real-time telemedicine in the field of ophthalmology or optometry, and assessed diagnostic accuracy against gold-standard face-to-face consultation. The revised quality assessment of diagnostic accuracy studies (QUADAS-2) tool assessed risk of bias. Results Twelve studies were included, with participants ranging from four to 89 years old. A broad range of conditions was assessed, including corneal and retinal pathologies, strabismus, oculoplastics and post-operative review. Quality assessment identified a high or unclear risk of bias in patient selection (75%) due to undisclosed recruitment processes. The index test showed high risk of bias in the included studies, due to the varied interpretation and conduct of real-time teleophthalmology methods. Reference standard risk was overall low (75%), as was the risk due to flow and timing (75%). Conclusion In terms of diagnostic accuracy, real-time teleophthalmology was considered superior to face-to-face consultation in one study and comparable in six studies. Store-and-forward image transmission coupled with real-time videoconferencing is a suitable alternative for overcoming poor internet transmission speeds.
Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding
2013-01-01
Background In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. Results The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. Conclusions The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. 
Methods 5 and 7 were the fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies. PMID:24314298
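The indirect estimation described above, dividing predictive ability by the square root of heritability, can be illustrated with a small simulation in which the true breeding values are known. The sample size, heritability, and noise level below are arbitrary choices for the sketch, not values from the study.

```python
import numpy as np

# Check that predictive ability r(g_hat, y) / sqrt(h2) approximates the
# predictive accuracy r(g_hat, g), where g is the (normally unobservable)
# true breeding value and y the observed phenotype.
rng = np.random.default_rng(0)
n = 20000
h2 = 0.4
g = rng.standard_normal(n) * np.sqrt(h2)           # true breeding values
y = g + rng.standard_normal(n) * np.sqrt(1 - h2)   # phenotypes, Var(y) = 1
g_hat = g + rng.standard_normal(n) * 0.5           # imperfect genomic predictions

ability = np.corrcoef(g_hat, y)[0, 1]              # predictive ability
indirect = ability / np.sqrt(h2)                   # indirect accuracy estimate
direct = np.corrcoef(g_hat, g)[0, 1]               # true accuracy (known here)
# the two estimates agree closely at this sample size
```

In real data only `indirect` is computable, and its quality depends on how well h2 itself is estimated, which is exactly why the study compares estimators.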
Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision
Ender, Andreas; Mehl, Albert
2014-01-01
Reference scanners are used in dental medicine to verify a wide range of procedures. The main interest is in verifying impression methods, as they serve as the basis for dental restorations. The current limitation of many reference scanners is their lack of accuracy when scanning large objects such as full dental arches, or their limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to highest local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Different model materials were also verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many fields of dental research. The different magnification levels, combined with high local and general accuracy, can be used to assess changes from single teeth or restorations up to full arches. PMID:24836007
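The trueness/precision distinction above can be sketched numerically: trueness compares test scans against the reference, precision compares the test scans with each other. This is a deliberate simplification on already-registered point sets with invented deviation magnitudes; a real dental study first aligns scan meshes and computes signed surface distances.

```python
import numpy as np

def trueness_precision(scans, reference):
    """trueness  = mean deviation of each scan from the reference
    precision = mean pairwise deviation among the scans themselves
    (hypothetical simplification on pre-registered point sets)."""
    scans = np.asarray(scans, dtype=float)
    trueness = float(np.mean([np.mean(np.abs(s - reference)) for s in scans]))
    pair = [np.mean(np.abs(scans[i] - scans[j]))
            for i in range(len(scans)) for j in range(i + 1, len(scans))]
    return trueness, float(np.mean(pair))

# Five simulated scans (values in mm) with a 5 µm systematic offset plus noise
rng = np.random.default_rng(0)
ref = np.zeros(1000)
scans = [ref + 0.005 + rng.normal(0, 0.002, 1000) for _ in range(5)]
t, p = trueness_precision(scans, ref)
# the shared 5 µm offset inflates trueness but cancels out of precision,
# which is why an accurate impression can still show poor trueness
```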
A Mobile Phone App for Dietary Intake Assessment in Adolescents: An Evaluation Study
Svensson, Åsa
2015-01-01
Background There is a great need for dietary assessment methods that suit the adolescent lifestyle and give valid intake data. Objective To develop a mobile phone app and evaluate its ability to assess energy intake (EI) and total energy expenditure (TEE) compared with objectively measured TEE. Furthermore, to investigate the impact of factors on reporting accuracy of EI, and to compare dietary intake with a Web-based method. Methods Participants 14 to 16 years of age were recruited from year nine in schools in Gothenburg, Sweden. In total, 81 adolescents used the mobile phone app over 1 to 6 days. TEE was measured with the SenseWear Armband (SWA) during the same or proximate days. Individual factors were assessed with a questionnaire. A total of 15 participants also recorded dietary intake using a Web-based method. Results The mobile phone app underestimated EI by 29% on a group level (P<.001) compared to TEE measured with the SWA, and there was no significant correlation between EI and TEE. Accuracy of EI relative to TEE increased with a weekend day in the record (P=.007) and lower BMI z-score (P=.001). TEE assessed with the mobile phone app was 1.19 times the value of TEE measured by the SWA on a group level (P<.001), and the correlation between the methods was .75 (P<.001). Analysis of physical activity levels (PAL) from the mobile phone app stratified by gender showed that accuracy of the mobile phone app was higher among boys. EI, nutrients, and food groups assessed with the mobile phone app and Web-based method among 15 participants were not significantly different and several were significantly correlated, but strong conclusions cannot be drawn due to the low number of participants. Conclusions By using a mobile phone dietary assessment app, on average 71% of adolescents’ EI was captured. The accuracy of reported dietary intake was higher with lower BMI z-score and if a weekend day was included in the record. 
The daily question in the mobile phone app about physical activity could accurately rank the participants’ TEE. PMID:26534783
Weis, Jared A.; Flint, Katelyn M.; Sanchez, Violeta; Yankeelov, Thomas E.; Miga, Michael I.
2015-01-01
Cancer progression has been linked to mechanics. Therefore, there has been recent interest in developing noninvasive imaging tools for cancer assessment that are sensitive to changes in tissue mechanical properties. We have developed one such method, modality independent elastography (MIE), that estimates the relative elastic properties of tissue by fitting anatomical image volumes acquired before and after the application of compression to biomechanical models. The aim of this study was to assess the accuracy and reproducibility of the method using phantoms and a murine breast cancer model. Magnetic resonance imaging data were acquired, and the MIE method was used to estimate relative volumetric stiffness. Accuracy was assessed using phantom data by comparing to gold-standard mechanical testing of elasticity ratios. Validation error was <12%. Reproducibility analysis was performed on animal data, and within-subject coefficients of variation ranged from 2 to 13% at the bulk level and 32% at the voxel level. To our knowledge, this is the first study to assess the reproducibility of an elasticity imaging metric in a preclinical cancer model. Our results suggest that the MIE method can reproducibly generate accurate estimates of the relative mechanical stiffness and provide guidance on the degree of change needed in order to declare biological changes rather than experimental error in future therapeutic studies. PMID:26158120
Achamrah, Najate; Jésus, Pierre; Grigioni, Sébastien; Rimbert, Agnès; Petit, André; Déchelotte, Pierre; Folope, Vanessa; Coëffier, Moïse
2018-01-01
Predictive equations have been developed specifically for obese patients to estimate resting energy expenditure (REE). Body composition (BC) assessment is needed for some of these equations. We assessed the impact of BC methods on the accuracy of specific predictive equations developed for obese patients. REE was measured (mREE) by indirect calorimetry and BC assessed by bioelectrical impedance analysis (BIA) and dual-energy X-ray absorptiometry (DXA). mREE and percentages of prediction accuracy (±10% of mREE) were compared. Predictive equations were studied in 2588 obese patients. Mean mREE was 1788 ± 6.3 kcal/24 h. Only the Müller (BIA) and Harris & Benedict (HB) equations provided REE estimates that did not differ from mREE. The Huang, Müller, Horie-Waitzberg, and HB formulas provided accurate predictions in more than 60% of cases. The use of BIA provided better predictions of REE than DXA for the Huang and Müller equations; conversely, the Horie-Waitzberg and Lazzer formulas achieved higher accuracy using DXA. Accuracy decreased when the equations were applied to patients with BMI ≥ 40, except for the Horie-Waitzberg and Lazzer (DXA) formulas. The Müller equations based on BIA provided markedly better REE prediction accuracy than equations not based on BC. The value of BC for improving the accuracy of REE predictive equations in obese patients should be confirmed. PMID:29320432
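The ±10% accuracy criterion used in such REE studies reduces to a simple calculation; the kcal values below are hypothetical, chosen only to illustrate it.

```python
def prediction_accuracy(predicted, measured, tolerance=0.10):
    """Percentage of patients whose predicted REE falls within the given
    tolerance (default ±10%) of measured REE."""
    hits = sum(abs(p - m) / m <= tolerance for p, m in zip(predicted, measured))
    return 100.0 * hits / len(measured)

# Hypothetical example values (kcal/24 h)
measured  = [1800, 2100, 1650, 1950]
predicted = [1750, 2400, 1700, 1500]
print(prediction_accuracy(predicted, measured))  # → 50.0
```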
Ceriotti, Ferruccio; Kaczmarek, Ewa; Guerra, Elena; Mastrantonio, Fabrizio; Lucarelli, Fausto; Valgimigli, Francesco; Mosca, Andrea
2015-03-01
Point-of-care (POC) testing devices for monitoring glucose and ketones can play a key role in the management of dysglycemia in hospitalized diabetes patients. The accuracy of glucose devices can be influenced by biochemical changes that commonly occur in critically ill hospital patients and by the medication prescribed. Little is known about the influence of these factors on ketone POC measurements. The aim of this study was to assess the analytical performance of POC hospital whole-blood glucose and ketone meters and the extent to which known glucose interference factors affect the accuracy of ketone results. StatStrip glucose/ketone, Optium FreeStyle glucose/ketone, and Accu-Chek Performa glucose meters were assessed and the results compared to a central laboratory reference method. The analytical evaluation was performed according to Clinical and Laboratory Standards Institute (CLSI) protocols for precision, linearity, method comparison, and interference. The interferences assessed included acetoacetate, acetaminophen, ascorbic acid, galactose, maltose, uric acid, and sodium. The accuracies of both Optium ketone and glucose measurements were significantly influenced by varying levels of hematocrit and ascorbic acid. StatStrip ketone and glucose measurements were unaffected by the interferences tested, with the exception of ascorbic acid, which reduced the higher-level ketone value. The accuracy of Accu-Chek glucose measurements was affected by hematocrit, by ascorbic acid, and significantly by galactose. The method correlation assessment indicated differences between the meters in compliance with ISO 15197 and CLSI 12-A3 performance criteria. Combined POC glucose/ketone methods are now available. The use of these devices in a hospital setting requires careful consideration with regard to selecting instruments not sensitive to hematocrit variation and the presence of interfering substances. © 2014 Diabetes Technology Society.
Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model
NASA Astrophysics Data System (ADS)
Ahlgren, K.; Li, X.
2017-12-01
Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. 
The accuracy of each geoid model provides an additional metric to assess the performance of each bias estimation method. The geoid model accuracies are assessed using the two GSVS lines and GPS-leveling data across the United States.
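A drastically simplified sketch of per-survey bias estimation: here the bias is a single additive constant recovered by least squares against reference values at the survey points, whereas the NGS approach described above separates the comparison by wavelength band (satellite model, airborne data, forward-modeled topography) using wavelets. All numbers are simulated.

```python
import numpy as np

def estimate_survey_bias(survey_gravity, reference_gravity):
    """Estimate a single additive bias for one gravity survey as the
    least-squares misfit against reference gravity evaluated at the same
    points; the mean difference is the LS solution for a constant offset."""
    d = np.asarray(survey_gravity, dtype=float) - np.asarray(reference_gravity, dtype=float)
    return float(np.mean(d))

rng = np.random.default_rng(0)
truth = rng.normal(0, 20, 500)                          # mGal anomalies at survey points
biased_survey = truth + 3.0 + rng.normal(0, 1.0, 500)   # simulated 3 mGal systematic bias
bias = estimate_survey_bias(biased_survey, truth)
corrected = biased_survey - bias
# the recovered bias is close to the simulated 3 mGal offset
```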
Hans-Erik Andersen; Tobey Clarkin; Ken Winterberger; Jacob Strunk
2009-01-01
The accuracy of recreational- and survey-grade global positioning system (GPS) receivers was evaluated across a range of forest conditions in the Tanana Valley of interior Alaska. High-accuracy check points, established using high-order instruments and closed-traverse surveying methods, were then used to evaluate the accuracy of positions acquired in different forest...
A methodology for probabilistic remaining creep life assessment of gas turbine components
NASA Astrophysics Data System (ADS)
Liu, Zhimin
Certain gas turbine components operate in harsh environments, and various mechanisms may lead to component failure. It is common practice to use remaining life assessments to help operators schedule maintenance and component replacements. Creep is a major failure mechanism affecting the remaining life assessment, and the resulting life consumption of a component is highly sensitive to variations in material stresses and temperatures, which fluctuate significantly with changes in real operating conditions. In addition, variations in material properties and geometry will change the creep life consumption rate. The traditional method used for remaining life assessment assumes a set of fixed operating conditions at all times and fails to capture the variations in operating conditions. This translates into a significant loss of accuracy and unnecessarily high maintenance and replacement costs. A new method that captures these variations and improves the prediction accuracy of remaining life is developed. First, a metamodel is built to approximate the relationship between variables (operating conditions, material properties, geometry, etc.) and a creep response. The metamodel is developed using Response Surface Method/Design of Experiments methodology. Design of Experiments is an efficient sampling method, and for each sampling point a set of finite element analyses is used to compute the corresponding response value. Next, a low-order polynomial Response Surface Equation (RSE) is used to fit these values. Four techniques are suggested to dramatically reduce computational effort and to increase the accuracy of the RSE: a smart meshing technique, automatic geometry parameterization, a screening test, and regional RSE refinement. The RSEs, along with a probabilistic method and a life fraction model, are used to compute current damage accumulation and remaining life.
By capturing the variations mentioned above, the new method results in much better accuracy than that available using the traditional method. After further development and proper verification the method should bring significant savings by reducing the number of inspections and deferring part replacement.
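The RSE step described above can be sketched as an ordinary least-squares fit of a low-order polynomial to samples from a designed experiment. In the real method each sample would come from a set of finite element analyses; here the design variables and generating coefficients are invented for illustration.

```python
import numpy as np

def quadratic_design_matrix(x1, x2):
    """Design matrix for a full quadratic RSE in two normalized variables."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Factorial design over normalized stress (x1) and temperature (x2),
# with a hypothetical creep response standing in for FEA results
x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = x1.ravel(), x2.ravel()
response = 2.0 + 0.5 * x1 - 1.2 * x2 + 0.3 * x1 * x2 + 0.8 * x1**2

# Least-squares fit of the RSE coefficients to the designed samples
coeffs, *_ = np.linalg.lstsq(quadratic_design_matrix(x1, x2), response, rcond=None)
# the cheap RSE can then replace the expensive solver when propagating
# operating-condition variations through a probabilistic life model
```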
Assessing Smart Phones for Generating Life-space Indicators.
Wan, Neng; Qu, Wenyu; Whittington, Jackie; Witbrodt, Bradley C; Henderson, Mary Pearl; Goulding, Evan H; Schenk, A Katrin; Bonasera, Stephen J; Lin, Ge
2013-04-01
Life-space is a promising method for estimating older adults' functional status. However, traditional life-space measures are costly and time consuming because they often rely on active subject participation. This study assesses the feasibility of using the global positioning system (GPS) function of smart phones to generate life-space indicators. We first evaluated the location accuracy of smart phone collected GPS points versus those acquired by a commercial GPS unit. We then assessed the specificity of the smart phone processed life-space information against the traditional diary method. Our results suggested comparable location accuracy between the smart phone and the standard GPS unit in most outdoor situations. In addition, the smart phone method revealed more comprehensive life-space information than the diary method, which leads to higher and more consistent life-space scores. We conclude that the smart phone method is more reliable than traditional methods for measuring life-space. Further improvements will be required to develop a robust application of this method that is suitable for health-related practices.
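One simple life-space indicator derivable from smart phone GPS points is the maximum excursion from home. The coordinates below are hypothetical, and published life-space scores use richer zone-based scoring than this sketch.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def max_life_space_extent(home, gps_points):
    """Farthest recorded excursion from home over the observation period."""
    return max(haversine_m(home[0], home[1], lat, lon) for lat, lon in gps_points)

home = (41.2565, -95.9345)  # hypothetical home coordinates
track = [(41.2570, -95.9350), (41.2800, -95.9000), (41.2565, -95.9345)]
extent = max_life_space_extent(home, track)
# the largest excursion in this toy track is a few kilometres from home
```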
Sys, Gwen; Eykens, Hannelore; Lenaerts, Gerlinde; Shumelinsky, Felix; Robbrecht, Cedric; Poffyn, Bart
2017-06-01
This study analyses the accuracy of three-dimensional pre-operative planning and patient-specific guides for orthopaedic osteotomies. To this end, patient-specific guides were compared to the classical freehand method in an experimental setup with saw bones in two phases. In the first phase, the effect of guide design and of oscillating versus reciprocating saws was analysed. The difference between target and performed cuts was quantified by the average distance deviation and the average angular deviations in the sagittal and coronal planes for the different osteotomies. The results indicated that for one model osteotomy, the use of guides resulted in a more accurate cut than the freehand technique. Reciprocating saws and slot guides improved accuracy in all planes, while oscillating saws and open guides led to larger deviations from the planned cut. In the second phase, the accuracy of transfer of the planning to the surgical field with slot guides and a reciprocating saw was assessed and compared to the classical planning and freehand cutting method. The pre-operative plan was transferred with high accuracy. Three-dimensional-printed patient-specific guides improve the accuracy of osteotomies and bony resections in an experimental setup compared to conventional freehand methods. The improved accuracy is related to (1) a detailed and qualitative pre-operative plan and (2) an accurate transfer of the planning to the operation room with patient-specific guides, through accurate guidance of the surgical tools to perform the desired cuts.
Local staging and assessment of colon cancer with 1.5-T magnetic resonance imaging
Blake, Helena; Jeyadevan, Nelesh; Abulafi, Muti; Swift, Ian; Toomey, Paul; Brown, Gina
2016-01-01
Objective: The aim of this study was to assess the accuracy of 1.5-T MRI in the pre-operative local T and N staging of colon cancer and identification of extramural vascular invasion (EMVI). Methods: Between 2010 and 2012, 60 patients with adenocarcinoma of the colon were prospectively recruited at 2 centres. 55 patients were included for final analysis. Patients received pre-operative 1.5-T MRI with high-resolution T2 weighted, gadolinium-enhanced T1 weighted and diffusion-weighted images. These were blindly assessed by two expert radiologists. Accuracy of the T-stage, N-stage and EMVI assessment was evaluated using post-operative histology as the gold standard. Results: Results are reported for two readers. Identification of T3 disease demonstrated an accuracy of 71% and 51%, sensitivity of 74% and 42% and specificity of 74% and 83%. Identification of N1 disease demonstrated an accuracy of 57% for both readers, sensitivity of 26% and 35% and specificity of 81% and 74%. Identification of EMVI demonstrated an accuracy of 74% and 69%, sensitivity 63% and 26% and specificity 80% and 91%. Conclusion: 1.5-T MRI achieved a moderate accuracy in the local evaluation of colon cancer, but cannot be recommended to replace CT on the basis of this study. Advances in knowledge: This study confirms that MRI is a viable alternative to CT for the local assessment of colon cancer, but this study does not reproduce the very high accuracy reported in the only other study to assess the accuracy of MRI in colon cancer staging. PMID:27226219
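The reported accuracy, sensitivity, and specificity follow from a standard 2×2 comparison against the histology gold standard. The counts below are hypothetical, chosen only to illustrate the calculation; they are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity, and specificity from a 2x2 table comparing
    an imaging read (e.g. MRI T-stage) against the gold standard."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)   # true positives among gold-standard positives
    specificity = tn / (tn + fp)   # true negatives among gold-standard negatives
    return accuracy, sensitivity, specificity

# Hypothetical counts for one reader's T3 assessment in 55 patients
acc, sens, spec = diagnostic_metrics(tp=25, fp=6, fn=9, tn=15)
print(f"accuracy={acc:.0%} sensitivity={sens:.0%} specificity={spec:.0%}")
```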
Boomerang: A method for recursive reclassification.
Devlin, Sean M; Ostrovnaya, Irina; Gönen, Mithat
2016-09-01
While there are many validated prognostic classifiers used in practice, their accuracy is often modest, and heterogeneity in clinical outcomes exists in one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogeneous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted toward this reclassification goal. In this article, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm, called Boomerang, first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges to a prespecified number of risk groups. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios in which the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia data set where a new refined classifier incorporates four new mutations into the existing three-category classifier and is validated on an independent data set. © 2016, The International Biometric Society.
NASA Technical Reports Server (NTRS)
Card, Don H.; Strong, Laurence L.
1989-01-01
An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.
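The percent-correct figures with confidence intervals described above can be sketched from a confusion matrix as follows. This is a generic illustration using a simple normal-approximation interval; it does not reproduce the paper's design-based plurality-sampling estimator, and the matrix values are hypothetical.

```python
import math

def accuracy_with_ci(confusion, z=1.96):
    """Overall and per-category percent correct from a confusion matrix
    (rows = map class, columns = reference class), with approximate
    normal-theory confidence intervals."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    per_class = []
    for i, row in enumerate(confusion):
        n_i = sum(row)
        p_i = row[i] / n_i if n_i else 0.0
        h_i = z * math.sqrt(p_i * (1 - p_i) / n_i) if n_i else 0.0
        per_class.append((p_i, p_i - h_i, p_i + h_i))
    return (p, p - half, p + half), per_class

# toy 3-class confusion matrix (hypothetical counts)
cm = [[50, 5, 5], [10, 30, 10], [5, 5, 40]]
(overall, ci_lo, ci_hi), per_class = accuracy_with_ci(cm)  # overall = 0.75
```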
In Search of Grid Converged Solutions
NASA Technical Reports Server (NTRS)
Lockard, David P.
2010-01-01
Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
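The design-order check described above is commonly done with the observed order of accuracy computed from solutions on three systematically refined grids. A minimal sketch of that standard Richardson-style estimate (generic, not the paper's specific truncation-error evaluation):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from a scalar solution quantity on
    three grids with constant refinement ratio r (h, h/r, h/r^2).
    For a scheme of design order p, the value approaches p as h -> 0."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# second-order error model f = 1 + h^2 at h = 0.4, 0.2, 0.1
p = observed_order(1.16, 1.04, 1.01, 2.0)  # p = 2.0
```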
Yokoo, Takeshi; Bydder, Mark; Hamilton, Gavin; Middleton, Michael S.; Gamst, Anthony C.; Wolfson, Tanya; Hassanein, Tarek; Patton, Heather M.; Lavine, Joel E.; Schwimmer, Jeffrey B.; Sirlin, Claude B.
2009-01-01
Purpose: To assess the accuracy of four fat quantification methods at low-flip-angle multiecho gradient-recalled-echo (GRE) magnetic resonance (MR) imaging in nonalcoholic fatty liver disease (NAFLD) by using MR spectroscopy as the reference standard. Materials and Methods: In this institutional review board–approved, HIPAA-compliant prospective study, 110 subjects (29 with biopsy-confirmed NAFLD, 50 overweight and at risk for NAFLD, and 31 healthy volunteers) (mean age, 32.6 years ± 15.6 [standard deviation]; range, 8–66 years) gave informed consent and underwent MR spectroscopy and GRE MR imaging of the liver. Spectroscopy involved a long repetition time (to suppress T1 effects) and multiple echo times (to estimate T2 effects); the reference fat fraction (FF) was calculated from T2-corrected fat and water spectral peak areas. Imaging involved a low flip angle (to suppress T1 effects) and multiple echo times (to estimate T2* effects); imaging FF was calculated by using four analysis methods of progressive complexity: dual echo, triple echo, multiecho, and multiinterference. All methods except dual echo corrected for T2* effects. The multiinterference method corrected for multiple spectral interference effects of fat. For each method, the accuracy for diagnosis of fatty liver, as defined with a spectroscopic threshold, was assessed by estimating sensitivity and specificity; fat-grading accuracy was assessed by comparing imaging and spectroscopic FF values by using linear regression. Results: Dual-echo, triple-echo, multiecho, and multiinterference methods had a sensitivity of 0.817, 0.967, 0.950, and 0.983 and a specificity of 1.000, 0.880, 1.000, and 0.880, respectively. On the basis of regression slope and intercept, the multiinterference (slope, 0.98; intercept, 0.91%) method had high fat-grading accuracy without statistically significant error (P > .05). 
Dual-echo (slope, 0.98; intercept, −2.90%), triple-echo (slope, 0.94; intercept, 1.42%), and multiecho (slope, 0.85; intercept, −0.15%) methods had statistically significant error (P < .05). Conclusion: Relaxation- and interference-corrected fat quantification at low-flip-angle multiecho GRE MR imaging provides high diagnostic and fat-grading accuracy in NAFLD. © RSNA, 2009 PMID:19221054
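The simplest of the four analysis methods, the dual-echo estimate, can be sketched with the basic two-point Dixon signal model, in which the in-phase signal is water plus fat and the opposed-phase signal is water minus fat. This is a minimal illustration that deliberately omits the T2* and spectral-interference corrections the triple-echo, multiecho, and multiinterference methods add; the function name is an assumption.

```python
def dual_echo_fat_fraction(s_in_phase, s_opposed_phase):
    """Fat fraction from two-point Dixon signal magnitudes, assuming
    S_ip = W + F and S_op = W - F with no T1, T2*, or multi-peak fat
    corrections (the known limitation of the dual-echo method)."""
    fat = (s_in_phase - s_opposed_phase) / 2.0
    water = (s_in_phase + s_opposed_phase) / 2.0
    return fat / (fat + water)

ff = dual_echo_fat_fraction(100.0, 60.0)  # water 80, fat 20 -> 0.2
```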
Assessment of forward head posture in females: observational and photogrammetry methods.
Salahzadeh, Zahra; Maroufi, Nader; Ahmadi, Amir; Behtash, Hamid; Razmjoo, Arash; Gohari, Mahmoud; Parnianpour, Mohamad
2014-01-01
There are different methods to assess forward head posture (FHP), but the accuracy and discrimination ability of these methods are not clear. Here, we compare three postural angles for FHP assessment and also study the discrimination accuracy of three photogrammetric methods in differentiating groups categorized by the observational method. Seventy-eight healthy female participants (23 ± 2.63 years) were classified into three groups - moderate-severe FHP, slight FHP and non-FHP - based on observational postural assessment rules. Three photogrammetric methods - craniovertebral angle, head tilt angle and head position angle - were applied to measure FHP objectively. A one-way ANOVA test showed a significant difference in craniovertebral angle among the three groups (P < 0.05, F = 83.07). There was no significant difference in head tilt angle or head position angle among the groups. According to linear discriminant analysis (LDA) results, the canonical discriminant function (Wilks' Lambda) was 0.311 for craniovertebral angle, with 79.5% of cross-validated grouped cases correctly classified. Our results showed that the craniovertebral angle method may discriminate females with moderate-severe FHP from those without FHP more accurately than the head position angle and head tilt angle. The photogrammetric method had excellent inter- and intra-rater reliability for assessing head and cervical posture.
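The craniovertebral angle is conventionally measured as the angle between the horizontal line through the C7 spinous process and the line from C7 to the tragus of the ear. A minimal photogrammetric sketch under that convention (landmark coordinates and function name are illustrative assumptions):

```python
import math

def craniovertebral_angle(c7, tragus):
    """Craniovertebral angle in degrees from 2-D photograph coordinates
    (x increasing forward, y increasing upward), i.e. the angle between
    the horizontal through C7 and the C7-tragus line. Smaller angles
    indicate greater forward head posture."""
    dx = tragus[0] - c7[0]
    dy = tragus[1] - c7[1]
    return math.degrees(math.atan2(dy, dx))

cva = craniovertebral_angle((0.0, 0.0), (1.0, 1.0))  # 45 degrees
```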
EGASP: the human ENCODE Genome Annotation Assessment Project
Guigó, Roderic; Flicek, Paul; Abril, Josep F; Reymond, Alexandre; Lagarde, Julien; Denoeud, France; Antonarakis, Stylianos; Ashburner, Michael; Bajic, Vladimir B; Birney, Ewan; Castelo, Robert; Eyras, Eduardo; Ucla, Catherine; Gingeras, Thomas R; Harrow, Jennifer; Hubbard, Tim; Lewis, Suzanna E; Reese, Martin G
2006-01-01
Background We present the results of EGASP, a community experiment to assess the state-of-the-art in genome annotation within the ENCODE regions, which span 1% of the human genome sequence. The experiment had two major goals: the assessment of the accuracy of computational methods to predict protein coding genes; and the overall assessment of the completeness of the current human genome annotations as represented in the ENCODE regions. For the computational prediction assessment, eighteen groups contributed gene predictions. We evaluated these submissions against each other based on a 'reference set' of annotations generated as part of the GENCODE project. These annotations were not available to the prediction groups prior to the submission deadline, so that their predictions were blind and an external advisory committee could perform a fair assessment. Results The best methods had at least one gene transcript correctly predicted for close to 70% of the annotated genes. Nevertheless, the multiple transcript accuracy, taking into account alternative splicing, reached only approximately 40% to 50% accuracy. At the coding nucleotide level, the best programs reached an accuracy of 90% in both sensitivity and specificity. Programs relying on mRNA and protein sequences were the most accurate in reproducing the manually curated annotations. Experimental validation shows that only a very small percentage (3.2%) of the selected 221 computationally predicted exons outside of the existing annotation could be verified. Conclusion This is the first such experiment in human DNA, and we have followed the standards established in a similar experiment, GASP1, in Drosophila melanogaster. We believe the results presented here contribute to the value of ongoing large-scale annotation projects and should guide further experimental methods when being scaled up to the entire human genome sequence. PMID:16925836
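The coding-nucleotide accuracy figures above follow the convention of gene-prediction benchmarks, where "specificity" denotes the fraction of predicted coding nucleotides that are annotated as coding (what other fields call precision), rather than the true-negative rate. A minimal sketch of both measures (hypothetical counts):

```python
def nucleotide_accuracy(tp, fp, fn):
    """Sensitivity and 'specificity' at the nucleotide level, using the
    gene-finding convention: sensitivity = TP/(TP+FN) over annotated
    coding bases; specificity = TP/(TP+FP) over predicted coding bases."""
    sensitivity = tp / (tp + fn)
    specificity = tp / (tp + fp)
    return sensitivity, specificity

sn, sp = nucleotide_accuracy(tp=90, fp=10, fn=10)  # (0.9, 0.9)
```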
Handique, Bijoy K; Khan, Siraj A; Mahanta, J; Sudhakar, S
2014-09-01
Japanese encephalitis (JE) is one of the dreaded mosquito-borne viral diseases, mostly prevalent in south Asian countries including India. Early warning of the disease in terms of disease intensity is crucial for taking adequate and appropriate intervention measures. The present study was carried out in Dibrugarh district in the state of Assam, located in the northeastern region of India, to assess the accuracy of selected forecasting methods based on historical morbidity patterns of JE incidence during the past 22 years (1985-2006). Four forecasting methods - seasonal average (SA), seasonal adjustment with the last three observations (SAT), a modified method adjusting for long-term and cyclic trend (MSAT), and the autoregressive integrated moving average (ARIMA) - were employed. The methods were validated over five consecutive years during 2007-2012 and the accuracy of each was assessed. The method utilising seasonal adjustment with long-term and cyclic trend (MSAT) emerged as the best of the four and outperformed even the statistically more advanced ARIMA method. The peak of disease incidence could effectively be predicted with all the methods, but there were significant variations in the magnitude of forecast errors among them. As expected, variation in forecasts at the primary health centre (PHC) level was wide compared with district-level forecasts. The study showed that the adopted forecasting techniques could reasonably forecast the intensity of JE cases at PHC level without considering external variables.
The results indicate that the understanding of long-term and cyclic trend of the disease intensity will improve the accuracy of the forecasts, but there is a need for making the forecast models more robust to explain sudden variation in the disease intensity with detail analysis of parasite and host population dynamics.
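The baseline among the four methods, the seasonal average, simply predicts each season's value as the mean of that season across past years. A generic monthly sketch (the paper's exact seasonal indexing and any trend adjustments are not reproduced here):

```python
def seasonal_average_forecast(history, period=12):
    """Seasonal average (SA) forecast: for each of the `period` seasons,
    predict the mean of that season's values over all complete past
    years. `history` is a flat series whose length is a multiple of
    `period`, ordered oldest first."""
    years = len(history) // period
    return [sum(history[y * period + m] for y in range(years)) / years
            for m in range(period)]

# two years of a 3-season toy series
forecast = seasonal_average_forecast([1.0, 2.0, 3.0, 3.0, 4.0, 5.0], period=3)
```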
Survey methods for assessing land cover map accuracy
Nusser, S.M.; Klaas, E.E.
2003-01-01
The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.
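A design-based accuracy estimate from a stratified sample weights each stratum's observed accuracy by its share of the map. The sketch below is deliberately simplified to single-stage stratified sampling; the paper's two-stage cluster design would additionally account for within-cluster correlation.

```python
def stratified_accuracy(strata):
    """Design-based overall accuracy from a stratified sample.
    `strata` is a list of (N_h, n_h, correct_h): N_h map pixels in
    stratum h, n_h pixels sampled from it, correct_h of those sampled
    pixels correctly classified."""
    N = sum(N_h for N_h, _, _ in strata)
    return sum((N_h / N) * (correct_h / n_h) for N_h, n_h, correct_h in strata)

# hypothetical two-stratum map: 90% accurate small stratum, 60% large one
acc = stratified_accuracy([(100, 10, 9), (300, 10, 6)])  # 0.675
```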
Bazzi, Ali M; Rabaan, Ali A; El Edaily, Zeyad; John, Susan; Fawarah, Mahmoud M; Al-Tawfiq, Jaffar A
Matrix-assisted laser desorption-ionization time-of-flight (MALDI-TOF) mass spectrometry facilitates rapid and accurate identification of pathogens, which is critical for sepsis patients. In this study, we assessed the accuracy of identification of both Gram-negative and Gram-positive bacteria, except for Streptococcus viridans, using four rapid blood culture methods with Vitek MALDI-TOF-MS. We compared our proposed lysis centrifugation followed by washing and 30% acetic acid treatment method (method 2) with two other lysis centrifugation methods (washing and 30% formic acid treatment, method 1; 100% ethanol treatment, method 3), and with picking colonies from 90 to 180 min subculture plates (method 4). Methods 1 and 2 identified all organisms down to species level with 100% accuracy, except for Streptococcus viridans, Streptococcus pyogenes, Enterobacter cloacae and Proteus vulgaris; the latter two were identified to genus level with 100% accuracy. Each method exhibited excellent accuracy and precision in terms of identification to genus level, with certain limitations. Copyright © 2016 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.
A Mobile Phone App for Dietary Intake Assessment in Adolescents: An Evaluation Study.
Svensson, Åsa; Larsson, Christel
2015-11-03
There is a great need for dietary assessment methods that suit the adolescent lifestyle and give valid intake data. To develop a mobile phone app and evaluate its ability to assess energy intake (EI) and total energy expenditure (TEE) compared with objectively measured TEE. Furthermore, to investigate the impact of factors on reporting accuracy of EI, and to compare dietary intake with a Web-based method. Participants 14 to 16 years of age were recruited from year nine in schools in Gothenburg, Sweden. In total, 81 adolescents used the mobile phone app over 1 to 6 days. TEE was measured with the SenseWear Armband (SWA) during the same or proximate days. Individual factors were assessed with a questionnaire. A total of 15 participants also recorded dietary intake using a Web-based method. The mobile phone app underestimated EI by 29% on a group level (P<.001) compared to TEE measured with the SWA, and there was no significant correlation between EI and TEE. Accuracy of EI relative to TEE increased with a weekend day in the record (P=.007) and lower BMI z-score (P=.001). TEE assessed with the mobile phone app was 1.19 times the value of TEE measured by the SWA on a group level (P<.001), and the correlation between the methods was .75 (P<.001). Analysis of physical activity levels (PAL) from the mobile phone app stratified by gender showed that accuracy of the mobile phone app was higher among boys. EI, nutrients, and food groups assessed with the mobile phone app and Web-based method among 15 participants were not significantly different and several were significantly correlated, but strong conclusions cannot be drawn due to the low number of participants. By using a mobile phone dietary assessment app, on average 71% of adolescents' EI was captured. The accuracy of reported dietary intake was higher with lower BMI z-score and if a weekend day was included in the record. 
The daily question in the mobile phone app about physical activity could accurately rank the participants' TEE.
Voskoboev, Nikolay V; Cambern, Sarah J; Hanley, Matthew M; Giesen, Callen D; Schilling, Jason J; Jannetto, Paul J; Lieske, John C; Block, Darci R
2015-11-01
Validation of tests performed on body fluids other than blood or urine can be challenging due to the lack of a reference method to confirm accuracy. The aim of this study was to evaluate alternate assessments of accuracy that laboratories can rely on to validate body fluid tests in the absence of a reference method, using the example of sodium (Na⁺), potassium (K⁺), and magnesium (Mg²⁺) testing in stool fluid. Validations of fecal Na⁺, K⁺, and Mg²⁺ were performed on the Roche cobas 6000 c501 (Roche Diagnostics) using residual stool specimens submitted for clinical testing. Spiked recovery, mixing studies, and serial dilutions were performed, and % recovery of each analyte was calculated to assess accuracy. Results were confirmed by comparison to a reference method (ICP-OES, PerkinElmer). Mean recoveries for fecal electrolytes were Na⁺ upon spiking=92%, mixing=104%, and dilution=105%; K⁺ upon spiking=94%, mixing=96%, and dilution=100%; and Mg²⁺ upon spiking=93%, mixing=98%, and dilution=100%. When autoanalyzer results were compared to reference ICP-OES results, Na⁺ had a slope=0.94, intercept=4.1, and R²=0.99; K⁺ had a slope=0.99, intercept=0.7, and R²=0.99; and Mg²⁺ had a slope=0.91, intercept=-4.6, and R²=0.91. Calculated osmotic gaps using both methods were highly correlated, with slope=0.95, intercept=4.5, and R²=0.97. Acid pretreatment increased magnesium recovery from a subset of clinical specimens. A combination of mixing, spiking, and dilution recovery experiments is an acceptable surrogate for assessing accuracy in body fluid validations in the absence of a reference method. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
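The spiking and dilution recovery checks above reduce to simple percent-recovery arithmetic. A minimal sketch with standard method-validation formulas (function names and example values are illustrative, not from the study):

```python
def percent_recovery(measured_spiked, measured_base, amount_added):
    """Spiked-recovery accuracy: the percentage of a known added amount
    that the method actually measures on top of the baseline result."""
    return 100.0 * (measured_spiked - measured_base) / amount_added

def dilution_recovery(measured_diluted, measured_neat, dilution_factor):
    """Serial-dilution accuracy: measured result of a diluted specimen
    as a percentage of the value expected from the neat measurement."""
    expected = measured_neat / dilution_factor
    return 100.0 * measured_diluted / expected

r_spike = percent_recovery(150.0, 100.0, 50.0)   # 100.0 (%)
r_dil = dilution_recovery(24.0, 100.0, 4.0)      # 96.0 (%)
```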
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. 
A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
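The backward elimination procedure examined above can be sketched generically: repeatedly drop the variable whose removal scores best, and keep the best subset seen along the path. This is a schematic, model-agnostic version, not the paper's random-forest analysis; as the paper cautions, the `score` function should be cross-validated with the elimination loop inside the validation folds, since out-of-bag estimates computed inside the loop are upwardly biased.

```python
def backward_eliminate(variables, score, min_vars=1):
    """Greedy backward elimination over a variable set. `score` maps a
    tuple of variable names to an accuracy estimate. Returns the
    best-scoring subset encountered along the elimination path."""
    current = tuple(variables)
    best_set, best_score = current, score(current)
    while len(current) > min_vars:
        # drop the single variable whose removal yields the best score
        current = max((tuple(v for v in current if v != drop)
                       for drop in current), key=score)
        s = score(current)
        if s >= best_score:
            best_set, best_score = current, s
    return best_set, best_score

# toy score: 'a' and 'b' are informative; extra variables cost a little
score = lambda vs: (sum(v in ('a', 'b') for v in vs)
                    - 0.1 * sum(v not in ('a', 'b') for v in vs))
subset, _ = backward_eliminate(['a', 'b', 'c', 'd'], score)  # ('a', 'b')
```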
Gillian, Jeffrey K.; Karl, Jason W.; Elaksher, Ahmed; Duniway, Michael C.
2017-01-01
Structure-from-motion (SfM) photogrammetry from unmanned aerial system (UAS) imagery is an emerging tool for repeat topographic surveying of dryland erosion. These methods are particularly appealing due to the ability to cover large landscapes compared to field methods, at reduced costs and finer spatial resolution compared to airborne laser scanning. Accuracy and precision of high-resolution digital terrain models (DTMs) derived from UAS imagery have been explored in many studies, typically by comparing image coordinates to surveyed check points or LiDAR datasets. In addition to traditional check points, this study compared 5 cm resolution DTMs derived from fixed-wing UAS imagery with a traditional ground-based method of measuring soil surface change called erosion bridges. We assessed accuracy by comparing the elevation values between DTMs and erosion bridges along thirty topographic transects, each 6.1 m long. Comparisons occurred at two points in time (June 2014, February 2015), which enabled us to assess vertical accuracy with 3314 data points and vertical precision (i.e., repeatability) with 1657 data points. We found strong vertical agreement (accuracy) between the methods (RMSE 2.9 and 3.2 cm in June 2014 and February 2015, respectively) and high vertical precision for the DTMs (RMSE 2.8 cm). Our results from comparing SfM-generated DTMs to check points, together with the strong agreement with erosion bridge measurements, suggest that repeat UAS imagery and SfM processing could replace erosion bridges for a more synoptic landscape assessment of shifting soil surfaces in some studies. However, while collecting the UAS imagery and generating the SfM DTMs for this study was faster than collecting erosion bridge measurements, technical challenges related to the need for ground control networks and image processing requirements must be addressed before this technique could be applied effectively to large landscapes.
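The vertical agreement figures above are root-mean-square errors over paired elevation samples. A minimal sketch of that computation (function name and values are illustrative):

```python
import math

def vertical_rmse(dtm_elev, bridge_elev):
    """Vertical RMSE between DTM elevations and co-located erosion
    bridge measurements (paired points, same units)."""
    diffs = [a - b for a, b in zip(dtm_elev, bridge_elev)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rmse = vertical_rmse([0.0, 0.0], [3.0, 4.0])  # sqrt((9 + 16) / 2)
```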
Mizinga, Kemmy M; Burnett, Thomas J; Brunelle, Sharon L; Wallace, Michael A; Coleman, Mark R
2018-05-01
The U.S. Department of Agriculture, Food Safety Inspection Service regulatory method for monensin, Chemistry Laboratory Guidebook CLG-MON, is a semiquantitative bioautographic method adopted in 1991. Official Method of Analysis (OMA) 2011.24, a modern quantitative and confirmatory LC-tandem MS method, uses no chlorinated solvents and has several advantages, including ease of use, ready availability of reagents and materials, shorter run time, and higher throughput than CLG-MON. Therefore, a bridging study was conducted to support the replacement of method CLG-MON with OMA 2011.24 for regulatory use. Using fortified bovine tissue samples, CLG-MON yielded accuracies of 80-120% in 44 of the 56 samples tested (one sample had no result, six samples had accuracies of >120%, and five samples had accuracies of 40-160%), but the semiquantitative nature of CLG-MON prevented assessment of precision, whereas OMA 2011.24 had accuracies of 88-110% and RSDr of 0.00-15.6%. Incurred residue results corroborated these results, demonstrating improved accuracy (83.3-114%) and good precision (RSDr of 2.6-20.5%) for OMA 2011.24 compared with CLG-MON (accuracy generally within 80-150%, with exceptions). Furthermore, χ² analysis revealed no statistically significant difference between the two methods. Thus, the microbiological activity of monensin correlated with the determination of monensin A in bovine tissues, and OMA 2011.24 provided improved accuracy and precision over CLG-MON.
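The accuracy and repeatability figures above follow standard method-validation definitions: accuracy as percent of the fortified level, and RSDr as the repeatability relative standard deviation. A minimal sketch (replicate values are hypothetical):

```python
import statistics

def accuracy_percent(measured_mean, fortified_level):
    """Accuracy as the measured mean expressed as a percentage of the
    known fortified (spiked) concentration."""
    return 100.0 * measured_mean / fortified_level

def rsd_r(replicates):
    """Repeatability relative standard deviation (RSDr), in percent:
    sample standard deviation of replicates divided by their mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

acc = accuracy_percent(88.0, 100.0)   # 88.0 (%)
rsd = rsd_r([9.0, 10.0, 11.0])        # 10.0 (%)
```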
Assessing genomic selection prediction accuracy in a dynamic barley breeding
USDA-ARS?s Scientific Manuscript database
Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...
A Class of Reconstructed Discontinuous Galerkin Methods in Computational Fluid Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong Luo; Yidong Xia; Robert Nourgaliev
2011-05-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, contain both classical finite volume and standard DG methods as two special cases, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness.
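The least-squares reconstruction idea can be illustrated in its simplest form: fitting a cell's solution gradient from neighboring cell data by minimizing the mismatch of a local Taylor expansion. This one-dimensional gradient sketch is far simpler than the paper's quadratic in-cell reconstruction of a linear DG solution, but it shows the same least-squares principle.

```python
def least_squares_gradient(xc, uc, neighbors_x, neighbors_u):
    """Least-squares reconstruction of the solution gradient g in a
    cell centered at xc with average uc: minimize, over neighbors n,
    sum of (uc + g*(x_n - xc) - u_n)^2 with respect to g."""
    num = sum((xn - xc) * (un - uc) for xn, un in zip(neighbors_x, neighbors_u))
    den = sum((xn - xc) ** 2 for xn in neighbors_x)
    return num / den

# exact linear field u = 2x is reconstructed exactly
g = least_squares_gradient(0.0, 0.0, [-1.0, 1.0], [-2.0, 2.0])  # 2.0
```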
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Abhijit; Voter, Arthur
2009-01-01
We develop a variation of the temperature accelerated dynamics (TAD) method, called the p-TAD method, that efficiently generates an on-the-fly kinetic Monte Carlo (KMC) process catalog with control over the accuracy of the catalog. It is assumed that transition state theory is valid. The p-TAD method guarantees that processes relevant at the timescales of interest to the simulation are present in the catalog with a chosen confidence. A confidence measure associated with the process catalog is derived. The dynamics is then studied using the process catalog with the KMC method. The effective accuracy of a p-TAD calculation is derived for the case in which a KMC catalog is reused for conditions different from those for which the catalog was originally generated. Different KMC catalog generation strategies that exploit the features of the p-TAD method and ensure higher accuracy and/or computational efficiency are presented. The accuracy and the computational requirements of the p-TAD method are assessed. Comparisons to the original TAD method are made. As an example, we study dynamics in sub-monolayer Ag/Cu(110) at the time scale of seconds using the p-TAD method. It is demonstrated that the p-TAD method overcomes several challenges plaguing the conventional KMC method.
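Once a process catalog with rates is available, the dynamics are evolved by standard rejection-free KMC: select a process with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. The sketch below is that generic KMC step, not the p-TAD catalog-generation machinery itself.

```python
import math
import random

def kmc_step(rates, rng=random):
    """One rejection-free kinetic Monte Carlo step over a process
    catalog. `rates` lists the rate constants of the catalog's
    processes. Returns (chosen process index, time increment)."""
    total = sum(rates)
    r = rng.random() * total       # uniform point on the rate line
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    # exponential waiting time with mean 1/total (1 - u avoids log(0))
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

event, dt = kmc_step([1.0, 2.0, 3.0])
```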
Belgiu, Mariana; Drăguţ, Lucian
2014-10-01
Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. 
The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea that classification is dependent on segmentation is challenged by our unexpected results, casting doubt on the value of pursuing 'optimal segmentation'. Our results rather suggest that as long as under-segmentation remains at acceptable levels, imperfections in segmentation can be ruled out, so that a high level of classification accuracy can still be achieved.
Fifolt, Matthew; Blackburn, Justin; Rhodes, David J; Gillespie, Shemeka; Bennett, Aleena; Wolff, Paul; Rucks, Andrew
Historically, double data entry (DDE) has been considered the criterion standard for minimizing data entry errors. However, previous studies considered data entry alternatives through the limited lens of data accuracy. This study supplies information regarding data accuracy, operational efficiency, and cost for DDE and Optical Mark Recognition (OMR) for processing the Consumer Assessment of Healthcare Providers and Systems 5.0 survey. To assess data accuracy, we compared error rates for DDE and OMR by dividing the number of surveys that were arbitrated by the total number of surveys processed for each method. To assess operational efficiency, we tallied the cost of data entry for DDE and OMR after survey receipt. Costs were calculated on the basis of personnel, depreciation for capital equipment, and costs of noncapital equipment. The cost savings attributed to this method were negated by the operational efficiency of OMR. There was a statistically significant difference between arbitration rates for DDE and OMR; however, this statistical significance did not translate into practical significance. The potential benefits of DDE in terms of data accuracy did not outweigh the operational efficiency, and thereby financial savings, of OMR.
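The error-rate comparison above amounts to a test on two proportions; the abstract reports a χ² analysis, and for a 2×2 table χ² equals the square of the two-proportion z statistic sketched here. All counts are hypothetical.

```python
import math

def error_rate(arbitrated, processed):
    """Data-entry error rate: surveys requiring arbitration divided by
    total surveys processed for the method."""
    return arbitrated / processed

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for comparing two error rates, using the pooled
    proportion for the standard error (z**2 equals the 2x2 chi-square)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(10, 100, 10, 100)  # identical rates -> 0.0
```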
Carvalho, M A; Baranowski, T; Foster, E; Santos, O; Cardoso, B; Rito, A; Pereira Miguel, J
2015-12-01
Current methods for assessing children's dietary intake, such as interviewer-administered 24-h dietary recall (24-h DR), are time consuming and resource intensive. Self-administered instruments offer a low-cost diet assessment method for use with children. The present study assessed the validity of the Portuguese self-administered, computerised, 24-h DR (PAC24) against the observation of school lunch. Forty-one 7-10-year-old children from two elementary schools in Lisbon were observed during school lunch, followed by completion of the PAC24 the next day. Accuracy for reporting items was measured in terms of matches, intrusions and omissions; accuracy for reporting amounts was measured in terms of arithmetic and absolute differences for matches and amounts for omissions and intrusions; and accuracy for reporting items and amounts combined was measured in terms of total inaccuracy. The ratio of the estimated weight of food consumed to the actual weight consumed was calculated, along with the limits of agreement, using the method of Bland and Altman. Comparison of PAC24 against observations at the food level resulted in values of 67.0% for matches, 11.5% for intrusions and 21.5% for omissions. The mean for total inaccuracy was 3.44 servings. For amounts, accuracy was high for matches (-0.17 and 0.23 servings for arithmetic and absolute differences, respectively) and lower for omissions (0.61 servings) and intrusions (0.55 servings). PAC24 was found to under-estimate the weight of food on average by 32% of actual intake. PAC24 is a lower-burden procedure for both respondents and researchers and, with slight modification, comprises a promising method for assessing diet among children. © 2014 The British Dietetic Association Ltd.
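The match/intrusion/omission accounting used to score PAC24 against the lunch observations reduces to set operations over reported and observed food items. A minimal sketch; the food lists are invented for illustration:

```python
def report_accuracy(observed, reported):
    """Match, intrusion and omission rates for food-item reporting:
    matches are items both eaten and reported, intrusions are reported
    but not eaten, omissions are eaten but not reported."""
    observed, reported = set(observed), set(reported)
    matches = observed & reported
    intrusions = reported - observed
    omissions = observed - reported
    denom = len(matches) + len(intrusions) + len(omissions)
    return {"match": len(matches) / denom,
            "intrusion": len(intrusions) / denom,
            "omission": len(omissions) / denom}

# Hypothetical lunch: four items observed, three reported, two of them correctly.
rates = report_accuracy({"milk", "rice", "apple", "beans"}, {"milk", "rice", "bread"})
```

The three rates sum to 1, which is why the study's 67.0% matches, 11.5% intrusions and 21.5% omissions are directly comparable.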
NASA Astrophysics Data System (ADS)
Erener, A.
2013-04-01
Automatic extraction of urban features from high resolution satellite images is one of the main applications in remote sensing. It is useful for wide-scale applications, namely: urban planning, urban mapping, disaster management, GIS (geographic information systems) updating, and military target detection. One common approach to detecting urban features from high resolution images is to use automatic classification methods. This paper has four main objectives with respect to detecting buildings. The first objective is to compare the performance of the most notable supervised classification algorithms, including the maximum likelihood classifier (MLC) and the support vector machine (SVM). In this experiment the primary consideration is the impact of kernel configuration on the performance of the SVM. The second objective of the study is to explore the suitability of integrating additional bands, namely the first principal component (1st PC) and the intensity image, with the original data in the classification approaches. The performance evaluation of classification results is done using two different accuracy assessment methods, pixel based and object based approaches, which reflects the third aim of the study. The objective here is to demonstrate the differences in the evaluation of accuracies of classification methods. For consistency, the same set of ground truth data, produced by labeling the building boundaries in the GIS environment, is used for accuracy assessment. Lastly, the fourth aim is to experimentally evaluate variation in the accuracy of classifiers for six different real situations in order to identify the impact of spatial and spectral diversity on results. The method is applied to QuickBird images for various urban complexity levels, extending from simple to complex urban patterns. The simple surface type includes a regular urban area with low density and systematic buildings with brick rooftops.
The complex surface type involves almost all kinds of challenges, such as densely built-up areas, regions with bare soil, and small and large buildings with different rooftops, such as concrete, brick, and metal. Using the pixel based accuracy assessment it was shown that the percent building detection (PBD) and quality percent (QP) of the MLC and SVM depend on the complexity and texture variation of the region. Generally, PBD values ranged between 70% and 90% for the MLC and the SVM. No substantial improvements were observed when the SVM and MLC classifications were extended with additional variables rather than using only the four original bands. In the evaluation of object based accuracy assessment, it was demonstrated that while MLC and SVM provide higher rates of correct detection, they also provide higher rates of false alarms.
Rivera-Sandoval, Javier; Monsalve, Timisay; Cattaneo, Cristina
2018-01-01
Studying bone collections with known data has proven to be useful in assessing the reliability and accuracy of biological profile reconstruction methods used in Forensic Anthropology. Thus, it is necessary to calibrate these methods to clarify issues such as population variability and the accuracy of estimations for the elderly. This work considers observations of morphological features examined by four innominate bone age assessment methods: (1) the Suchey-Brooks pubic symphysis method, (2) the Lovejoy iliac auricular surface method, (3) the Buckberry and Chamberlain iliac auricular surface method, and (4) the Rougé-Maillart iliac auricular surface and acetabulum method. This study conducted a blind test of a sample of 277 individuals from two contemporary skeletal collections from the Universal and San Pedro cemeteries in Medellin, for which known pre-mortem data support the statistical analysis of results obtained using the four age assessment methods. Results from every method show a tendency for bias and inaccuracy to increase with age, but the Buckberry-Chamberlain and Rougé-Maillart methods are the most precise for this particular Colombian population, with Buckberry-Chamberlain's being the best for the analysis of older individuals. Copyright © 2017 Elsevier B.V. All rights reserved.
Application of biomonitoring and support vector machine in water quality assessment
Liao, Yue; Xu, Jian-yu; Wang, Zhu-wei
2012-01-01
The behavior of schools of zebrafish (Danio rerio) was studied in acute toxicity environments. Behavioral features were extracted and a method for water quality assessment using a support vector machine (SVM) was developed. The behavioral parameters of fish were recorded and analyzed for one hour in environments at the 24-h median lethal concentration (LC50) of a pollutant. The data were used to develop a method to evaluate water quality, so as to give an early indication of toxicity. Four kinds of metal ions (Cu2+, Hg2+, Cr6+, and Cd2+) were used for toxicity testing. To enhance the efficiency and accuracy of assessment, a method combining the SVM with a genetic algorithm (GA) was used. The results showed that the average prediction accuracy of the method was over 80% and the time cost was acceptable. The method gave satisfactory results for a variety of metal pollutants, demonstrating that this is an effective approach to the classification of water quality. PMID:22467374
Accuracy of the NDI Wave Speech Research System
ERIC Educational Resources Information Center
Berry, Jeffrey J.
2011-01-01
Purpose: This work provides a quantitative assessment of the positional tracking accuracy of the NDI Wave Speech Research System. Method: Three experiments were completed: (a) static rigid-body tracking across different locations in the electromagnetic field volume, (b) dynamic rigid-body tracking across different locations within the…
Konig, Alexandra; Satt, Aharon; Sorin, Alex; Hoory, Ran; Derreumaux, Alexandre; David, Renaud; Robert, Phillippe H
2018-01-01
Various types of dementia and Mild Cognitive Impairment (MCI) are manifested as irregularities in human speech and language, which have proven to be strong predictors of disease presence and progression. Therefore, automatic speech analytics provided by a mobile application may be a useful tool in providing additional indicators for the assessment and detection of early-stage dementia and MCI. A total of 165 participants (subjects with subjective cognitive impairment (SCI), MCI patients, and Alzheimer's disease (AD) and mixed dementia (MD) patients) were recorded with a mobile application while performing several short vocal cognitive tasks during a regular consultation. These tasks included verbal fluency, picture description, counting down and a free speech task. The voice recordings were processed in two steps: in the first step, vocal markers were extracted using speech signal processing techniques; in the second, the vocal markers were tested to assess their 'power' to distinguish between SCI, MCI, AD and MD. The second step included training automatic classifiers for detecting MCI and AD, based on machine learning methods, and testing the detection accuracy. The fluency and free speech tasks obtained the highest accuracy rates in classifying AD vs. MD vs. MCI vs. SCI. Using the data, we demonstrated classification accuracy as follows: SCI vs. AD = 92% accuracy; SCI vs. MD = 92% accuracy; SCI vs. MCI = 86% accuracy and MCI vs. AD = 86%. Our results indicate the potential value of vocal analytics and the use of a mobile application for accurate automatic differentiation between SCI, MCI and AD. This tool can provide the clinician with meaningful information for the assessment and monitoring of people with MCI and AD based on a non-invasive, simple and low-cost method. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
An interpolation method for stream habitat assessments
Sheehan, Kenneth R.; Welsh, Stuart A.
2015-01-01
Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate than the 2.5% interpolations for both depth and substrate, achieving accuracies of up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49-92%, whereas those based on 5% sampling attained accuracies of 57-95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
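Of the interpolators compared above, inverse distance weighting is the simplest to state: the estimate at an unsampled point is a distance-weighted average of the sampled values. A minimal sketch with hypothetical depth samples, not the study's data:

```python
from math import hypot

def idw(samples, x, y, power=2):
    """Inverse distance weighted estimate at (x, y) from (xi, yi, value) samples.
    Closer samples get larger weights; power controls how fast weights decay."""
    num = den = 0.0
    for xi, yi, v in samples:
        d = hypot(x - xi, y - yi)
        if d == 0:
            return v  # exactly at a sample point: return the measured value
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical depth samples (m) at the corners of a 10 m x 10 m cell:
depths = [(0, 0, 0.4), (10, 0, 0.6), (0, 10, 0.8), (10, 10, 1.0)]
est = idw(depths, 5, 5)  # centre of the cell: all weights equal
```

Kriging methods differ from this sketch in that the weights come from a fitted spatial covariance model rather than raw distance, which is one reason their relative accuracy varies with the spatial structure of the habitat variable.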
Innovative Use of Blackboard[R] to Assess Laboratory Skills
ERIC Educational Resources Information Center
Epping, Ronald J.
2010-01-01
A novel application of the popular web instruction architecture Blackboard Academic Suite[R] is described. The method was applied to a large number of students to assess quantitatively the accuracy of each student's laboratory skills. The method provided immediate feedback to students on their personal skill level, replaced labour-intensive…
A new metric to assess temporal coherence for video retargeting
NASA Astrophysics Data System (ADS)
Li, Ke; Yan, Bo; Yuan, Binhang
2014-10-01
In video retargeting, assessing how well temporal coherence is maintained has become a prominent challenge. In this paper, we present a new objective measurement for assessing temporal coherence after video retargeting. It is a general metric for assessing jitter artifacts in both discrete and continuous video retargeting methods, and its accuracy is verified by psycho-visual tests. As a result, our proposed assessment method has considerable practical value.
Link, Manuela; Schmid, Christina; Pleus, Stefan; Baumstark, Annette; Rittmeyer, Delia; Haug, Cornelia; Freckmann, Guido
2015-04-14
The standard ISO (International Organization for Standardization) 15197 is widely accepted for the accuracy evaluation of systems for self-monitoring of blood glucose (SMBG). Accuracy evaluation was performed for 4 SMBG systems (Accu-Chek Aviva, ContourXT, GlucoCheck XL, GlucoMen LX PLUS) with 3 test strip lots each. To investigate a possible impact of the comparison method on system accuracy data, 2 different established methods were used. The evaluation was performed in a standardized manner following test procedures described in ISO 15197:2003 (section 7.3). System accuracy was assessed by applying ISO 15197:2003 and, in addition, ISO 15197:2013 criteria (section 6.3.3). For each system, comparison measurements were performed with a glucose oxidase (YSI 2300 STAT Plus glucose analyzer) and a hexokinase (cobas c111) method. All 4 systems fulfilled the accuracy requirements of ISO 15197:2003 with the tested lots. The more stringent accuracy criteria of ISO 15197:2013 were fulfilled by 3 systems (Accu-Chek Aviva, ContourXT, GlucoMen LX PLUS) when compared with the manufacturer's comparison method and by 2 systems (Accu-Chek Aviva, ContourXT) when compared with the alternative comparison method. All systems showed lot-to-lot variability to a certain degree; 2 systems (Accu-Chek Aviva, ContourXT), however, showed only minimal differences in relative bias between the 3 evaluated lots. In this study, all 4 systems, with the evaluated test strip lots, complied with the accuracy criteria of ISO 15197:2003. Applying ISO 15197:2013 accuracy limits, differences in the accuracy of the tested systems were observed, also demonstrating that the applied comparison method/system and the lot-to-lot variability can have a decisive influence on the accuracy data obtained for an SMBG system. © 2015 Diabetes Technology Society.
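The ISO 15197:2013 system-accuracy criterion applied above can be checked per reading. The sketch below encodes the numerical limits of section 6.3 (within ±15 mg/dL of the reference below 100 mg/dL, within ±15% at or above, met by at least 95% of results); it is an illustration of the acceptance arithmetic only, not of the full evaluation protocol:

```python
def within_iso_2013(reference, measured):
    """ISO 15197:2013 accuracy criterion for one reading (mg/dL):
    +/-15 mg/dL when the reference is below 100 mg/dL, else +/-15%."""
    if reference < 100:
        return abs(measured - reference) <= 15
    return abs(measured - reference) <= 0.15 * reference

def system_passes(pairs):
    """System-level check: at least 95% of (reference, measured) pairs
    must meet the per-reading criterion."""
    ok = sum(within_iso_2013(r, m) for r, m in pairs)
    return ok / len(pairs) >= 0.95
```

Because the pass/fail decision hinges on the reference value, the choice of comparison method (glucose oxidase vs hexokinase) can shift readings across the acceptance boundary, which is exactly the effect the study observed.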
NASA Technical Reports Server (NTRS)
Neigh, Christopher S. R.; Bolton, Douglas K.; Williams, Jennifer J.; Diabate, Mouhamad
2014-01-01
Forests are the largest aboveground sink for atmospheric carbon (C), and understanding how they change through time is critical to reduce our C-cycle uncertainties. We investigated a strong decline in Normalized Difference Vegetation Index (NDVI) from 1982 to 1991 in Pacific Northwest forests, observed with the National Oceanic and Atmospheric Administration's (NOAA) series of Advanced Very High Resolution Radiometers (AVHRRs). To understand the causal factors of this decline, we evaluated an automated classification method developed for Landsat time series stacks (LTSS) to map forest change. This method included: (1) multiple disturbance index thresholds; and (2) a spectral trajectory-based image analysis with multiple confidence thresholds. We produced 48 maps and verified their accuracy with air photos, monitoring trends in burn severity data and insect aerial detection survey data. Area-based accuracy estimates for change in forest cover resulted in producer's and user's accuracies of 0.21 +/- 0.06 to 0.38 +/- 0.05 for insect disturbance, 0.23 +/- 0.07 to 1 +/- 0 for burned area and 0.74 +/- 0.03 to 0.76 +/- 0.03 for logging. We believe that accuracy was low for insect disturbance because air photo reference data were temporally sparse, hence missing some outbreaks, and because the annual anniversary time step is not dense enough to track defoliation and progressive stand mortality. Producer's and user's accuracy for burned area was low due to the temporally abrupt nature of fire and harvest and the similar response of the disturbance index and the normalized burn ratio. We conclude that the spectral trajectory approach also captures multi-year stress that could be caused by climate, acid deposition, pathogens, partial harvest, thinning, etc. Our study focused on understanding the transferability of previously successful methods to new ecosystems and found that this automated method does not perform with the same accuracy in Pacific Northwest forests.
Using a robust accuracy assessment, we demonstrate the difficulty of transferring change attribution methods to other ecosystems, which has implications for the development of automated detection/attribution approaches. Widespread disturbance was found within AVHRR-negative anomalies, but identifying causal factors in LTSS with adequate mapping accuracy for fire and insects proved to be elusive. Our results provide a background framework for future studies to improve methods for the accuracy assessment of automated LTSS classifications.
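Producer's and user's accuracies of the kind reported above come straight from a confusion matrix. A minimal sketch with rows as reference classes and columns as mapped classes; the counts are hypothetical:

```python
def producers_users_accuracy(confusion):
    """Per-class producer's accuracy (correct / reference total, the omission
    side) and user's accuracy (correct / mapped total, the commission side)
    from a square confusion matrix with rows = reference, columns = mapped."""
    n = len(confusion)
    row_sums = [sum(row) for row in confusion]                             # reference totals
    col_sums = [sum(confusion[r][c] for r in range(n)) for c in range(n)]  # mapped totals
    producers = [confusion[i][i] / row_sums[i] for i in range(n)]
    users = [confusion[i][i] / col_sums[i] for i in range(n)]
    return producers, users

# Hypothetical two-class (disturbed / undisturbed) assessment:
prod, use = producers_users_accuracy([[40, 10], [5, 45]])
```

Reporting both accuracies matters here because, as the study's burned-area results show, a class can have near-perfect detection of reference pixels while still suffering commission error, or vice versa.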
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-14
... and related methodology. Emphasis will be placed on dataset accuracy and time-dependent biases. Pathways to overcome accuracy and bias issues will be an important focus. Participants will consider ... • Guidance for improving these methods. • Recommendations for rectifying any known time-dependent biases ...
Saini, V.; Riekerink, R. G. M. Olde; McClure, J. T.; Barkema, H. W.
2011-01-01
Determining the accuracy and precision of a measuring instrument is pertinent in antimicrobial susceptibility testing. This study was conducted to predict the diagnostic accuracy of the Sensititre MIC mastitis panel (Sensititre) and agar disk diffusion (ADD) method with reference to the manual broth microdilution test method for antimicrobial resistance profiling of Escherichia coli (n = 156), Staphylococcus aureus (n = 154), streptococcal (n = 116), and enterococcal (n = 31) bovine clinical mastitis isolates. The activities of ampicillin, ceftiofur, cephalothin, erythromycin, oxacillin, penicillin, the penicillin-novobiocin combination, pirlimycin, and tetracycline were tested against the isolates. Diagnostic accuracy was determined by estimating the area under the receiver operating characteristic curve; intertest essential and categorical agreements were determined as well. Sensititre and the ADD method demonstrated moderate to highly accurate (71 to 99%) and moderate to perfect (71 to 100%) predictive accuracies for 74 and 76% of the isolate-antimicrobial MIC combinations, respectively. However, the diagnostic accuracy was low for S. aureus-ceftiofur/oxacillin combinations and other streptococcus-ampicillin combinations by either testing method. Essential agreement between Sensititre automatic MIC readings and MIC readings obtained by the broth microdilution test method was 87%. Essential agreement between Sensititre automatic and manual MIC reading methods was 97%. Furthermore, the ADD test method and Sensititre MIC method exhibited 92 and 91% categorical agreement (sensitive, intermediate, resistant) of results, respectively, compared with the reference method. However, both methods demonstrated lower agreement for E. coli-ampicillin/cephalothin combinations than for Gram-positive isolates. 
In conclusion, the Sensititre and ADD methods had moderate to high diagnostic accuracy and very good essential and categorical agreement for most udder pathogen-antimicrobial combinations and can be readily employed in veterinary diagnostic laboratories. PMID:21270215
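The area under the ROC curve used above to quantify diagnostic accuracy has a simple rank-based estimator (the Mann-Whitney form). A minimal sketch; the scores below are invented, not the study's data:

```python
def roc_auc(pos_scores, neg_scores):
    """Rank-based AUC estimate: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative case
    (ties count as half)."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical scores for resistant (positive) vs susceptible (negative) isolates:
auc = roc_auc([0.9, 0.8, 0.7], [0.1, 0.2, 0.7])
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why the study's "moderate to highly accurate (71 to 99%)" language maps directly onto this scale.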
A time series modeling approach in risk appraisal of violent and sexual recidivism.
Bani-Yaghoub, Majid; Fedoroff, J Paul; Curry, Susan; Amundsen, David E
2010-10-01
For over half a century, various clinical and actuarial methods have been employed to assess the likelihood of violent recidivism. Yet there is a need for new methods that can improve the accuracy of recidivism predictions. This study proposes a new time series modeling approach that generates high levels of predictive accuracy over short and long periods of time. The proposed approach outperformed two widely used actuarial instruments (i.e., the Violence Risk Appraisal Guide and the Sex Offender Risk Appraisal Guide). Furthermore, analysis of temporal risk variations based on specific time series models can add valuable information to the risk assessment and management of violent offenders.
Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A
2017-08-01
For nutrition practitioners and researchers, assessing dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods and their benefits and challenges, followed by details of an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regards to age of user, degree of error and cost.
Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young
2014-03-01
This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique, by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an intraoral scanner. Six linear measurements were evaluated in terms of the x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using the mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96 when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of the intraoral scanner and subtractive RP technology were acceptable. Because of recent improvements in block materials and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.
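The Bland-Altman analysis used above (bias as the mean of the paired differences, 95% limits of agreement as bias ± 1.96 times their standard deviation) can be sketched directly; the paired measurements below are invented for illustration:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between paired measurements from
    two methods (e.g. plaster-model vs PUT-model linear measurements)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)  # sample SD of the differences
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired linear measurements (mm):
bias, (lo, hi) = bland_altman([10.0, 11.0, 12.0, 13.0], [10.1, 10.9, 12.2, 12.8])
```

The limits of agreement, not the bias alone, are what justify a "good agreement" claim: a near-zero mean difference can coexist with wide, clinically unacceptable limits.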
The Accuracy of Anthropometric Equations to Assess Body Fat in Adults with Down Syndrome
ERIC Educational Resources Information Center
Rossato, Mateus; Dellagrana, Rodolfo André; da Costa, Rafael Martins; de Souza Bezerra, Ewertton; dos Santos, João Otacílio Libardoni; Rech, Cassiano Ricardo
2018-01-01
Background: The aim of this study was to verify the accuracy of anthropometric equations to estimate the body density (BD) of adults with Down syndrome (DS), and propose new regression equations. Materials and methods: Twenty-one males (30.5 ± 9.4 years) and 17 females (27.3 ± 7.7 years) with DS participated in this study. The reference method for…
A scoping review of malaria forecasting: past work and future directions
Zinszer, Kate; Verma, Aman D; Charland, Katia; Brewer, Timothy F; Brownstein, John S; Sun, Zhuoyu; Buckeridge, David L
2012-01-01
Objectives: There is a growing body of literature on malaria forecasting methods, and the objective of our review is to identify and assess methods, including predictors, used to forecast malaria. Design: Scoping review. Two independent reviewers searched information sources, assessed studies for inclusion and extracted data from each study. Information sources: Search strategies were developed and the following databases were searched: CAB Abstracts, EMBASE, Global Health, MEDLINE, ProQuest Dissertations & Theses and Web of Science. Key journals and websites were also manually searched. Eligibility criteria for included studies: We included studies that forecasted incidence, prevalence or epidemics of malaria over time. A description of the forecasting model and an assessment of the forecast accuracy of the model were requirements for inclusion. Studies were restricted to human populations and to autochthonous transmission settings. Results: We identified 29 different studies that met our inclusion criteria for this review. The forecasting approaches included statistical modelling, mathematical modelling and machine learning methods. Climate-related predictors were used consistently in forecasting models, with the most common predictors being rainfall, relative humidity, temperature and the normalised difference vegetation index. Model evaluation was typically based on a reserved portion of data and accuracy was measured in a variety of ways including mean-squared error and correlation coefficients. We could not compare the forecast accuracy of models from the different studies as the evaluation measures differed across the studies.
Conclusions: Applying different forecasting methods to the same data, exploring the predictive ability of non-environmental variables (including transmission-reducing interventions) and using common forecast accuracy measures will allow malaria researchers to compare and improve models and methods, which should improve the quality of malaria forecasting. PMID:23180505
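Two of the forecast-accuracy measures the review found in use, mean squared error and the correlation coefficient, are easy to state in their common forms. A minimal sketch with toy series, not study data:

```python
from statistics import mean

def mse(observed, predicted):
    """Mean squared error of a forecast against the observed series."""
    return mean((o - p) ** 2 for o, p in zip(observed, predicted))

def pearson_r(x, y):
    """Pearson correlation coefficient between observed and forecast series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5
```

As the review argues, reporting a common measure on a reserved test set is what would make forecasting models comparable across studies; the two measures are not interchangeable, since a forecast can correlate perfectly with observations while being badly biased.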
Boers, A M; Marquering, H A; Jochem, J J; Besselink, N J; Berkhemer, O A; van der Lugt, A; Beenen, L F; Majoie, C B
2013-08-01
Cerebral infarct volume (CIV) as observed on follow-up CT is an important radiologic outcome measure of the effectiveness of treatment of patients with acute ischemic stroke. However, manual measurement of CIV is time-consuming and operator-dependent. The purpose of this study was to develop and evaluate a robust automated measurement of the CIV. The CIV in early follow-up CT images of 34 consecutive patients with acute ischemic stroke was segmented with an automated intensity-based region-growing algorithm, which includes partial volume effect correction near the skull, midline determination, and ventricle and hemorrhage exclusion. Two observers manually delineated the CIV. Interobserver variability of the manual assessments and the accuracy of the automated method were evaluated by using the Pearson correlation, Bland-Altman analysis, and Dice coefficients. Accuracy was defined as the correlation with the manual assessment as a reference standard. The Pearson correlation for the automated method compared with the reference standard was similar to the interobserver correlation of the manual assessments (R = 0.98). The accuracy of the automated method was excellent, with a mean difference of 0.5 mL and limits of agreement of -38.0 to 39.1 mL, which were more consistent than the interobserver variability of the 2 observers (-40.9 to 44.1 mL). However, the Dice coefficients were higher for the manual delineation. The automated method showed a strong correlation and accuracy with respect to the manual reference measurement. This approach has the potential to become the standard for assessing infarct volume as a secondary outcome measure for evaluating the effectiveness of treatment.
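The Dice coefficient used above measures voxel-level overlap between two segmentations, which is why it can disagree with volume-based measures: two delineations can have identical volumes but poor spatial overlap. A minimal sketch over sets of voxel indices (hypothetical data):

```python
def dice(seg_a, seg_b):
    """Dice overlap between two binary segmentations given as sets of
    voxel indices: 2*|A intersect B| / (|A| + |B|)."""
    a, b = set(seg_a), set(seg_b)
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))
```

This distinction explains the study's result: the automated method matched manual volumes well (Bland-Altman) while its Dice coefficients were lower, meaning its boundaries sat in slightly different places.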
Effects of a rater training on rating accuracy in a physical examination skills assessment
Weitz, Gunther; Vinzentius, Christian; Twesten, Christoph; Lehnert, Hendrik; Bonnemeier, Hendrik; König, Inke R.
2014-01-01
Background: The accuracy and reproducibility of medical skills assessment is generally low. Rater training has little or no effect. Our knowledge in this field, however, relies on studies involving video ratings of overall clinical performances. We hypothesised that a rater training focussing on the frame of reference could improve accuracy in grading the curricular assessment of a highly standardised physical head-to-toe examination. Methods: Twenty-one raters assessed the performance of 242 third-year medical students. Eleven raters had been randomly assigned to undergo a brief frame-of-reference training a few days before the assessment. 218 encounters were successfully recorded on video and re-assessed independently by three additional observers. Accuracy was defined as the concordance between the raters' grade and the median of the observers' grade. After the assessment, both students and raters filled in a questionnaire about their views on the assessment. Results: Rater training did not have a measurable influence on accuracy. However, trained raters rated significantly more stringently than untrained raters, and their overall stringency was closer to the stringency of the observers. The questionnaire indicated a higher awareness of the halo effect in the trained raters group. Although the self-assessment of the students mirrored the assessment of the raters in both groups, the students assessed by trained raters felt more discontent with their grade. Conclusions: While training had some marginal effects, it failed to have an impact on the individual accuracy. These results in real-life encounters are consistent with previous studies on rater training using video assessments of clinical performances. The high degree of standardisation in this study was not suitable to harmonize the trained raters’ grading. The data support the notion that the process of appraising medical performance is highly individual. 
A frame-of-reference training as applied here does not effectively adjust physicians' judgement of medical students in real-life assessments. PMID:25489341
Subjective global assessment of nutritional status in children.
Mahdavi, Aida Malek; Ostadrahimi, Alireza; Safaiyan, Abdolrasool
2010-10-01
This study aimed to compare subjective and objective nutritional assessments and to analyse the performance of the subjective global assessment (SGA) of nutritional status in diagnosing undernutrition in paediatric patients. One hundred and forty children (aged 2-12 years) hospitalized consecutively in Tabriz Paediatric Hospital from June 2008 to August 2008 underwent subjective assessment using the SGA questionnaire and objective assessment, including anthropometric and biochemical measurements. Agreement between the two assessment methods was analysed using the kappa (κ) statistic. Statistical indicators (sensitivity, specificity, predictive values, error rates, accuracy, powers, likelihood ratios and odds ratio) comparing SGA with the objective assessment method were determined. The overall prevalence of undernutrition according to the SGA (70.7%) was higher than that by objective assessment of nutritional status (48.5%). Agreement between the two evaluation methods was only fair to moderate (κ = 0.336, P < 0.001). The sensitivity, specificity, and positive and negative predictive values of the SGA method for screening undernutrition in this population were 88.235%, 45.833%, 60.606% and 80.487%, respectively. The accuracy, and positive and negative power, of the SGA method were 66.428%, 56.074% and 41.25%, respectively. The positive likelihood ratio, negative likelihood ratio and odds ratio of the SGA method were 1.628, 0.256 and 6.359, respectively. Our findings indicated that in assessing the nutritional status of children, there is not a good level of agreement between SGA and objective nutritional assessment. In addition, SGA is a highly sensitive tool for assessing nutritional status and could identify children at risk of developing undernutrition. © 2009 Blackwell Publishing Ltd.
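The kappa statistic reported above corrects the observed agreement between two methods for the agreement expected by chance. For two binary assessments it can be computed from the 2x2 cross-classification; the counts below are hypothetical, not the study's:

```python
def cohens_kappa(both_pos, a_only, b_only, both_neg):
    """Cohen's kappa from a 2x2 agreement table between two methods:
    both_pos and both_neg are agreements, a_only and b_only disagreements
    (method A positive only, method B positive only)."""
    n = both_pos + a_only + b_only + both_neg
    p_obs = (both_pos + both_neg) / n
    # chance agreement from the marginal positive and negative rates
    p_chance = (((both_pos + a_only) / n) * ((both_pos + b_only) / n)
                + ((b_only + both_neg) / n) * ((a_only + both_neg) / n))
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical cross-classification of SGA vs objective assessment:
kappa = cohens_kappa(40, 10, 10, 40)
```

Chance correction is what allows a tool like the SGA to show high raw sensitivity yet only fair-to-moderate kappa, as in the study, when the two methods label very different proportions of children as undernourished.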
Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck
2014-05-01
Modeling complex vibroacoustic systems including poroelastic materials using finite element based methods can be unfeasible for practical applications. For this reason, analytical approaches such as the transfer matrix method are often preferred to obtain a quick estimation of the vibroacoustic parameters. However, the strong assumptions inherent in the transfer matrix method lead to a lack of accuracy in the description of the geometry of the system. As a result, the transfer matrix method is inherently limited to the high frequency range. Nowadays, hybrid substructuring procedures have become quite popular. Indeed, different modeling techniques are typically sought to describe complex vibroacoustic systems over the widest possible frequency range. As a result, the flexibility and accuracy of the finite element method and the efficiency of the transfer matrix method could be coupled in a hybrid technique to obtain a reduction of the computational burden. In this work, a hybrid methodology is proposed. The performance of the method in predicting the vibroacoustic indicators of flat structures with attached homogeneous acoustic treatments is assessed. The results prove that, under certain conditions, the hybrid model allows for a reduction of the computational effort while preserving enough accuracy with respect to the full finite element solution.
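As a hedged illustration of the transfer matrix idea that the hybrid method builds on, the sketch below computes the normal-incidence transmission loss of a single homogeneous fluid layer. Poroelastic layers, as treated in the paper, require larger matrices and are not modeled here:

```python
import cmath
import math

def fluid_layer_matrix(rho, c, d, f):
    """2x2 transfer matrix of a fluid layer: relates (pressure, normal velocity)
    on the two faces of a layer with density rho, sound speed c, thickness d."""
    k = 2 * math.pi * f / c          # wavenumber at frequency f
    Z = rho * c                      # characteristic impedance
    return [[cmath.cos(k * d), 1j * Z * cmath.sin(k * d)],
            [1j * cmath.sin(k * d) / Z, cmath.cos(k * d)]]

def transmission_loss(T, rho0, c0):
    """Normal-incidence TL (dB) of the layer between two semi-infinite
    fluids of density rho0 and sound speed c0."""
    Z0 = rho0 * c0
    tau_inv = (T[0][0] + T[0][1] / Z0 + T[1][0] * Z0 + T[1][1]) / 2
    return 20 * math.log10(abs(tau_inv))

# sanity check: an "air layer" in air is acoustically transparent (TL ~ 0 dB)
T_air = fluid_layer_matrix(1.21, 343.0, 0.05, 1000.0)
# a denser, stiffer fluid layer reflects most of the energy (TL > 0 dB)
T_dense = fluid_layer_matrix(1000.0, 1500.0, 0.05, 1000.0)
```

Multiple layers chain by matrix multiplication, which is what makes the method attractive for flat multilayer treatments.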
A call for benchmarking transposable element annotation methods.
Hoen, Douglas R; Hickey, Glenn; Bourque, Guillaume; Casacuberta, Josep; Cordaux, Richard; Feschotte, Cédric; Fiston-Lavier, Anna-Sophie; Hua-Van, Aurélie; Hubley, Robert; Kapusta, Aurélie; Lerat, Emmanuelle; Maumus, Florian; Pollock, David D; Quesneville, Hadi; Smit, Arian; Wheeler, Travis J; Bureau, Thomas E; Blanchette, Mathieu
2015-01-01
DNA derived from transposable elements (TEs) constitutes large parts of the genomes of complex eukaryotes, with major impacts not only on genomic research but also on how organisms evolve and function. Although a variety of methods and tools have been developed to detect and annotate TEs, there are as yet no standard benchmarks-that is, no standard way to measure or compare their accuracy. This lack of accuracy assessment calls into question conclusions from a wide range of research that depends explicitly or implicitly on TE annotation. In the absence of standard benchmarks, toolmakers are impeded in improving their tools, annotators cannot properly assess which tools might best suit their needs, and downstream researchers cannot judge how accuracy limitations might impact their studies. We therefore propose that the TE research community create and adopt standard TE annotation benchmarks, and we call for other researchers to join the authors in making this long-overdue effort a success.
Assessment of Automated Measurement and Verification (M&V) Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, Jessica; Touzani, Samir; Custodio, Claudine
This report documents the application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings.
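Two goodness-of-fit metrics commonly used to assess baseline energy models in M&V are CV(RMSE) and NMBE. The sketch below is a generic illustration of these metrics under the usual definitions, not the specific methodology of the report:

```python
import math

def cv_rmse(actual, predicted, n_params=1):
    """Coefficient of variation of the RMSE (%), a standard
    baseline-model goodness-of-fit metric."""
    n = len(actual)
    mean = sum(actual) / n
    sse = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 100 * math.sqrt(sse / (n - n_params)) / mean

def nmbe(actual, predicted, n_params=1):
    """Normalized mean bias error (%): systematic over- or under-prediction."""
    n = len(actual)
    mean = sum(actual) / n
    return 100 * sum(a - p for a, p in zip(actual, predicted)) / ((n - n_params) * mean)

# hypothetical monthly whole-building energy use (kWh) vs baseline model
actual = [100.0, 110.0, 95.0, 105.0]
predicted = [98.0, 112.0, 96.0, 104.0]
```

Savings estimates are then the gap between the baseline model's prediction and metered post-retrofit use, so these two metrics bound how much of a reported "saving" could be model error.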
Christiaens, Véronique; De Bruyn, Hugo; Thevissen, Eric; Koole, Sebastiaan; Dierens, Melissa; Cosyn, Jan
2018-01-01
The accuracy of analogue and especially digital intra-oral radiography in assessing interdental bone level needs further documentation. The aim of this study was to compare clinical and radiographic bone level assessment to intra-surgical bone level registration (1) and to identify the clinical variables rendering interdental bone level assessment inaccurate (2). The study sample included 49 interdental sites in 17 periodontitis patients. Evaluation methods included vertical relative probing attachment level (RAL-V), analogue and digital intra-oral radiography and bone sounding without and with flap elevation. The latter was considered the true bone level. Five examiners evaluated all radiographs. Significant underestimation of the true bone level was observed for all evaluation methods, amounting on average to 2.7 mm for analogue radiography, 2.5 mm for digital radiography, 1.8 mm for RAL-V and 0.6 mm for bone sounding without flap elevation (p < 0.001). Radiographic underestimation of the true bone level was higher in the (pre)molar region (p ≤ 0.047) and increased with defect depth (p < 0.001). Variation between clinicians was substantial (range analogue radiography 2.2-3.2 mm; range digital radiography 2.1-3.0 mm). All evaluation methods significantly underestimated the true bone level. Bone sounding was most accurate, whereas intra-oral radiographs were least accurate. Deep periodontal defects in the (pre)molar region were most underrated by intra-oral radiography. Bone sounding had the highest accuracy in assessing interdental bone level.
Heart Rate Assessment Immediately after Birth.
Phillipos, Emily; Solevåg, Anne Lee; Pichler, Gerhard; Aziz, Khalid; van Os, Sylvia; O'Reilly, Megan; Cheung, Po-Yin; Schmölzer, Georg M
2016-01-01
Heart rate assessment immediately after birth in newborn infants is critical to the correct guidance of resuscitation efforts. There are disagreements as to the best method to measure heart rate. The aim of this study was to assess different methods of heart rate assessment in newborn infants at birth to determine the fastest and most accurate method. PubMed, EMBASE and Google Scholar were systematically searched using the following terms: 'infant', 'heart rate', 'monitoring', 'delivery room', 'resuscitation', 'stethoscope', 'auscultation', 'palpation', 'pulse oximetry', 'electrocardiogram', 'Doppler ultrasound', 'photoplethysmography' and 'wearable sensors'. Eighteen studies were identified that described various methods of heart rate assessment in newborn infants immediately after birth. Studies examining auscultation, palpation, pulse oximetry, electrocardiography and Doppler ultrasound as ways to measure heart rate were included. Heart rate measurements by pulse oximetry are superior to auscultation and palpation, but there is contradictory evidence about its accuracy depending on whether the sensor is connected to the infant or the oximeter first. Several studies indicate that electrocardiogram provides a reliable heart rate faster than pulse oximetry. Doppler ultrasound shows potential for clinical use, however future evidence is needed to support this conclusion. Heart rate assessment is important and there are many measurement methods. The accuracy of routinely applied methods varies, with palpation and auscultation being the least accurate and electrocardiogram being the most accurate. More research is needed on Doppler ultrasound before its clinical use. © 2015 S. Karger AG, Basel.
Fan, Yong; Du, Jin Peng; Liu, Ji Jun; Zhang, Jia Nan; Qiao, Huan Huan; Liu, Shi Chang; Hao, Ding Jun
2018-06-01
A miniature spine-mounted robot has recently been introduced to further improve the accuracy of pedicle screw placement in spine surgery. However, the differences in accuracy between the robotic-assisted (RA) technique and the free-hand with fluoroscopy-guided (FH) method for pedicle screw placement are controversial. A meta-analysis was conducted to focus on this problem. Several randomized controlled trials (RCTs) and cohort studies involving RA and FH and published before January 2017 were searched for using the Cochrane Library, Ovid, Web of Science, PubMed, and EMBASE databases. A total of 55 papers were selected. After the full-text assessment, 45 clinical trials were excluded. The final meta-analysis included 10 articles. The accuracy of pedicle screw placement within the RA group was significantly greater than the accuracy within the FH group ("perfect accuracy": odds ratio, 95% confidence interval 1.38-2.07, P < .01; "clinically acceptable": odds ratio, 95% confidence interval 1.17-2.08, P < .01). There are significant differences in accuracy between RA surgery and FH surgery. It was demonstrated that the RA technique is superior to the conventional method in terms of the accuracy of pedicle screw placement.
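A fixed-effect pooled odds ratio of the kind reported above can be sketched by inverse-variance weighting of per-study log odds ratios. The screw-placement counts below are hypothetical illustration data, not the trials from this meta-analysis:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for one study's 2x2 table (a, b = accurate/misplaced
    in group 1; c, d in group 2), via the log-OR normal approximation."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or), math.exp(log_or - z * se), math.exp(log_or + z * se)

def pooled_or(tables, z=1.96):
    """Fixed-effect inverse-variance pooling of log odds ratios."""
    num = den = 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        w = 1 / (1 / a + 1 / b + 1 / c + 1 / d)   # weight = 1 / var(log OR)
        num += w * log_or
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    return math.exp(pooled), math.exp(pooled - z * se), math.exp(pooled + z * se)

# hypothetical accurate/misplaced screw counts for two studies (RA vs FH)
studies = [(90, 10, 80, 20), (180, 20, 160, 40)]
or_pooled, ci_lo, ci_hi = pooled_or(studies)
```

A pooled OR above 1 with a CI excluding 1 is the pattern the abstract reports in favour of the RA group.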
Baumstark, Annette; Jendrike, Nina; Pleus, Stefan; Haug, Cornelia; Freckmann, Guido
2017-10-01
Self-monitoring of blood glucose (BG) is an essential part of diabetes therapy. Accurate and reliable results from BG monitoring systems (BGMS) are important especially when they are used to calculate insulin doses. This study aimed at assessing system accuracy of BGMS and possibly related insulin dosing errors. System accuracy of six different BGMS (Accu-Chek® Aviva Nano, Accu-Chek Mobile, Accu-Chek Performa Nano, CONTOUR® NEXT LINK 2.4, FreeStyle Lite, OneTouch® Verio® IQ) was assessed in comparison to a glucose oxidase and a hexokinase method. Study procedures and analysis were based on ISO 15197:2013/EN ISO 15197:2015, clause 6.3. In addition, insulin dosing error was modeled. In the comparison against the glucose oxidase method, five out of six BGMS fulfilled ISO 15197:2013 accuracy criteria. Up to 14.3%/4.3%/0.3% of modeled doses resulted in errors exceeding ±0.5/±1.0/±1.5 U and missing the modeled target by 20 mg/dL/40 mg/dL/60 mg/dL, respectively. Compared against the hexokinase method, five out of six BGMS fulfilled ISO 15197:2013 accuracy criteria. Up to 25.0%/10.5%/3.2% of modeled doses resulted in errors exceeding ±0.5/±1.0/±1.5 U, respectively. Differences in system accuracy were found, even among BGMS that fulfilled the minimum system accuracy criteria of ISO 15197:2013. In the error model, considerable insulin dosing errors resulted for some of the investigated systems. Diabetes patients on insulin therapy should be able to rely on their BGMS' readings; therefore, they require highly accurate BGMS, in particular, when making therapeutic decisions.
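The link between meter error and dosing error can be illustrated with a simple correction-bolus model. The target and insulin sensitivity factor (ISF) below are hypothetical illustration values, not parameters of the study's dosing model:

```python
def correction_dose(bg, target=120.0, isf=40.0):
    """Correction bolus in units: (BG - target) / ISF, floored at zero.
    target (mg/dL) and isf (mg/dL per unit) are illustrative assumptions."""
    return max(0.0, (bg - target) / isf)

def dosing_error(true_bg, measured_bg, **kw):
    """Extra (or missing) insulin caused by a meter reading error."""
    return correction_dose(measured_bg, **kw) - correction_dose(true_bg, **kw)

# with ISF = 40 mg/dL per unit, a +20 mg/dL meter error shifts the dose by 0.5 U,
# matching the error bands (±0.5 U per 20 mg/dL) used in the abstract
err = dosing_error(true_bg=200.0, measured_bg=220.0)
```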
Energy-Based Metrics for Arthroscopic Skills Assessment.
Poursartip, Behnaz; LeBel, Marie-Eve; McCracken, Laura C; Escoto, Abelardo; Patel, Rajni V; Naish, Michael D; Trejos, Ana Luisa
2017-08-05
Minimally invasive skills assessment methods are essential in developing efficient surgical simulators and implementing consistent skills evaluation. Although numerous methods have been investigated in the literature, there is still a need to further improve the accuracy of surgical skills assessment. Energy expenditure can be an indication of motor skills proficiency. The goals of this study are to develop objective metrics based on energy expenditure, normalize these metrics, and investigate classifying trainees using these metrics. To this end, different forms of energy consisting of mechanical energy and work were considered and their values were divided by the related value of an ideal performance to develop normalized metrics. These metrics were used as inputs for various machine learning algorithms including support vector machines (SVM) and neural networks (NNs) for classification. The accuracy of the combination of the normalized energy-based metrics with these classifiers was evaluated through a leave-one-subject-out cross-validation. The proposed method was validated using 26 subjects at two experience levels (novices and experts) in three arthroscopic tasks. The results showed that there are statistically significant differences between novices and experts for almost all of the normalized energy-based metrics. The accuracy of classification using SVM and NN methods was between 70% and 95% for the various tasks. The results show that the normalized energy-based metrics and their combination with SVM and NN classifiers are capable of providing accurate classification of trainees. The assessment method proposed in this study can enhance surgical training by providing appropriate feedback to trainees about their level of expertise and can be used in the evaluation of proficiency.
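A minimal sketch of a normalized energy-based metric, assuming a sampled 2-D tool-tip path and a unit time step. The paper's actual features (mechanical energy and work from instrumented arthroscopic tools) are richer than this toy version:

```python
def path_work(points, mass=1.0):
    """Proxy for mechanical effort: summed kinetic energy of the segments of a
    sampled tool-tip path, with a unit time step assumed. Illustrative only."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        v2 = (x1 - x0) ** 2 + (y1 - y0) ** 2   # squared segment "velocity"
        total += 0.5 * mass * v2
    return total

def normalized_energy(points, ideal_points):
    """Divide by an ideal performance, so 1.0 means 'as economical as ideal'."""
    return path_work(points) / path_work(ideal_points)

ideal = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]                 # straight, smooth
novice = [(0.0, 0.0), (1.0, 1.0), (1.0, -1.0), (2.0, 0.0)]   # wandering path
```

Vectors of such normalized metrics are what would then be fed to an SVM or neural network classifier, as in the study.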
Using Unmanned Helicopters to Assess Vegetation Cover in Sagebrush Steppe Ecosystems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert P. Breckenridge; Maxine Dakins; Stephen Bunting
2012-07-01
Evaluating vegetation cover is an important factor in understanding the sustainability of many ecosystems. Methods that have sufficient accuracy and improved cost efficiency could dramatically alter how biotic resources are monitored on both public and private lands. This will be of interest to land managers because there are rarely enough resource specialists or funds available for comprehensive ground evaluations. In this project, unmanned helicopters were used to collect still-frame imagery to assess vegetation cover during May, June, and July in 2005. The images were used to estimate percent cover for six vegetative cover classes (shrub, dead shrub, grass, forbs, litter, and bare ground). The field plots were located on the INL site west of Idaho Falls, Idaho. Ocular assessments of digital imagery were performed using a software program called SamplePoint, and the results were compared against field measurements collected using a point-frame method to assess accuracy. The helicopter imagery evaluation showed a high degree of agreement with field cover class values for litter, bare ground, and grass, and reasonable agreement for dead shrubs. Shrub cover was often overestimated and forbs were generally underestimated. The helicopter method took 45% less time than the field method to set plots and collect and analyze data. This study demonstrates that UAV technology provides a viable method for monitoring vegetative cover on rangelands in less time and with lower costs. Tradeoffs between cost and accuracy are critical management decisions that are important when managing vegetative conditions across vast sagebrush ecosystems throughout the Intermountain West.
A Dynamic Health Assessment Approach for Shearer Based on Artificial Immune Algorithm
Wang, Zhongbin; Xu, Xihua; Si, Lei; Ji, Rui; Liu, Xinhua; Tan, Chao
2016-01-01
In order to accurately identify the dynamic health of a shearer, reduce operating faults and production accidents, and further improve coal production efficiency, a dynamic health assessment approach for shearers based on an artificial immune algorithm was proposed. The key technologies, such as the system framework, the selection of indicators for shearer dynamic health assessment, and the health assessment model, were provided, and the flowchart of the proposed approach was designed. A simulation example, with an accuracy of 96%, based on data collected from an industrial production scene was provided. Furthermore, the comparison demonstrated that the proposed method exhibited higher classification accuracy than classifiers based on back propagation-neural network (BP-NN) and support vector machine (SVM) methods. Finally, the proposed approach was applied to an engineering problem of shearer dynamic health assessment. The industrial application results showed that the research achievements can be combined with the shearer automation control system in a fully mechanized coal face. The simulation and application results indicated that the proposed method is feasible and outperforms the others. PMID:27123002
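One simple flavor of artificial immune algorithm is negative selection: random detectors are kept only if they do not match any "self" (healthy) data, and anything a surviving detector later matches is flagged as anomalous. The 1-D sketch below, with hypothetical vibration values, only illustrates the principle; the paper's assessment model is considerably more elaborate:

```python
import random

def train_detectors(self_samples, n_detectors, radius, lo=0.0, hi=100.0, seed=1):
    """Negative selection: keep random candidate detectors that do NOT fall
    within `radius` of any 'self' (healthy) sample."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.uniform(lo, hi)
        if all(abs(cand - s) > radius for s in self_samples):
            detectors.append(cand)
    return detectors

def is_anomalous(x, detectors, radius):
    """A sample matched by any detector is flagged as non-self (unhealthy)."""
    return any(abs(x - d) <= radius for d in detectors)

# hypothetical healthy-state vibration amplitudes from a shearer sensor
healthy_vibration = [48.0, 50.0, 51.5, 49.2, 50.8]
detectors = train_detectors(healthy_vibration, 500, radius=3.0)
```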
Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan
2014-01-01
Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
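Bland-Altman agreement, as used above, reduces to the mean difference (bias) and its 95% limits of agreement. The paired measurements below are hypothetical, for illustration only:

```python
import math

def bland_altman(m1, m2):
    """Bias and 95% limits of agreement for paired measurements
    from two methods (bias ± 1.96 SD of the differences)."""
    diffs = [a - b for a, b in zip(m1, m2)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical linear measurements (mm) on plaster vs PUT models
plaster = [35.2, 28.9, 41.0, 33.5, 30.1]
put = [35.0, 29.2, 40.7, 33.6, 29.8]
bias, loa_low, loa_high = bland_altman(plaster, put)
```

"Good agreement" in the Bland-Altman sense means the limits of agreement are narrow enough to be clinically irrelevant, which is a judgment call rather than a statistical test.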
Making and Measuring a Model of a Salt Marsh
ERIC Educational Resources Information Center
Fogleman, Tara; Curran, Mary Carla
2007-01-01
Students are often confused by the difference between the terms "accuracy" and "precision." In the following activities, students explore the definitions of accuracy and precision while learning about salt marsh ecology and the methods used by scientists to assess salt marsh health. The activities also address the concept that the ocean supports a…
Accuracy assessment of high resolution satellite imagery orientation by leave-one-out method
NASA Astrophysics Data System (ADS)
Brovelli, Maria Antonia; Crespi, Mattia; Fratarcangeli, Francesca; Giannone, Francesca; Realini, Eugenio
Interest in high-resolution satellite imagery (HRSI) is spreading in several application fields, at both scientific and commercial levels. Fundamental and critical goals for the geometric use of this kind of imagery are their orientation and orthorectification, processes able to georeference the imagery and correct the geometric deformations they undergo during acquisition. In order to exploit the actual potentialities of orthorectified imagery in Geomatics applications, the definition of a methodology to assess the spatial accuracy achievable from oriented imagery is a crucial topic. In this paper we want to propose a new method for accuracy assessment based on the Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in different fields such as machine learning, bioinformatics and generally in any other field requiring an evaluation of the performance of a learning algorithm (e.g. in geostatistics), but never applied to HRSI orientation accuracy assessment. The proposed method exhibits interesting features which are able to overcome the most remarkable drawbacks involved by the commonly used method (Hold-Out Validation — HOV), based on the partitioning of the known ground points in two sets: the first is used in the orientation-orthorectification model (GCPs — Ground Control Points) and the second is used to validate the model itself (CPs — Check Points). In fact the HOV is generally not reliable and it is not applicable when a low number of ground points is available. To test the proposed method we implemented a new routine that performs the LOOCV in the software SISAR, developed by the Geodesy and Geomatics Team at the Sapienza University of Rome to perform the rigorous orientation of HRSI; this routine was tested on some EROS-A and QuickBird images. Moreover, these images were also oriented using the world recognized commercial software OrthoEngine v. 
10 (included in the Geomatica suite by PCI), manually performing the LOOCV since only the HOV is implemented. The software comparison confirmed the overall correctness and good performance of the SISAR model, and the results showed the good features of the LOOCV method.
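The LOOCV idea can be sketched with a deliberately simple stand-in model: each ground point is left out in turn, the model is refit on the remaining points, and the prediction residual at the held-out point is accumulated. The least-squares line below is only a toy substitute for a rigorous HRSI orientation model, and the ground points are hypothetical:

```python
import math

def fit_line(pts):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def loocv_rmse(pts):
    """Leave-one-out cross-validation: every point serves as a check point
    once, so no ground points are 'wasted' on a fixed check set (the HOV
    drawback discussed above)."""
    sq = 0.0
    for i, (x, y) in enumerate(pts):
        rest = pts[:i] + pts[i + 1:]
        a, b = fit_line(rest)
        sq += (y - (a * x + b)) ** 2
    return math.sqrt(sq / len(pts))

# hypothetical ground points (image coordinate vs ground coordinate residual)
gcps = [(0.0, 0.1), (1.0, 1.0), (2.0, 2.1), (3.0, 2.9), (4.0, 4.2)]
rmse = loocv_rmse(gcps)
```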
Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G
2017-07-01
Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement demonstrating their superiority to conventional disc electrodes, in particular, in accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested they may decrease the truncation error resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. Full factorial design of analysis of variance was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation computed using a finite element method model for each of the combinations of levels of three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed and the obtained results suggest that all three factors have statistically significant effects in the model confirming the potential of using inter-ring distances as a means of improving accuracy of Laplacian estimation.
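The two response variables, Relative Error and Maximum Error of Laplacian estimation, can be illustrated with a finite-difference Laplacian applied to a known test function. This is an assumption made for illustration; the study computes these errors from FEM models of ring-electrode configurations:

```python
def laplacian_5pt(f, x, y, h):
    """Standard 5-point finite-difference estimate of the Laplacian of f."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / h ** 2

def error_metrics(f, true_lap, points, h):
    """Relative Error (RMS sense) and Maximum Error over a set of points,
    in the spirit of the response variables described above."""
    errs = [abs(laplacian_5pt(f, x, y, h) - true_lap(x, y)) for x, y in points]
    refs = [abs(true_lap(x, y)) for x, y in points]
    rel = (sum(e * e for e in errs) / sum(r * r for r in refs)) ** 0.5
    return rel, max(errs)

f = lambda x, y: x ** 4 + y ** 4              # test potential
true_lap = lambda x, y: 12 * x ** 2 + 12 * y ** 2
pts = [(1.0, 1.0), (1.5, 0.5), (2.0, 1.0)]
rel_coarse, max_coarse = error_metrics(f, true_lap, pts, h=0.5)
rel_fine, max_fine = error_metrics(f, true_lap, pts, h=0.1)
```

Shrinking h shrinks the truncation error here, analogous to how the inter-ring distance choices in the paper reduce the truncation error of the ring-electrode estimate.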
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J; Followill, D; Howell, R
2015-06-15
Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals.
Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.
Objective Assessment of Patient Inhaler User Technique Using an Audio-Based Classification Approach.
Taylor, Terence E; Zigel, Yaniv; Egan, Clarice; Hughes, Fintan; Costello, Richard W; Reilly, Richard B
2018-02-01
Many patients make critical user technique errors when using pressurised metered dose inhalers (pMDIs) which reduce the clinical efficacy of respiratory medication. Such critical errors include poor actuation coordination (poor timing of medication release during inhalation) and inhaling too fast (peak inspiratory flow rate over 90 L/min). Here, we present a novel audio-based method that objectively assesses patient pMDI user technique. The Inhaler Compliance Assessment device was employed to record inhaler audio signals from 62 respiratory patients as they used a pMDI with an In-Check Flo-Tone device attached to the inhaler mouthpiece. Using a quadratic discriminant analysis approach, the audio-based method generated a total frame-by-frame accuracy of 88.2% in classifying sound events (actuation, inhalation and exhalation). The audio-based method estimated the peak inspiratory flow rate and volume of inhalations with an accuracy of 88.2% and 83.94% respectively. It was detected that 89% of patients made at least one critical user technique error even after tuition from an expert clinical reviewer. This method provides a more clinically accurate assessment of patient inhaler user technique than standard checklist methods.
An Integrated Use of Topography with RSI in Gully Mapping, Shandong Peninsula, China
He, Fuhong; Wang, Tao; Gu, Lijuan; Li, Tao; Jiang, Weiguo; Shao, Hongbo
2014-01-01
Taking the Quickbird optical satellite imagery of the small watershed of Beiyanzigou valley of Qixia city, Shandong province, as the study data, we proposed a new method by using a fused image of topography with remote sensing imagery (RSI) to achieve a high precision interpretation of gully edge lines. The technique first transformed remote sensing imagery into HSV color space from RGB color space. Then the slope threshold values of gully edge line and gully thalweg were gained through field survey and the slope data were segmented using thresholding, respectively. Based on the fused image in combination with gully thalweg thresholding vectors, the gully thalweg thresholding vectors were amended. Lastly, the gully edge line might be interpreted based on the amended gully thalweg vectors, fused image, gully edge line thresholding vectors, and slope data. A testing region was selected in the study area to assess the accuracy. Then accuracy assessment of the gully information interpreted by both interpreting remote sensing imagery only and the fused image was performed using the deviation, kappa coefficient, and overall accuracy of error matrix. Compared with interpreting remote sensing imagery only, the overall accuracy and kappa coefficient are increased by 24.080% and 264.364%, respectively. The average deviations of gully head and gully edge line are reduced by 60.448% and 67.406%, respectively. The test results show the thematic and the positional accuracy of gully interpreted by new method are significantly higher. Finally, the error sources for interpretation accuracy by the two methods were analyzed. PMID:25302333
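The accuracy-assessment statistics used above, overall accuracy and the kappa coefficient from an error matrix, can be sketched as follows. The class counts are hypothetical, not the paper's test-region data:

```python
def error_matrix_stats(M):
    """Overall accuracy and kappa coefficient of a confusion (error) matrix;
    rows = interpreted class, columns = reference (ground-truth) class."""
    n = sum(sum(row) for row in M)
    diag = sum(M[i][i] for i in range(len(M)))
    po = diag / n                                  # overall accuracy
    # chance agreement from row and column totals
    pe = sum(sum(M[i]) * sum(row[i] for row in M)
             for i in range(len(M))) / n ** 2
    return po, (po - pe) / (1 - pe)

# hypothetical gully / non-gully / shadow error matrix for a test region
M = [[50, 5, 2],
     [4, 60, 3],
     [1, 2, 23]]
overall, kappa = error_matrix_stats(M)
```

Kappa discounts the agreement expected by chance, which is why the abstract reports the two numbers separately.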
Quantitative data standardization of X-ray based densitometry methods
NASA Astrophysics Data System (ADS)
Sergunova, K. A.; Petraikin, A. V.; Petrjajkin, F. A.; Akhmad, K. S.; Semenov, D. S.; Potrakhov, N. N.
2018-02-01
In the present work, the design of a special liquid phantom for assessing the accuracy of quantitative densitometric data is proposed. The dependencies between measured bone mineral density (BMD) values and the given reference values are presented for different X-ray based densitometry techniques. The resulting linear relationships make it possible to introduce correction factors that increase the accuracy of BMD measurement by the QCT, DXA and DECT methods, and to use them for standardization and comparison of measurements.
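A correction factor of the kind described can be obtained by fitting a least-squares line between a scanner's readings and the phantom's known values, then applying it to new readings. The BMD values below are hypothetical illustration data:

```python
def fit_calibration(measured, reference):
    """Least-squares line reference ≈ a*measured + b, giving the correction
    that maps a densitometer's BMD readings onto the phantom's known values."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(reference) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, reference))
    sxx = sum((x - mx) ** 2 for x in measured)
    a = sxy / sxx
    return a, my - a * mx

# hypothetical phantom inserts with known BMD vs one scanner's readings (mg/cm^3)
known = [50.0, 100.0, 200.0, 400.0]
readings = [55.0, 108.0, 212.0, 418.0]
a, b = fit_calibration(readings, known)
corrected = [a * r + b for r in readings]
```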
Efficacy of "Dimodent" sex predictive equation assessed in an Indian population.
Bharti, A; Angadi, P V; Kale, A D; Hallikerimath, S R
2011-07-01
Teeth are considered a useful adjunct for sex assessment and may play an important role in constructing a post-mortem profile. The Dimodent method is based on the high degree of sex discrimination obtained with the mandibular canine and the high correlation coefficients between mandibular canine and lateral incisor mesiodistal (MD) and buccolingual (BL) dimensions. This has been evaluated in French and Lebanese populations, but no study exists on its efficacy in Indians. Here, we have applied the 'Dimodent' equation to an Indian sample (100 males, 100 females; age range 19-27 years). Additionally, a population-specific Dimodent equation was derived using logistic regression analysis and applied to our sample. Also, the sex determination potential of MD and BL measurements of mandibular lateral incisors and canines, individually, was assessed. We found poor sex assessment accuracy using the Dimodent equation of Fronty (34.5%) in our Indian sample, but the population-specific equation gave better accuracy (72%). Thus, it appears that sexual dimorphism in teeth is population-specific; consequently, the Dimodent equation has to be derived individually in different populations for use in sex assessment. The mesiodistal measurement of the mandibular canine alone gave a marginally higher accuracy (72.5%); therefore, we suggest the use of mandibular canines alone rather than the Dimodent method.
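The simplest population-specific cut-off is a sectioning point, the midpoint of the male and female means, rather than the logistic regression used in the study. The canine widths below are hypothetical, for illustration of the principle only:

```python
def sectioning_point(male_vals, female_vals):
    """Midpoint of the two sex means: classify as male above, female at or below."""
    m = sum(male_vals) / len(male_vals)
    f = sum(female_vals) / len(female_vals)
    return (m + f) / 2

def accuracy(male_vals, female_vals, cut):
    """Fraction of the pooled sample assigned to the correct sex."""
    correct = (sum(1 for v in male_vals if v > cut)
               + sum(1 for v in female_vals if v <= cut))
    return correct / (len(male_vals) + len(female_vals))

# hypothetical mandibular canine mesiodistal widths (mm)
males = [7.1, 7.3, 6.9, 7.4, 7.0, 7.2]
females = [6.5, 6.7, 6.4, 6.8, 6.6, 7.05]
cut = sectioning_point(males, females)
```

The sectioning point, like the Dimodent equation itself, only transfers between populations if their dimorphism is similar, which is exactly the limitation the study reports.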
Wiebking, Ulrich; Pacha, Tarek Omar; Jagodzinski, Michael
2015-03-01
Ankle sprain injuries, often due to lateral ligamentous injury, are the most common sports traumatology conditions. Correct diagnosis requires an understanding of assessment tools with a high degree of diagnostic accuracy. There is still no clear consensus or standard method to differentiate between a ligament tear and an ankle sprain. In addition to clinical assessments, stress sonography, arthrometry and other methods are often performed simultaneously. These methods are often costly, however, and their accuracy is controversial. The aim of this study was to investigate three different measurement tools that can be used after a lateral ligament lesion of the ankle with injury of the anterior talofibular ligament to determine their diagnostic accuracy. Thirty patients were recruited for this study. The mean patient age was 35±14 years. There were 15 patients with a ligamentous rupture and 15 patients with an ankle sprain. We quantified two devices and one clinical assessment, for which we calculated the sensitivity and specificity: stress sonography according to Hoffmann, an arthrometer to investigate the 100 N talar drawer and maximum manual testing, and the clinical assessment of the anterior drawer test. High resolution sonography was used as the gold standard. The ultrasound-assisted device according to Hoffmann, with a 3 mm cut-off value, displayed a sensitivity of 0.27 and a specificity of 0.87. Using a 3.95 mm cut-off value, the arthrometer displayed a sensitivity of 0.8 and a specificity of 0.4. The clinical investigation sensitivity and specificity were 0.93 and 0.67, respectively. Different assessment methods for ankle rupture diagnoses are suggested in the literature; however, these methods lack reliable data to set investigation standards. Clinical examination under adequate analgesia seems to remain the most reliable tool to investigate ligamentous ankle lesions.
Further clinical studies with higher case numbers are necessary, however, to evaluate these findings and to measure the reliability. Copyright © 2014 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.
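The sensitivity and specificity figures reported above follow from a standard 2×2 contingency table. As an illustrative sketch in Python, with the raw counts back-calculated from the reported values and therefore hypothetical (14 of 15 ruptures detected, 10 of 15 sprains correctly excluded):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy metrics from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts consistent with the reported clinical-exam results
# (15 ruptures, 15 sprains; sensitivity 0.93 ~ 14/15, specificity 0.67 ~ 10/15)
sens, spec, ppv, npv = diagnostic_metrics(tp=14, fp=5, fn=1, tn=10)
print(round(sens, 2), round(spec, 2))  # 0.93 0.67
```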
Video and accelerometer-based motion analysis for automated surgical skills assessment.
Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Essa, Irfan
2018-03-01
Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study for basic surgical skill assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features-approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time series data. The proposed features are compared to existing methods of Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform, for surgical skills assessment. We report average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1 and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
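The approximate entropy feature described above can be sketched in its textbook form (window length m, tolerance r); this generic Python version, with illustrative parameter values, is not necessarily the authors' exact implementation:

```python
import math

def approx_entropy(u, m=2, r=0.2):
    """Approximate entropy (ApEn): regularity of fluctuations in a time
    series. Lower values indicate more regular, predictable motion.
    r is an absolute tolerance (often chosen as ~0.2x the series SD)."""
    n = len(u)

    def phi(m):
        # Embed the series into overlapping windows of length m
        windows = [u[i:i + m] for i in range(n - m + 1)]
        counts = []
        for wi in windows:
            # Chebyshev distance: two windows match if all samples are within r
            c = sum(1 for wj in windows
                    if max(abs(a - b) for a, b in zip(wi, wj)) <= r)
            counts.append(c / len(windows))
        return sum(math.log(c) for c in counts) / len(counts)

    return phi(m) - phi(m + 1)

# A perfectly periodic series is highly regular, so ApEn is close to 0
periodic = [0, 1] * 50
print(approx_entropy(periodic))  # close to 0
```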
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. Incorporating the spectral domain as an explanatory feature space for classification accuracy interpolation was done for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States, using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater.
Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
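The spatial-domain interpolation idea described above can be sketched as a kernel-weighted average of the 0/1 correctness indicators of test-sample pixels. This minimal Python version uses a Gaussian kernel; the kernel choice, bandwidth, and sample data are assumptions for illustration, not the paper's exact interpolation functions:

```python
import math

def predict_accuracy(px, py, sample, bandwidth=2.0):
    """Predict classification accuracy at pixel (px, py) as a Gaussian
    distance-weighted average of the 0/1 correctness of test-sample points."""
    num = den = 0.0
    for sx, sy, correct in sample:
        d2 = (px - sx) ** 2 + (py - sy) ** 2
        w = math.exp(-d2 / (2 * bandwidth ** 2))
        num += w * correct
        den += w
    return num / den

# Hypothetical test sample: (x, y, 1 if classified correctly else 0)
sample = [(0, 0, 1), (1, 0, 1), (10, 10, 0), (11, 10, 0)]
print(predict_accuracy(0.5, 0.0, sample))    # near 1: close to correct points
print(predict_accuracy(10.5, 10.0, sample))  # near 0: close to error points
```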
Assessing the accuracy and stability of variable selection ...
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substanti
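A backwards elimination loop of the kind described can be sketched model-agnostically. In this Python sketch the scoring function stands in for an RF out-of-bag accuracy refit on the candidate variable set, and the toy data are hypothetical; it is not the authors' exact procedure:

```python
def backward_eliminate(variables, score_fn, min_vars=1):
    """Generic backwards elimination: repeatedly drop the variable whose
    removal hurts the score the least, keeping the best reduced set seen.
    score_fn(vars) is assumed to return a score where higher is better
    (e.g., out-of-bag accuracy of a random forest refit on vars)."""
    current = list(variables)
    best_set, best_score = list(current), score_fn(current)
    while len(current) > min_vars:
        # Try removing each remaining variable; keep the least harmful drop
        trials = [(score_fn([v for v in current if v != drop]), drop)
                  for drop in current]
        score, drop = max(trials)
        current.remove(drop)
        if score >= best_score:  # ties favor the smaller variable set
            best_set, best_score = list(current), score
    return best_set, best_score

# Toy scorer: only variables "a" and "b" carry signal; the rest add noise
score = lambda vs: 0.5 + 0.2 * ("a" in vs) + 0.2 * ("b" in vs) - 0.01 * len(vs)
subset, s = backward_eliminate(["a", "b", "c", "d", "e"], score)
print(sorted(subset))  # ['a', 'b']
```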
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.
2011-01-01
Purpose We assessed the quantitation accuracy of small animal pinhole single photon emission computed tomography (SPECT) under the current preclinical settings, where image compensations are not routinely applied. Procedures The effects of several common image-degrading factors and imaging parameters on quantitation accuracy were evaluated using Monte-Carlo simulation methods. Typical preclinical imaging configurations were modeled, and quantitative analyses were performed based on image reconstructions without compensating for attenuation, scatter, and limited system resolution. Results Using mouse-sized phantom studies as examples, attenuation effects alone degraded quantitation accuracy by up to −18% (Tc-99m or In-111) or −41% (I-125). The inclusion of scatter effects changed the above numbers to −12% (Tc-99m or In-111) and −21% (I-125), respectively, indicating the significance of scatter in quantitative I-125 imaging. Region-of-interest (ROI) definitions have greater impacts on regional quantitation accuracy for small sphere sources as compared to attenuation and scatter effects. For the same ROI, SPECT acquisitions using pinhole apertures of different sizes could significantly affect the outcome, whereas the use of different radii-of-rotation yielded negligible differences in quantitation accuracy for the imaging configurations simulated. Conclusions We have systematically quantified the influence of several factors affecting the quantitation accuracy of small animal pinhole SPECT. In order to consistently achieve accurate quantitation within 5% of the truth, comprehensive image compensation methods are needed. PMID:19048346
Pérez de Isla, Leopoldo; Casanova, Carlos; Almería, Carlos; Rodrigo, José Luis; Cordeiro, Pedro; Mataix, Luis; Aubele, Ada Lia; Lang, Roberto; Zamorano, José Luis
2007-12-01
Several studies have shown a wide variability among different methods to determine the valve area in patients with rheumatic mitral stenosis. Our aim was to evaluate if 3D-echo planimetry is more accurate than the Gorlin method to measure the valve area. Twenty-six patients with mitral stenosis underwent 2D and 3D-echo echocardiographic examinations and catheterization. Valve area was estimated by different methods. A median value of the mitral valve area, obtained from the measurements of three classical non-invasive methods (2D planimetry, pressure half-time and PISA method), was used as the reference method and it was compared with 3D-echo planimetry and Gorlin's method. Our results showed that the accuracy of 3D-echo planimetry is superior to the accuracy of the Gorlin method for the assessment of mitral valve area. We should keep in mind the fact that 3D-echo planimetry may be a better reference method than the Gorlin method to assess the severity of rheumatic mitral stenosis.
Detection of the spatial accuracy of an O-arm in the region of surgical interest
NASA Astrophysics Data System (ADS)
Koivukangas, Tapani; Katisko, Jani P. A.; Koivukangas, John P.
2013-03-01
Medical imaging is an essential component of a wide range of surgical procedures [1]. For image guided surgical (IGS) procedures, medical images are the main source of information [2]. The IGS procedures rely largely on the obtained image data, so the data need to provide differentiation between normal and abnormal tissues, especially when other surgical guidance devices are used in the procedures. The image data also need to provide an accurate spatial representation of the patient [3]. This research has concentrated on the concept of accuracy assessment of IGS devices to meet the needs of quality assurance in the hospital environment. For this purpose, two precision engineered accuracy assessment phantoms have been developed as advanced materials and methods for the community. The phantoms were designed to mimic the volume of a human head as the common region of surgical interest (ROSI). This paper introduces the utilization of the phantoms in the spatial accuracy assessment of a commercial surgical 3D CT scanner, the O-Arm. The study presents methods and results of image quality detection of possible geometrical distortions in the region of surgical interest. The results show that in the pre-determined ROSI there are clear image distortions and artefacts when too-high imaging parameters are used to scan the objects. On the other hand, when using optimal parameters, the O-Arm causes minimal error in IGS accuracy. The detected spatial inaccuracy of the O-Arm with the parameters used was less than 1.00 mm.
Rochefort, Christian M; Buckeridge, David L; Tanguay, Andréanne; Biron, Alain; D'Aragon, Frédérick; Wang, Shengrui; Gallix, Benoit; Valiquette, Louis; Audet, Li-Anne; Lee, Todd C; Jayaraman, Dev; Petrucci, Bruno; Lefebvre, Patricia
2017-02-16
Adverse events (AEs) in acute care hospitals are frequent and associated with significant morbidity, mortality, and costs. Measuring AEs is necessary for quality improvement and benchmarking purposes, but current detection methods lack accuracy, efficiency, and generalizability. The growing availability of electronic health records (EHR) and the development of natural language processing techniques for encoding narrative data offer an opportunity to develop potentially better methods. The purpose of this study is to determine the accuracy and generalizability of using automated methods for detecting three high-incidence and high-impact AEs from EHR data: a) hospital-acquired pneumonia, b) ventilator-associated event, and c) central line-associated bloodstream infection. This validation study will be conducted among medical, surgical and ICU patients admitted between 2013 and 2016 to the Centre hospitalier universitaire de Sherbrooke (CHUS) and the McGill University Health Centre (MUHC), which has both French and English sites. A random 60% sample of CHUS patients will be used for model development purposes (cohort 1, development set). Using a random sample of these patients, a reference standard assessment of their medical chart will be performed. Multivariate logistic regression and the area under the curve (AUC) will be employed to iteratively develop and optimize three automated AE detection models (i.e., one per AE of interest) using EHR data from the CHUS. These models will then be validated on a random sample of the remaining 40% of CHUS patients (cohort 1, internal validation set) using chart review to assess accuracy. The most accurate models developed and validated at the CHUS will then be applied to EHR data from a random sample of patients admitted to the MUHC French site (cohort 2) and English site (cohort 3), a critical requirement given the use of narrative data, and accuracy will be assessed using chart review.
Generalizability will be determined by comparing AUCs from cohorts 2 and 3 to those from cohort 1. This study will likely produce more accurate and efficient measures of AEs. These measures could be used to assess the incidence rates of AEs, evaluate the success of preventive interventions, or benchmark performance across hospitals.
NASA Astrophysics Data System (ADS)
Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.
2014-11-01
This paper assesses the suitability of 8-band Worldview-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based classification with minimum distance (MD) and maximum likelihood (MLC) and object-based classification with the Random Forest (RF) algorithm for this task. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle stage, and early stage of avocado crops, bare land, two types of natural forests, and water body. To examine the contribution of the four new spectral bands of the WV2 sensor, all the tested classifications were carried out with and without the four new spectral bands. Classification accuracy assessment results show that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs 83.59%; pixel-based MD: 69.37% vs 67.2%; pixel-based MLC: 64.03% vs 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.
A Critical Review of Some Qualitative Research Methods Used to Explore Rater Cognition
ERIC Educational Resources Information Center
Suto, Irenka
2012-01-01
Internationally, many assessment systems rely predominantly on human raters to score examinations. Arguably, this facilitates the assessment of multiple sophisticated educational constructs, strengthening assessment validity. It can introduce subjectivity into the scoring process, however, engendering threats to accuracy. The present objectives…
NASA Astrophysics Data System (ADS)
Busto, S.; Ferrín, J. L.; Toro, E. F.; Vázquez-Cendón, M. E.
2018-01-01
In this paper, the projection hybrid FV/FE method presented in [1] is extended to account for species transport equations. Furthermore, turbulent regimes are also considered thanks to the k-ε model. Regarding the transport diffusion stage, new schemes of high order of accuracy are developed. The CVC Kolgan-type scheme and the ADER methodology are extended to 3D. The latter is modified in order to profit from the dual mesh employed by the projection algorithm, and the derivatives involved in the diffusion term are discretized using a Galerkin approach. The accuracy and stability analysis of the new method is carried out for the advection-diffusion-reaction equation. Within the projection stage, the pressure correction is computed by a piecewise linear finite element method. Numerical results are presented, aimed at verifying the formal order of accuracy of the scheme and at assessing the performance of the method on several realistic test problems.
Testing a simple field method for assessing nitrate removal in riparian zones
Philippe Vidon; Michael G. Dosskey
2008-01-01
Being able to identify riparian sites that function better for nitrate removal from groundwater is critical to using efficiently the riparian zones for water quality management. For this purpose, managers need a method that is quick, inexpensive, and accurate enough to enable effective management decisions. This study assesses the precision and accuracy of a simple...
Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria
2014-10-09
Magnetic and inertial measurement units are an emerging technology for obtaining the 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of stochastic (Extended Kalman Filter) and complementary (non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading, and when the movement exhibited stationary phases and evenly distributed 3D rotations, occurred in a small volume, and lasted longer than approximately 20 s. These results were independent of the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided.
Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian
2015-01-01
Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table to which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2-minute motion trials (2MT) and 12-minute multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use.
However, improvement in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their full potential in capturing clinical outcomes. PMID:25811838
Web Service for Positional Quality Assessment: the Wps Tier
NASA Astrophysics Data System (ADS)
Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.
2015-08-01
In the field of spatial data, more and more information becomes available every day, but we still have very little information about the quality of that data. We consider that the automation of spatial data quality assessment is a true need for the geomatics sector, and that automation is possible by means of web processing services (WPS) and the application of specific assessment procedures. In this paper we propose and develop a WPS tier centered on the automation of positional quality assessment. An experiment using the NSSDA positional accuracy method is presented. The experiment involves the uploading by the client of two datasets (reference and evaluation data). The process determines homologous pairs of points (by distance) and calculates the value of positional accuracy under the NSSDA standard, then generates a small report that is sent to the client. From our experiment, we reached some conclusions on the advantages and disadvantages of WPSs when applied to the automation of spatial data accuracy assessment.
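The NSSDA computation described (homologous point pairs, then the horizontal accuracy statistic at the 95% confidence level) can be sketched as follows. The 1.7308 factor is the standard NSSDA multiplier for roughly equal error in x and y; the coordinate pairs below are hypothetical:

```python
import math

def nssda_horizontal_accuracy(pairs):
    """NSSDA horizontal positional accuracy at the 95% confidence level.
    pairs holds homologous points as ((x_eval, y_eval), (x_ref, y_ref)).
    Assumes roughly equal error in x and y, as the NSSDA standard does
    when it applies the 1.7308 factor to the radial RMSE."""
    n = len(pairs)
    sq = sum((xe - xr) ** 2 + (ye - yr) ** 2
             for (xe, ye), (xr, yr) in pairs)
    rmse_r = math.sqrt(sq / n)
    return 1.7308 * rmse_r

# Hypothetical homologous pairs (evaluation vs reference coordinates, metres)
pairs = [((10.0, 20.3), (10.2, 20.0)), ((35.1, 4.9), (35.0, 5.0)),
         ((7.8, 61.0), (8.0, 61.2))]
print(round(nssda_horizontal_accuracy(pairs), 3))  # 0.479
```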
NASA Astrophysics Data System (ADS)
Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie
2018-04-01
The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterization of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the widely used classification methods. In the geometry-based method, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how the scale(s) used affect the classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and classified the points into leaf and wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier is higher than 30 for each tree.
Tseng, Dorine S J; van Santvoort, Hjalmar C; Fegrachi, Samira; Besselink, Marc G; Zuithoff, Nicolaas P A; Borel Rinkes, Inne H; van Leeuwen, Maarten S; Molenaar, I Quintus
2014-12-01
Computed tomography (CT) is the most widely used method to assess resectability of pancreatic and peri-ampullary cancer. One of the contra-indications for curative resection is the presence of extra-regional lymph node metastases. This meta-analysis investigates the accuracy of CT in assessing extra-regional lymph node metastases in pancreatic and peri-ampullary cancer. We systematically reviewed the literature according to the PRISMA guidelines. Studies reporting on CT assessment of extra-regional lymph nodes in patients undergoing pancreatoduodenectomy were included. Data on baseline characteristics, CT investigations and histopathological outcomes were extracted. Diagnostic accuracy, positive predictive value (PPV), negative predictive value (NPV), sensitivity and specificity were calculated for individual studies and pooled data. After screening, 4 cohort studies reporting on CT findings and histopathological outcome in 157 patients with pancreatic or peri-ampullary cancer were included. Overall, diagnostic accuracy, specificity and NPV varied from 63-81%, 80-100% and 67-90%, respectively. However, PPV and sensitivity ranged from 0 to 100% and 0-38%. Pooled sensitivity, specificity, PPV and NPV were 25%, 86%, 28% and 84%, respectively. CT has a low diagnostic accuracy in assessing extra-regional lymph node metastases in pancreatic and peri-ampullary cancer. Therefore, suspicion of extra-regional lymph node metastases on CT alone should not be considered a contra-indication for exploration. Copyright © 2014 Elsevier Ltd. All rights reserved.
Influence of neighbourhood information on 'Local Climate Zone' mapping in heterogeneous cities
NASA Astrophysics Data System (ADS)
Verdonck, Marie-Leen; Okujeni, Akpona; van der Linden, Sebastian; Demuzere, Matthias; De Wulf, Robert; Van Coillie, Frieke
2017-10-01
Local climate zone (LCZ) mapping is an emerging field in urban climate research. LCZs potentially provide an objective framework to assess urban form and function worldwide. The scheme is currently being used to globally map LCZs as a part of the World Urban Database and Access Portal Tools (WUDAPT) initiative. So far, most of the LCZ maps lack proper quantitative assessment, challenging the generic character of the WUDAPT workflow. When using the standard method introduced by the WUDAPT community, difficulties arose concerning the built zones due to high levels of heterogeneity. To overcome this problem, a contextual classifier is adopted in the mapping process. This paper quantitatively assesses the influence of neighbourhood information on the LCZ mapping result of three cities in Belgium: Antwerp, Brussels and Ghent. Overall accuracies for the maps were respectively 85.7 ± 0.5, 79.6 ± 0.9 and 90.2 ± 0.4%. The approach presented here results in overall accuracies of 93.6 ± 0.2, 92.6 ± 0.3 and 95.6 ± 0.3% for Antwerp, Brussels and Ghent. The results thus indicate a positive influence of neighbourhood information for all study areas, with an increase in overall accuracy of 7.9, 13.0 and 5.4 percentage points. This paper reaches two main conclusions. Firstly, evidence was introduced on the relevance of a quantitative accuracy assessment in LCZ mapping, showing that the accuracies reported in previous papers are not easily achieved. Secondly, the method presented in this paper proves to be highly effective in Belgian cities, and given its open character shows promise for application in other heterogeneous cities worldwide.
Validation of Body Volume Acquisition by Using Elliptical Zone Method.
Chiu, C-Y; Pease, D L; Fawkner, S; Sanders, R H
2016-12-01
The elliptical zone method (E-Zone) can be used to obtain reliable body volume data, including total body volume and segmental volumes, with inexpensive and portable equipment. The purpose of this research was to assess the accuracy of body volume data obtained from E-Zone by comparing them with those acquired from the 3D photonic scanning method (3DPS). 17 male participants with diverse somatotypes were recruited. Each participant was scanned twice on the same day by a 3D whole-body scanner and photographed twice for the E-Zone analysis. The body volume data acquired from 3DPS were regarded as the reference against which the accuracy of E-Zone was assessed. The relative technical error of measurement (TEM) of total body volume estimations was around 3% for E-Zone. E-Zone can estimate the segmental volumes of the upper torso, lower torso, thigh, shank, upper arm and lower arm accurately (relative TEM<10%), but the accuracy for small segments, including the neck, hand and foot, was poor. In summary, E-Zone provides a reliable, inexpensive, portable, and simple method to obtain reasonable estimates of total body volume and to indicate segmental volume distribution. © Georg Thieme Verlag KG Stuttgart · New York.
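The relative TEM used above to judge volume-estimation accuracy can be computed with the standard two-trial TEM formula; this is the common definition, not necessarily the authors' exact computation, and the volume data below are hypothetical:

```python
import math

def relative_tem(trial1, trial2):
    """Technical error of measurement (TEM) for two repeated trials, and the
    relative TEM (%TEM) used to judge measurement accuracy:
    TEM = sqrt(sum(d_i^2) / (2n)), %TEM = 100 * TEM / grand mean."""
    n = len(trial1)
    tem = math.sqrt(sum((a - b) ** 2 for a, b in zip(trial1, trial2)) / (2 * n))
    grand_mean = (sum(trial1) + sum(trial2)) / (2 * n)
    return 100 * tem / grand_mean

# Hypothetical repeated total-body-volume estimates (litres) for 5 subjects
v1 = [72.1, 65.4, 80.2, 58.9, 90.3]
v2 = [71.5, 66.0, 79.1, 59.4, 91.0]
print(round(relative_tem(v1, v2), 2))  # 0.7
```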
Methodological considerations and future insights for 24-hour dietary recall assessment in children.
Foster, Emma; Bradley, Jennifer
2018-03-01
Dietary assessment has come under much criticism of late to the extent that it has been questioned whether self-reported methods of dietary assessment are worth doing at all. Widespread under-reporting of energy intake, limitations due to memory, changes to intake due to the burden of recording and social desirability bias all impact significantly on the accuracy of the dietary information collected. Under-reporting of energy intakes has long been recognized as a problem in dietary research with doubly labeled water measures of energy expenditure uncovering significant under-reporting of energy intakes across different populations and different dietary assessment methods. In this review we focus on dietary assessment with children with particular attention on the 24-hour dietary recall method. We look at the level of under-reporting of energy intakes and how this tends to change with age, gender and body mass index. We discuss potential alternatives to self-reported (or proxy-reported) dietary assessment methods with children, such as biomarkers, and how these do not enable the collection of information important to public health nutrition such as the cooking method, the mixture of foods eaten together or the context in which the food is consumed. We conclude that despite all of the challenges and flaws, the data collected using self-reported dietary assessment methods are extremely valuable. Research into dietary assessment methodology has resulted in significant increases in our understanding of the limitations of self-reported methods and progressive improvements in the accuracy of the data collected. Hence, future investment in dietary surveillance and in improving self-reported methods of intake can make vital contributions to our understanding of dietary intakes and are thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.
The assessment of accuracy of inner shapes manufactured by FDM
NASA Astrophysics Data System (ADS)
Gapiński, Bartosz; Wieczorowski, Michał; Bąk, Agata; Domínguez, Alejandro Pereira; Mathia, Thomas
2018-05-01
3D printing has created totally new manufacturing possibilities. It is possible, e.g., to produce closed inner shapes with different geometrical features. Unfortunately, traditional methods are not suitable to verify the manufacturing accuracy, because it would be necessary to cut the workpieces. In this paper, the possibilities of applying computed tomography (x-ray micro-CT) for accuracy assessment of inner shapes are presented. This was already reported in some papers. For the research, hollow cylindrical samples with a 20 mm diameter and 300 mm length were manufactured by means of FDM. A sphere, a cone and a cube were put inside these elements. All measurements were made with the application of CT. The measurement results enable us to obtain a full geometrical image of both the inner and outer surfaces of the cylinder as well as the shapes of the inner elements. Additionally, it is possible to inspect the structure of a printed element: the size and location of the supporting net and all the other supporting elements necessary to hold up the walls created over empty spaces. The results obtained with this method were compared with the CAD models which were the source of data for 3D printing. This in turn made it possible to assess the manufacturing accuracy of the particular figures inserted into the cylinders. The influence of the location of the inner supporting walls on shape deformation was also investigated. The results obtained this way show how important CT can be in the assessment of 3D-printed objects.
Soydan, Lydia C.; Kellihan, Heidi B.; Bates, Melissa L.; Stepien, Rebecca L.; Consigny, Daniel W.; Bellofiore, Alessandro; Francois, Christopher J.; Chesler, Naomi C.
2015-01-01
Objectives To compare noninvasive estimates of pulmonary artery pressure (PAP) obtained via echocardiography (ECHO) to invasive measurements of PAP obtained during right heart catheterization (RHC) across a wide range of PAP, to examine the accuracy of estimating right atrial pressure via ECHO (RAPECHO) compared to RAP measured by catheterization (RAPRHC), and to determine if adding RAPECHO improves the accuracy of noninvasive PAP estimations. Animals Fourteen healthy female beagle dogs. Methods ECHO and RHC performed at various data collection points, both at normal PAP and increased PAP (generated by microbead embolization). Results Noninvasive estimates of PAP were moderately but significantly correlated with invasive measurements of PAP. A high degree of variance was noted for all estimations, with increased variance at higher PAP. The addition of RAPECHO improved correlation and bias in all cases. RAPRHC was significantly correlated with RAPECHO and with subjectively assessed right atrial size (RA sizesubj). Conclusions Spectral Doppler assessments of tricuspid and pulmonic regurgitation are imperfect methods for predicting PAP as measured by catheterization despite an overall moderate correlation between invasive and noninvasive values. Noninvasive measurements may be better utilized as part of a comprehensive assessment of PAP in canine patients. RAPRHC appears best estimated based on subjective assessment of RA size. Including estimated RAPECHO in estimates of PAP improves the correlation and relatedness between noninvasive and invasive measures of PAP, but notable variability in accuracy of estimations persists. PMID:25601540
ERIC Educational Resources Information Center
Vivo, Juana-Maria; Franco, Manuel
2008-01-01
This article attempts to present a novel application of a method of measuring accuracy for academic success predictors that could be used as a standard. This procedure is known as the receiver operating characteristic (ROC) curve, which comes from statistical decision techniques. The statistical prediction techniques provide predictor models and…
Damage detection of structures with detrended fluctuation and detrended cross-correlation analyses
NASA Astrophysics Data System (ADS)
Lin, Tzu-Kang; Fajri, Haikal
2017-03-01
Recently, fractal analysis has shown potential for damage detection and assessment in fields such as biomedical and mechanical engineering. Owing to its practicability in interpreting irregular, complex, and disordered phenomena, a structural health monitoring (SHM) system based on detrended fluctuation analysis (DFA) and detrended cross-correlation analysis (DCCA) is proposed. First, damage conditions can be swiftly detected by evaluating ambient vibration signals measured from a structure through DFA. Damage locations can then be determined by analyzing the cross-correlation of signals from different floors with DCCA. A damage index based on multi-scale DCCA curves is also proposed to improve damage localization accuracy. To verify the performance of the proposed SHM system, a four-story numerical model was used to simulate various damage conditions at different noise levels. Furthermore, an experimental verification was conducted on a seven-story benchmark structure to assess potential damage. The results revealed that the DFA method detected the damage conditions satisfactorily, and damage locations could be identified through the DCCA method with an accuracy of 75%. Moreover, damage locations could be assessed with the damage index method at an improved accuracy of 87.5%. The proposed SHM system has promising application in practical implementations.
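The DFA step described above can be sketched as follows. This is generic textbook DFA, not the authors' implementation; the function name and scale choices are assumptions.

```python
import numpy as np

def dfa(signal, scales):
    """Detrended fluctuation analysis: returns the scaling exponent alpha."""
    x = np.asarray(signal, dtype=float)
    y = np.cumsum(x - x.mean())            # integrated profile
    fluct = []
    for n in scales:
        n_win = len(y) // n                # non-overlapping windows of length n
        segs = y[: n_win * n].reshape(n_win, n)
        t = np.arange(n)
        # remove the local linear trend from each window
        detrended = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        fluct.append(np.sqrt(np.mean(np.square(detrended))))  # RMS fluctuation F(n)
    # alpha is the slope of log F(n) versus log n
    alpha = np.polyfit(np.log(scales), np.log(fluct), 1)[0]
    return alpha

# Uncorrelated white noise should give alpha close to 0.5;
# damage-induced changes in signal correlation shift this exponent.
rng = np.random.default_rng(0)
alpha = dfa(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256])
```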
NASA Astrophysics Data System (ADS)
Shepson, P. B.; Lavoie, T. N.; Kerlo, A. E.; Stirm, B. H.
2016-12-01
Understanding the contribution of anthropogenic activities to atmospheric greenhouse gas concentrations requires an accurate characterization of emission sources. Previously, we have reported the use of a novel aircraft-based mass balance measurement technique to quantify greenhouse gas emission rates from point and area sources; however, the accuracy of this approach has not been evaluated to date. Here, an assessment of method accuracy and precision was performed by conducting a series of six aircraft-based mass balance experiments at a power plant in southern Indiana and comparing the calculated CO2 emission rates to the hourly emission measurements reported by continuous emissions monitoring systems (CEMS) installed directly in the exhaust stacks at the facility. For all flights, CO2 emissions were quantified before the CEMS data were released online to ensure unbiased analysis. Additionally, we assess the uncertainties introduced into the final emission rate by our analysis method, which employs a statistical kriging model to interpolate and extrapolate the CO2 fluxes across the flight transects from the ground to the top of the boundary layer. Subsequently, using the results from these flights combined with the known emissions reported by the CEMS, we perform an inter-model comparison of alternative kriging methods to evaluate the performance of the kriging approach.
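The mass-balance principle underlying these flights reduces to integrating the wind-driven flux of the concentration enhancement over the downwind transect plane. A minimal sketch under simplifying assumptions (uniform perpendicular wind and an already-interpolated grid; the kriging interpolation the authors describe is omitted):

```python
import numpy as np

def mass_balance_emission(conc_enh, wind_perp, dy, dz):
    """Aircraft mass-balance emission rate (sketch).

    conc_enh: 2-D grid [z, y] of CO2 enhancement above background (kg/m^3)
              across the downwind transect plane.
    wind_perp: perpendicular wind speed (m/s), assumed uniform here.
    dy, dz: grid spacing (m), horizontal and vertical.
    Returns the emission rate in kg/s.
    """
    # flux through each grid cell, summed over the plane
    return float(np.sum(conc_enh) * wind_perp * dy * dz)

# Toy example: uniform 1e-6 kg/m^3 enhancement on a 10x10 grid of
# 100 m x 50 m cells in a 5 m/s wind.
rate = mass_balance_emission(np.full((10, 10), 1e-6), wind_perp=5.0,
                             dy=100.0, dz=50.0)
```

In practice the wind varies with height, so the product of wind and enhancement is formed cell by cell before summing.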
Developing collaborative classifiers using an expert-based model
Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan
2009-01-01
This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions test and apply alternate methods, repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods, since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support, since selected low-accuracy classifiers can easily be replaced with others without affecting classification accuracy in high-accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator of human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.
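The fall-through logic of such a multi-stage strategy can be sketched generically. The confidence threshold, the scikit-learn-style `predict_proba` interface, and the toy stand-in classifier below are illustrative assumptions, not the expert-based evaluation the paper describes.

```python
import numpy as np

class _Uniform:
    """Toy stand-in classifier: fixed class probabilities for every sample."""
    def __init__(self, p):
        self.p = np.asarray(p, dtype=float)

    def predict_proba(self, X):
        return np.tile(self.p, (len(X), 1))

def hierarchical_classify(features, classifiers, confidence_threshold=0.8):
    """Multi-stage classification: each stage labels only the samples it is
    confident about; the rest fall through to the next classifier.
    Unlabelled samples keep the label -1.
    """
    labels = np.full(len(features), -1)
    for clf in classifiers:
        todo = np.flatnonzero(labels == -1)   # still-unclassified samples
        if todo.size == 0:
            break
        proba = clf.predict_proba(features[todo])
        accept = proba.max(axis=1) >= confidence_threshold
        labels[todo[accept]] = proba.argmax(axis=1)[accept]
    return labels

# The first (weak) stage accepts nothing at threshold 0.8;
# the second stage labels everything as class 1.
X = np.zeros((5, 3))
labels = hierarchical_classify(X, [_Uniform([0.6, 0.4]), _Uniform([0.1, 0.9])])
```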
A Comparison of Lifting-Line and CFD Methods with Flight Test Data from a Research Puma Helicopter
NASA Technical Reports Server (NTRS)
Bousman, William G.; Young, Colin; Toulmay, Francois; Gilbert, Neil E.; Strawn, Roger C.; Miller, Judith V.; Maier, Thomas H.; Costes, Michel; Beaumier, Philippe
1996-01-01
Four lifting-line methods were compared with flight test data from a research Puma helicopter and the accuracy assessed over a wide range of flight speeds. Hybrid Computational Fluid Dynamics (CFD) methods were also examined for two high-speed conditions. A parallel analytical effort was performed with the lifting-line methods to assess the effects of modeling assumptions and this provided insight into the adequacy of these methods for load predictions.
[Navigation in implantology: Accuracy assessment regarding the literature].
Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József
2016-06-01
Our objective was to assess the literature regarding the accuracy of different static guided systems. An electronic literature search yielded 661 articles. After reviewing 139 articles, the authors chose 52 for full-text evaluation; 24 studies involved accuracy measurements. Fourteen of the selected references were clinical and ten were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 measurements, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequences) in terms of deviation at the entry point, at the apex, and in angulation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. To draw dependable conclusions and to further evaluate the parameters used for accuracy measurements, randomized, controlled single- or multi-center clinical trials are necessary.
Validity and consistency assessment of accident analysis methods in the petroleum industry.
Ahmadi, Omran; Mortazavi, Seyed Bagher; Khavanin, Ali; Mokarami, Hamidreza
2017-11-17
Accident analysis is the main aspect of accident investigation; it involves connecting different causes in a procedural way. It is therefore important to use valid and reliable methods when investigating the causal factors of accidents, especially the noteworthy ones. This study aimed to assess the accuracy (sensitivity index [SI]) and consistency of the six accident analysis methods most commonly used in the petroleum industry. To evaluate the methods, two real case studies (a process safety accident and a personal accident) from the petroleum industry were analyzed by 10 assessors, and the accuracy and consistency of the methods were then evaluated. The assessors were trained in a workshop on accident analysis methods. The systematic cause analysis technique and the bowtie method gained the greatest SI scores for the personal and the process safety accident, respectively. The best average consistency of a single method (based on 10 independent assessors) was in the region of 70%. This study confirmed that the application of methods with pre-defined causes and a logic tree can enhance the sensitivity and consistency of accident analysis.
Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning.
Wang, Jinhua; Yang, Xi; Cai, Hongmin; Tan, Wanchang; Jin, Cangzheng; Li, Li
2016-06-07
Microcalcification is an effective indicator of early breast cancer. To improve the diagnostic accuracy of microcalcifications, this study evaluates the performance of deep learning-based models on large datasets for its discrimination. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracies of microcalcifications and breast masses, either in isolation or combination, for classifying breast lesions. Performances were compared to benchmark models. Our deep learning model achieved a discriminative accuracy of 87.3% if microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach to detect microcalcifications and masses simultaneously. This may have clinical value for early detection and treatment of breast cancer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidsmeier, T.; Koehl, R.; Lanham, R.
2008-07-15
The current design and fabrication process for RERTR fuel plates utilizes film radiography for nondestructive testing and characterization. Digital radiographic methods offer potential increases in efficiency and accuracy. The traditional and digital radiographic methods are described and demonstrated on a fuel plate constructed with an average of 51% fuel by volume using the dispersion method. Fuel loading data from each method are analyzed and compared to a third, baseline method to assess accuracy. The new digital method is shown to be more accurate, to save hours of work, and to provide additional information not easily available with the traditional method. Possible further improvements suggested by the new digital method are also raised. (author)
Gimenez, Thais; Braga, Mariana Minatel; Raggio, Daniela Procida; Deery, Chris; Ricketts, David N; Mendes, Fausto Medeiros
2013-01-01
Fluorescence-based methods have been proposed to aid caries lesion detection. Summarizing and analysing the findings of studies on fluorescence-based methods could clarify their real benefits. We aimed to perform a comprehensive systematic review and meta-analysis to evaluate the accuracy of fluorescence-based methods in detecting caries lesions. Two independent reviewers searched PubMed, Embase and Scopus through June 2012 to identify published articles; other sources were checked to identify non-published literature. STUDY ELIGIBILITY CRITERIA, PARTICIPANTS AND DIAGNOSTIC METHODS: The eligibility criteria were studies that: (1) assessed the accuracy of fluorescence-based methods in detecting caries lesions on occlusal, approximal or smooth surfaces, in primary or permanent human teeth, in a laboratory or clinical setting; (2) used a reference standard; and (3) reported sufficient data on the sample size and the accuracy of the methods. A diagnostic 2×2 table was extracted from the included studies to calculate the pooled sensitivity, specificity and overall accuracy parameters (diagnostic odds ratio and summary receiver-operating characteristic curve). The analyses were performed separately for each method and for different characteristics of the studies. The quality of the studies and heterogeneity were also evaluated. Seventy-five of the 434 articles initially identified met the inclusion criteria; the search of the grey or non-published literature did not identify any further studies. In general, the analysis demonstrated that fluorescence-based methods tend to have similar accuracy for all types of teeth, dental surfaces and settings. There was a trend of better performance of fluorescence methods in detecting more advanced caries lesions. We also observed moderate to high heterogeneity and evidence of publication bias.
Fluorescence-based devices have similar overall performance; however, better accuracy in detecting more advanced caries lesions has been observed.
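The per-study quantities pooled in such a meta-analysis come from the diagnostic 2×2 table. A minimal sketch with illustrative counts (the pooled estimates themselves require bivariate meta-analytic models, which are not shown):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity and diagnostic odds ratio from a 2x2 table.

    tp/fn: diseased surfaces called positive/negative by the method;
    fp/tn: sound surfaces called positive/negative.
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)   # diagnostic odds ratio
    return sens, spec, dor

# Illustrative single-study counts, not data from the review.
sens, spec, dor = diagnostic_accuracy(tp=80, fp=10, fn=20, tn=90)
```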
Delatour, Vincent; Lalere, Beatrice; Saint-Albin, Karène; Peignaux, Maryline; Hattchouel, Jean-Marc; Dumont, Gilles; De Graeve, Jacques; Vaslin-Reimann, Sophie; Gillery, Philippe
2012-11-20
The reliability of biological tests is a major issue for patient care and public health, with high economic stakes. Reference methods, as well as regular external quality assessment schemes (EQAS), are needed to monitor the analytical performance of field methods. However, the commutability of control materials is a major concern when assessing method accuracy. To overcome material non-commutability, we investigated the possibility of using lyophilized serum samples together with a limited number of frozen serum samples to assign matrix-corrected target values, taking glucose assays as an example. The trueness of current glucose assays was first measured against a primary reference method using human frozen sera. Methods using hexokinase and glucose oxidase with spectroreflectometric detection proved very accurate, with bias ranging between -2.2% and +2.3%; the bias of methods using glucose oxidase with spectrophotometric detection was +4.5%. The matrix-related bias of the lyophilized materials was then determined and ranged from +2.5% to -14.4%. Matrix-corrected target values were assigned and used to assess the trueness of 22 sub-peer groups. We demonstrated that matrix-corrected target values can be a valuable tool for assessing field method accuracy in large-scale surveys where commutable materials are not available in sufficient amounts at acceptable cost. Copyright © 2012 Elsevier B.V. All rights reserved.
Lee, Hoe C.; Yanting Chee, Derserri; Selander, Helena; Falkmer, Torbjorn
2012-01-01
Background Licence retention or cancellation is currently determined through on-road driving tests. Previous research has shown that occupational therapists frequently assess drivers' visual attention while sitting in the back seat on the opposite side of the driver. Since the eyes of the driver are not always visible, assessment by eye contact becomes problematic, and such procedural drawbacks may challenge the validity and reliability of visual attention assessments. The aim of the study was to establish the accuracy, in terms of correctly classified attention, and the inter-rater reliability of back-seat driving assessments of visual attention. Furthermore, the study aimed to establish how much an additional mirror on the windscreen, providing eye contact between the assessor and the driver, would enhance the accuracy of the visual attention assessment. Methods Two drivers with Parkinson's disease (PD) and six control drivers drove a fixed route in a driving simulator while wearing a head-mounted eye tracker. The eye tracker data showed where foveal visual attention was actually directed; these data were time-stamped and compared with the simultaneous manual scoring of the drivers' visual attention. For four of the drivers, one with Parkinson's disease, a mirror on the windscreen was set up to provide eye contact between the driver and the assessor. Inter-rater reliability was assessed with one of the Parkinson drivers driving, but without the mirror. Results Without the mirror, the overall accuracy was 56% when assessing the three control drivers; with the mirror it was 83%. However, for the PD driver without the mirror the accuracy was 94%, whereas for the PD driver with the mirror it was 90%. With respect to inter-rater reliability, a 73% agreement was found.
Conclusion If the final outcome of a driving assessment depends on the subcategory of a protocol assessing visual attention, we suggest the use of an additional mirror to establish eye contact between the assessor and the driver. The clinician's on-road observations should not be a standalone assessment; instead, eye trackers should be employed for further analysis and correlation in cases where there is doubt about a driver's attention. PMID:22461850
Yatake, Hidetoshi; Sawai, Yuka; Nishi, Toshio; Nakano, Yoshiaki; Nishimae, Ayaka; Katsuda, Toshizo; Yabunaka, Koichi; Takeda, Yoshihiro; Inaji, Hideo
2017-07-01
The objective of the study was to compare direct measurement with a conventional method for evaluating clip placement in stereotactic vacuum-assisted breast biopsy (ST-VAB), and to evaluate the accuracy of clip placement using the direct method. Accuracy of clip placement was assessed by measuring the distance from a residual calcification of the targeted calcification cluster to the clip on a mammogram after ST-VAB. Distances in the craniocaudal (CC) and mediolateral oblique (MLO) views were measured in 28 subjects with mammograms recorded twice or more after ST-VAB. The difference in distance between the first and second measurements was defined as the reproducibility and was compared with that of a conventional method using a mask system with transparent film overlaid on the mammogram. The 3D clip-to-calcification distance was measured using the direct method in 71 subjects. The reproducibility of the direct method was higher than that of the conventional method in both CC and MLO views (P = 0.002, P < 0.001). The median 3D clip-to-calcification distance was 2.8 mm, with an interquartile range of 2.0-4.8 mm and a range of 1.1-36.3 mm. The direct method used in this study was more accurate than the conventional method, and gave a median 3D distance of 2.8 mm between the calcification and the clip.
Kielar, Maciej
2016-01-01
Aim The purpose of the study was to improve the ultrasonographic assessment of the anterior cruciate ligament by an inclusion of a dynamic element. The proposed functional modification aims to restore normal posterior cruciate ligament tension, which is associated with a visible change in the ligament shape. This method reduces the risk of an error resulting from subjectively assessing the shape of the posterior cruciate ligament. It should be also emphasized that the method combined with other ultrasound anterior cruciate ligament assessment techniques helps increase diagnostic accuracy. Methods Ultrasonography is used as an adjunctive technique in the diagnosis of anterior cruciate ligament injury. The paper presents a sonographic technique for the assessment of suspected anterior cruciate ligament insufficiency supplemented by the use of a dynamic examination. This technique can be recommended as an additional procedure in routine ultrasound diagnostics of anterior cruciate ligament injuries. Results Supplementing routine ultrasonography with the dynamic assessment of posterior cruciate ligament shape changes in patients with suspected anterior cruciate ligament injury reduces the risk of subjective errors and increases diagnostic accuracy. This is important especially in cases of minor anterior knee instability and bilateral anterior knee instability. Conclusions An assessment of changes in posterior cruciate ligament using a dynamic ultrasound examination effectively complements routine sonographic diagnostic techniques for anterior cruciate ligament insufficiency. PMID:27679732
Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio
2007-12-01
Owing to fast or stepwise cuff deflation, or measuring at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for the suspicious accuracy in lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing immunity to LD and BPV with those of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N = 120, 45 ± 15.3 years (mean ± SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 by method 2 (P=0.004) significantly correlated with the participant's BP, supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P=0.0044) smaller interparticipant SD of device error (SD2), suggesting its higher interparticipant consistency of validation. Among the methods of validation of the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, can be proposed as being the most appropriate.
A Ranking Approach to Genomic Selection.
Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori
2015-01-01
Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies formulated GS as the problem of modeling an individual's breeding value for a particular trait of interest, i.e., as a regression problem. To assess predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG rewards more strongly models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS.
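The NDCG measure introduced above can be sketched as follows, using a linear gain on the trait values. Note that some formulations use an exponential gain (2^rel - 1), so treat this as one common variant rather than the paper's exact definition.

```python
import numpy as np

def ndcg(y_true, y_score, k=None):
    """Normalized discounted cumulative gain.

    y_true: true breeding values (relevance); y_score: model predictions.
    Individuals are ranked by y_score; gain is discounted by log2(rank + 1).
    Returns a value in (0, 1], with 1 for a perfect ranking.
    """
    y_true = np.asarray(y_true, dtype=float)
    order = np.argsort(y_score)[::-1]       # predicted ranking, best first
    ideal = np.sort(y_true)[::-1]           # best possible ranking
    if k is None:
        k = len(y_true)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = np.sum(y_true[order][:k] * discounts)
    idcg = np.sum(ideal[:k] * discounts)
    return dcg / idcg

# Predictions that preserve the true ordering score exactly 1.0.
score = ndcg([3.0, 2.0, 1.0], [0.9, 0.5, 0.1])
```

The truncation level k mirrors the selective-breeding setting, where only the top-ranked individuals are selected.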
NASA Astrophysics Data System (ADS)
Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.
2014-04-01
Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
Constructing better classifier ensemble based on weighted accuracy and diversity measure.
Zeng, Xiaodong; Wong, Derek F; Chao, Lidia S
2014-01-01
A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of the classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis; that is, a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutual restraint factors; that is, an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms, genetic algorithm, and forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to others in most cases.
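The weighted harmonic-mean combination described above can be sketched as follows. This is an illustrative reading of "harmonic mean of accuracy and diversity with two weight parameters"; the exact weighting scheme in the paper may differ.

```python
def wad(accuracy, diversity, w_acc=0.5, w_div=0.5):
    """Weighted harmonic-mean combination of ensemble accuracy and diversity.

    Both inputs are assumed to lie in (0, 1]; the weights sum to 1.
    A low value of either component drags the score down, penalizing
    ensembles that are accurate but not diverse (or vice versa).
    """
    assert 0 < accuracy <= 1 and 0 < diversity <= 1
    return 1.0 / (w_acc / accuracy + w_div / diversity)

# Equal weights: the score sits below the arithmetic mean of 0.75,
# reflecting the harmonic mean's penalty on the weaker component.
score = wad(0.9, 0.6)
```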
Tang, Yongsheng; Ren, Zhongdao
2017-01-01
The neutral axis position (NAP) is a key parameter of a flexural member for structural design and safety evaluation. The accuracy of NAP measurement based on traditional methods does not satisfy the demands of structural performance assessment, especially under live traffic loads. In this paper, a new method to determine NAP is developed using modal macro-strain (MMS). In the proposed method, macro-strain is first measured with long-gauge Fiber Bragg Grating (FBG) sensors; the MMS is then generated from the measured macro-strain with a Fourier transform; and finally the neutral axis position coefficient (NAPC) is determined from the MMS and the neutral axis depth is calculated from the NAPC. To verify the effectiveness of the proposed method, experiments on FE models and on steel and reinforced concrete (RC) beams were conducted. The plane-section assumption was first verified using the MMS of the first bending mode. The results then confirmed the high accuracy and stability of the method for assessing NAP, and showed that the NAPC is a good indicator of local damage. In summary, the proposed method facilitates accurate assessment of flexural structures. PMID:28230747
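The final geometric step, locating the zero-strain fibre from a linear strain profile, can be sketched as follows. The preceding FBG measurement and the Fourier-transform extraction of the MMS are not shown, and the numbers are illustrative.

```python
def neutral_axis_depth(strain_top, strain_bottom, h):
    """Locate the neutral axis from two strain measurements on a section.

    strain_top, strain_bottom: (modal macro-)strain at the top and bottom
    fibres of a section of depth h; their signs differ across the neutral
    axis. Assumes the plane-section hypothesis, i.e. a linear profile
    eps(y) = strain_top + (strain_bottom - strain_top) * y / h.
    Returns the depth of the zero-strain fibre, measured from the top.
    """
    return h * strain_top / (strain_top - strain_bottom)

# Symmetric strains (-100/+100 microstrain) on a 0.4 m deep section
# put the neutral axis at mid-depth.
depth = neutral_axis_depth(strain_top=-100e-6, strain_bottom=100e-6, h=0.4)
```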
Development of a method for personal, spatiotemporal exposure assessment.
Adams, Colby; Riggs, Philip; Volckens, John
2009-07-01
This work describes the development and evaluation of a high-resolution, space- and time-referenced sampling method for personal exposure assessment of airborne particulate matter (PM). The method integrates continuous measures of personal PM levels with the corresponding location-activity (i.e., work/school, home, transit) of the subject. Monitoring equipment includes a small, portable global positioning system (GPS) receiver, a miniature aerosol nephelometer, and an ambient temperature monitor to estimate the location, time, and magnitude of personal exposure to particulate matter air pollution. The precision and accuracy of each component, as well as the integrated method performance, were tested in a combination of laboratory and field tests. Spatial data were apportioned into pre-determined location-activity categories (i.e., work/school, home, transit) with a simple, temporospatially based algorithm. The apportioning algorithm was extremely effective, with an overall accuracy of 99.6%. This method allows examination of an individual's estimated exposure through space and time, which may provide new insights into exposure-activity relationships not possible with traditional exposure assessment techniques (i.e., time-integrated, filter-based measurements). Furthermore, the method is applicable to any contaminant or stressor that can be measured on an individual with a direct-reading sensor.
NASA Astrophysics Data System (ADS)
Law, Yan Nei; Lieng, Monica Keiko; Li, Jingmei; Khoo, David Aik-Aun
2014-03-01
Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.
ERIC Educational Resources Information Center
Raczynski, Kevin R.; Cohen, Allan S.; Engelhard, George, Jr.; Lu, Zhenqiu
2015-01-01
There is a large body of research on the effectiveness of rater training methods in the industrial and organizational psychology literature. Less has been reported in the measurement literature on large-scale writing assessments. This study compared the effectiveness of two widely used rater training methods--self-paced and collaborative…
Methods for assessment of keel bone damage in poultry.
Casey-Trott, T; Heerkens, J L T; Petrik, M; Regmi, P; Schrader, L; Toscano, M J; Widowski, T
2015-10-01
Keel bone damage (KBD) is a critical issue facing the laying hen industry today, as a result of the likely pain leading to compromised welfare and the potential for reduced productivity. Recent reports suggest that damage, while highly variable and likely dependent on a host of factors, extends to all systems (including battery cages, furnished cages, and non-cage systems), genetic lines, and management styles. Despite the extent of the problem, the research community remains uncertain as to the causes and influencing factors of KBD. Although progress has been made investigating these factors, the overall effort is hindered by several issues related to the assessment of KBD, including the quality of, and variation in, the methods used between research groups. These issues prevent effective comparison of studies and create difficulties in identifying the presence of damage, leading to poor accuracy and reliability. The current manuscript seeks to resolve these issues by offering precise definitions for types of KBD, reviewing methods for assessment, and providing recommendations that can improve the accuracy and reliability of those assessments. © 2015 Poultry Science Association Inc.
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
Google Earth elevation data extraction and accuracy assessment for transportation applications
Wang, Yinsong; Zou, Yajie; Henrickson, Kristian; Wang, Yinhai; Tang, Jinjun; Park, Byung-Jung
2017-01-01
Roadway elevation data are critical for a variety of transportation analyses. However, such data have been challenging to obtain, and most roadway GIS databases do not include them. This paper addresses this need by proposing a method to extract roadway elevation data from Google Earth (GE) for transportation applications. A comprehensive accuracy assessment of the GE-extracted elevation data is conducted for the conterminous USA. The GE elevation data were compared with ground-truth data from nationwide GPS benchmarks and roadway monuments from six states in the conterminous USA. This study also compares the GE elevation data with the elevation raster data from the U.S. Geological Survey National Elevation Dataset (USGS NED), a widely used source for extracting roadway elevation. Mean absolute error (MAE) and root mean squared error (RMSE) are used to assess the accuracy, and the test results show that the MAE, RMSE, and standard deviation of the GE roadway elevation error are 1.32 meters, 2.27 meters, and 2.27 meters, respectively. Finally, the proposed extraction method was implemented and validated for three scenarios: (1) extracting roadway elevation differentiated by direction, (2) multi-layered roadway recognition in a freeway segment, and (3) slope segmentation and grade calculation in a freeway segment. The validation results indicate that the proposed method can locate the extraction route accurately, recognize multi-layered roadway sections, and segment the extracted route by grade automatically. Overall, the high-accuracy elevation data available from GE provide a reliable data source for various transportation applications. PMID:28445480
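The accuracy statistics reported above (MAE and RMSE of the elevation error) follow the standard definitions, which can be sketched as below; the function name is illustrative:

```python
import math

def mae_rmse(estimates, truth):
    # Mean absolute error and root mean squared error between
    # extracted elevations and ground-truth benchmark elevations.
    errors = [e - t for e, t in zip(estimates, truth)]
    mae = sum(abs(x) for x in errors) / len(errors)
    rmse = math.sqrt(sum(x * x for x in errors) / len(errors))
    return mae, rmse
```

RMSE penalizes large errors more heavily than MAE, so the near-equality of the two reported values (2.27 m RMSE vs. 1.32 m MAE) also says something about the spread of the error distribution.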
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacaze, Guilhem; Oefelein, Joseph
Large-eddy simulation (LES) is quickly becoming a method of choice for studying complex thermo-physics in a wide range of propulsion and power systems. It provides a means to study coupled turbulent combustion and flow processes in parameter spaces that are unattainable using direct numerical simulation (DNS), with a degree of fidelity that can be far more accurate than conventional engineering methods such as the Reynolds-averaged Navier-Stokes (RANS) approximation. However, development of predictive LES is complicated by the complex interdependence of different types of errors arising from numerical methods, algorithms, models, and boundary conditions. At the same time, control of accuracy has become a critical aspect in the development of predictive LES for design. The objective of this project is to create a framework of metrics aimed at quantifying the quality and accuracy of state-of-the-art LES in a manner that addresses the myriad of competing interdependencies. In a typical simulation cycle, only 20% of the computational time is actually usable; the rest is spent in case preparation, assessment, and validation because of the lack of guidelines. This work increases confidence in the accuracy of a given solution while minimizing the time needed to obtain it. The approach facilitates control of the tradeoffs between cost, accuracy, and uncertainties as a function of fidelity and the methods employed. The analysis is coupled with advanced uncertainty quantification techniques employed to estimate confidence in model predictions and to calibrate model parameters. This work has had positive consequences for the accuracy of the results delivered by LES and will soon have a broad impact on research supported both by the DOE and elsewhere.
Accuracy of a continuous noninvasive hemoglobin monitor in intensive care unit patients.
Frasca, Denis; Dahyot-Fizelier, Claire; Catherine, Karen; Levrat, Quentin; Debaene, Bertrand; Mimoz, Olivier
2011-10-01
To determine whether noninvasive hemoglobin measurement by Pulse CO-Oximetry could provide clinically acceptable absolute and trend accuracy in critically ill patients, compared with other invasive methods of hemoglobin assessment available at the bedside and with the gold standard, the laboratory analyzer. Prospective study. Surgical intensive care unit of a university teaching hospital. Sixty-two patients continuously monitored with Pulse CO-Oximetry (Masimo Radical-7). None. Four hundred seventy-one blood samples were analyzed by a point-of-care device (HemoCue 301), a satellite lab CO-Oximeter (Siemens RapidPoint 405), and a laboratory hematology analyzer (Sysmex XT-2000i), which was considered the reference device. Hemoglobin values reported from the invasive methods were compared to the values reported by the Pulse CO-Oximeter at the time of blood draw. When the case-to-case variation was assessed, the bias and limits of agreement were 0.0±1.0 g/dL for the Pulse CO-Oximeter, 0.3±1.3 g/dL for the point-of-care device, and 0.9±0.6 g/dL for the satellite lab CO-Oximeter compared to the reference method. Pulse CO-Oximetry showed trend accuracy similar to that of satellite lab CO-Oximetry, whereas the point-of-care device did not appear to follow the trend of the laboratory analyzer as well as the other test devices. When compared to laboratory reference values, hemoglobin measurement with Pulse CO-Oximetry has absolute accuracy and trending accuracy similar to widely used, invasive methods of hemoglobin measurement at the bedside. Hemoglobin measurement with Pulse CO-Oximetry has the additional advantage of providing continuous, noninvasive measurements, which may facilitate hemoglobin monitoring in the intensive care unit.
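The bias and limits of agreement quoted above are typically computed Bland-Altman style, as the mean and ±1.96 SD of the device-minus-reference differences; this is a minimal sketch under that assumption (the abstract's ± notation may denote the SD itself rather than full limits of agreement):

```python
import statistics

def bias_and_limits(device, reference):
    # Bland-Altman style summary: mean difference (bias) and
    # 1.96*SD limits of agreement against the reference analyzer.
    diffs = [d - r for d, r in zip(device, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```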
The 'ABC' of examining foot radiographs.
Pearse, Eyiyemi O; Klass, Benjamin; Bendall, Stephen P
2005-11-01
We report a simple systematic method of assessing foot radiographs that improves diagnostic accuracy and can reduce the incidence of inappropriate management of serious forefoot and midfoot injuries, particularly the Lisfranc-type injury. Five recently appointed senior house officers (SHOs), with no casualty or orthopaedic experience prior to their appointment, were shown a set of 10 foot radiographs within 6 weeks of taking up their posts and told the history and examination findings recorded in the casualty notes of each patient. They were informed that the radiographs might or might not demonstrate an abnormality, and were asked to make a diagnosis and decide on a management plan. The test was repeated after they were taught the 'ABC' method of evaluating foot radiographs. Diagnostic accuracy improved after the SHOs were taught the systematic method: the proportion of correct diagnoses increased from 0.64 to 0.78, and the probability of recognising Lisfranc injuries increased from 0 to 0.6. The use of this simple method of assessing foot radiographs can reduce the incidence of inappropriate management of serious foot injuries by casualty SHOs, in particular the Lisfranc-type injury.
A novel approach for individual tree crown delineation using lidar data
NASA Astrophysics Data System (ADS)
Liu, Tao
Individual tree crown delineation (ITCD) is an important technique to support precision forestry. ITCD is particularly difficult for deciduous forests where the existence of multiple branches can lead to false tree top detection. This thesis focused on developing a new ITCD model, which consists of two components: (1) boundary refinement using a novel algorithm called Fishing Net Dragging (FiND), and (2) segment merging using boundary classification. The proposed ITCD model was tested in both deciduous and mixed forests, attaining an overall accuracy of 74% and 78%, respectively. This compared favorably to an ITCD method commonly cited in the literature, which attained 41% and 51% on the same plots. To facilitate comparison of research in the ITCD community, this thesis also developed a new accuracy assessment scheme for ITCD. This new accuracy assessment is easy to interpret and convenient to implement while comprehensively evaluating ITCD accuracy.
Applying high resolution remote sensing image and DEM to falling boulder hazard assessment
NASA Astrophysics Data System (ADS)
Huang, Changqing; Shi, Wenzhong; Ng, K. C.
2005-10-01
Assessing boulder fall hazard generally requires obtaining information about the boulders. The conventional approach, extensive mapping and surveying fieldwork, is time-consuming, laborious, and dangerous. This paper therefore proposes an image-processing method that extracts boulder information and assesses boulder fall hazard from high-resolution remote sensing imagery. The method can replace the conventional approach and extract boulder information with high accuracy, including boulder size, shape, and height, as well as the slope and aspect of the boulder's position. With this information, boulder fall hazard can be assessed, prevented, and mitigated.
Applications of Principled Search Methods in Climate Influences and Mechanisms
NASA Technical Reports Server (NTRS)
Glymour, Clark
2005-01-01
Forest and grass fires cause economic losses in the billions of dollars in the U.S. alone. In addition, boreal forests constitute a large carbon store; it has been estimated that, were no burning to occur, an additional 7 gigatons of carbon would be sequestered in boreal soils each century. Effective wildfire suppression requires anticipation of the locales and times for which wildfire is most probable, preferably with a two- to four-week forecast, so that limited resources can be efficiently deployed. The United States Forest Service (USFS) and other experts and agencies have developed several measures of fire risk combining physical principles and expert judgment, and have used them in automated procedures for forecasting fire risk. Forecasting accuracies for some fire risk indices in combination with climate and other variables have been estimated for specific locations, with the value of fire risk index variables assessed by their statistical significance in regressions. In other cases, the MAPSS forecasts [23, 24] for example, forecasting accuracy has been estimated only with simulated data. We describe alternative forecasting methods that predict fire probability by locale and time using statistical or machine learning procedures trained on historical data, and we give comparative assessments of their forecasting accuracy for one fire season, April-October 2003, for all U.S. Forest Service lands. Aside from providing an accuracy baseline for other forecasting methods, the results illustrate the interdependence between the statistical significance of prediction variables and the forecasting method used.
DOT National Transportation Integrated Search
1988-01-01
This report presents an assessment of the accuracy of alternatives available to calculate unabsorbed overhead in construction delay claims submitted by contractors. It reviews the alternatives available, concludes that the Eichleay method, used by ma...
NASA Astrophysics Data System (ADS)
Narita, Y.; Iida, H.; Ebert, S.; Nakamura, T.
1997-12-01
Two independent scatter correction techniques, transmission-dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for three numerical phantoms for ²⁰¹Tl. Data were reconstructed with an ordered-subset EM algorithm including attenuation correction based on noiseless transmission data. The accuracy of the TDCS and TEW scatter corrections was assessed by comparison with the simulated true primary data. The uniform cylindrical phantom simulation demonstrated better quantitative accuracy with TDCS than with TEW (-2.0% vs. 16.7%) and better S/N (6.48 vs. 5.05). A uniform ring myocardial phantom simulation demonstrated better homogeneity in the myocardium with TDCS than with TEW; i.e., anterior-to-posterior wall count ratios were 0.99 and 0.76 with TDCS and TEW, respectively. For the MCAT phantom, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
Model-based RSA of a femoral hip stem using surface and geometrical shape models.
Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M
2006-07-01
Roentgen stereophotogrammetry (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to prevent using special marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method used surface models, and the second method used elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as reanalysis of patient RSA radiographs. The data from the phantom experiment indicated the accuracy and precision of the elementary geometrical shape model-based RSA method is equal to marker-based RSA. For model-based RSA using surface models, the accuracy is equal to the accuracy of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in clinical data. For this particular hip stem, EGS model-based RSA is a good alternative for marker-based RSA.
Figueroa, José; Guarachi, Juan Pablo; Matas, José; Arnander, Magnus; Orrego, Mario
2016-04-01
Computed tomography (CT) is widely used to assess component rotation in patients with poor results after total knee arthroplasty (TKA). The purpose of this study was to simultaneously determine the accuracy and reliability of CT in measuring TKA component rotation. TKA components were implanted in dry-bone models and assigned to two groups. The first group (n = 7) had variable femoral component rotations, and the second group (n = 6) had variable tibial tray rotations. CT images were then used to assess component rotation. Accuracy of CT rotational assessment was determined by mean difference, in degrees, between implanted component rotation and CT-measured rotation. Intraclass correlation coefficient (ICC) was applied to determine intra-observer and inter-observer reliability. Femoral component accuracy showed a mean difference of 2.5° and the tibial tray a mean difference of 3.2°. There was good intra- and inter-observer reliability for both components, with a femoral ICC of 0.8 and 0.76, and tibial ICC of 0.68 and 0.65, respectively. CT rotational assessment accuracy can differ from true component rotation by approximately 3° for each component. It does, however, have good inter- and intra-observer reliability.
Evaluating the decision accuracy and speed of clinical data visualizations.
Pieczkiewicz, David S; Finkelstein, Stanley M
2010-01-01
Clinicians face an increasing volume of biomedical data. Assessing the efficacy of systems that enable accurate and timely clinical decision making merits corresponding attention. This paper discusses the multiple-reader multiple-case (MRMC) experimental design and linear mixed models as means of assessing and comparing decision accuracy and latency (time) for decision tasks in which clinician readers must interpret visual displays of data. These experimental and statistical techniques, used extensively in radiology imaging studies, offer a number of practical and analytic advantages over more traditional quantitative methods such as percent-correct measurements and ANOVAs, and are recommended for their statistical efficiency and generalizability. An example analysis using readily available free and commercial statistical software is provided as an appendix. While these techniques are not appropriate for all evaluation questions, they can provide a valuable addition to the evaluative toolkit of medical informatics research.
Validity of Newborn Clinical Assessment to Determine Gestational Age in Bangladesh
Mullany, Luke C.; Ladhani, Karima; Uddin, Jamal; Mitra, Dipak; Ahmed, Parvez; Christian, Parul; Labrique, Alain; DasGupta, Sushil K.; Lokken, R. Peter; Quaiyum, Mohammed; Baqui, Abdullah H
2016-01-01
BACKGROUND: Gestational age (GA) is frequently unknown or inaccurate in pregnancies in low-income countries. Early identification of preterm infants may help link them to potentially life-saving interventions. METHODS: We conducted a validation study in a community-based birth cohort in rural Bangladesh. GA was determined by pregnancy ultrasound (<20 weeks). Community health workers conducted home visits (<72 hours) to assess physical/neuromuscular signs and measure anthropometrics. The distribution, agreement, and diagnostic accuracy of different clinical methods of GA assessment were determined compared with early ultrasound dating. RESULTS: In the live-born cohort (n = 1066), the mean ultrasound GA was 39.1 weeks (SD 2.0) and prevalence of preterm birth (<37 weeks) was 11.4%. Among assessed newborns (n = 710), the mean ultrasound GA was 39.3 weeks (SD 1.6) (8.3% preterm) and by Ballard scoring the mean GA was 38.9 weeks (SD 1.7) (12.9% preterm). The average bias of the Ballard was –0.4 weeks; however, 95% limits of agreement were wide (–4.7 to 4.0 weeks) and the accuracy for identifying preterm infants was low (sensitivity 16%, specificity 87%). Simplified methods for GA assessment had poor diagnostic accuracy for identifying preterm births (community health worker prematurity scorecard [sensitivity/specificity: 70%/27%]; Capurro [5%/96%]; Eregie [75%/58%]; Bhagwat [18%/87%], foot length <75 mm [64%/35%]; birth weight <2500 g [54%/82%]). Neonatal anthropometrics had poor to fair performance for classifying preterm infants (areas under the receiver operating curve 0.52–0.80). CONCLUSIONS: Newborn clinical assessment of GA is challenging at the community level in low-resource settings. Anthropometrics are also inaccurate surrogate markers for GA in settings with high rates of fetal growth restriction. PMID:27313070
NASA Astrophysics Data System (ADS)
Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans
The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved, with correlation values > 0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, resulting in correlation values < 0.5. Although top-down disaggregation of traffic emissions generally exhibits low accuracy, the accuracy is significantly higher in compact cities and might be further improved by applying a correction factor for the city center. Therefore, the method can be used by local environmental authorities in cities with limited resources and with little knowledge on the pollution situation to get an overview on the spatial distribution of the emissions generated by traffic activities.
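The correlation thresholds above (> 0.8 for compact cities, < 0.5 for complex ones) presumably refer to a per-cell correlation between the top-down and bottom-up emission maps; a minimal Pearson-correlation sketch under that assumption (the exact indicator used in the study may differ):

```python
import statistics

def pearson_r(x, y):
    # Correlation between top-down and bottom-up emissions per grid
    # cell, as one way to quantify spatial disaggregation accuracy.
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```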
Bolormaa, S; Pryce, J E; Kemper, K; Savin, K; Hayes, B J; Barendse, W; Zhang, Y; Reich, C M; Mason, B A; Bunch, R J; Harrison, B E; Reverter, A; Herd, R M; Tier, B; Graser, H-U; Goddard, M E
2013-07-01
The aim of this study was to assess the accuracy of genomic predictions for 19 traits, including feed efficiency, growth, and carcass and meat quality traits, in beef cattle. The 10,181 cattle in our study had real or imputed genotypes for 729,068 SNP, although not all cattle were measured for all traits. Animals included Bos taurus, Brahman, composite, and crossbred animals. Genomic EBV (GEBV) were calculated using 2 methods of genomic prediction [BayesR and genomic BLUP (GBLUP)], using either a common training dataset for all breeds or a training dataset comprising only animals of the same breed. Accuracies of GEBV were assessed using 5-fold cross-validation. The accuracy of genomic prediction varied by trait and by method. Traits with a large number of recorded and genotyped animals and with high heritability gave the greatest accuracy of GEBV. Using GBLUP, the average accuracy was 0.27 across traits and breeds, but the accuracies varied widely between breeds and between traits. When the training population was restricted to animals from the same breed as the validation population, GBLUP accuracies declined by an average of 0.04. The greatest decline in accuracy was found for the 4 composite breeds. The BayesR accuracies were greater than the GBLUP accuracies by an average of 0.03, particularly for traits in which mutations of moderate to large effect are known to be segregating. The accuracies of 0.43 to 0.48 for the IGF-I traits were among the greatest in the study. Although the accuracies are low compared with those observed in dairy cattle, genomic selection would still be beneficial for traits that are hard to improve by conventional selection, such as tenderness and residual feed intake. BayesR identified many of the same quantitative trait loci as a genomewide association study but appeared to map them more precisely. All traits appear to be highly polygenic, with thousands of SNP independently associated with each trait.
NASA Astrophysics Data System (ADS)
Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.
2015-07-01
For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.
Adriaens, E; Willoughby, J A; Meyer, B R; Blakeman, L C; Alépée, N; Fochtman, P; Guest, R; Kandarova, H; Verstraelen, S; Van Rompay, A R
2018-06-01
Assessment of ocular irritancy is an international regulatory requirement in the safety evaluation of industrial and consumer products. Although many in vitro ocular irritation assays exist, alone they are incapable of fully categorizing chemicals. Therefore, the CEFIC-LRI-AIMT6-VITO CON4EI consortium was developed to assess the reliability of eight in vitro test methods and establish an optimal tiered-testing strategy. One assay selected was the Short Time Exposure (STE) assay. This assay measures the viability of SIRC rabbit corneal cells after 5 min exposure to 5% and 0.05% solutions of test material, and is capable of categorizing Category 1 and No Category chemicals. The accuracy of the STE test method to identify Cat 1 chemicals was 61.3% with 23.7% sensitivity and 95.2% specificity. If non-soluble chemicals and unqualified results were excluded, the performance to identify Cat 1 chemicals remained similar (accuracy 62.2% with 22.7% sensitivity and 100% specificity). The accuracy of the STE test method to identify No Cat chemicals was 72.5% with 66.2% sensitivity and 100% specificity. Excluding highly volatile chemicals, non-surfactant solids and non-qualified results resulted in a substantial improvement in the performance of the STE test method (accuracy 96.2% with 81.8% sensitivity and 100% specificity). Furthermore, solids appear more difficult to test in the STE: 71.4% of the solids produced unqualified results (solubility issues and/or high variation between independent runs), whereas for liquids 13.2% of the results were not qualified, supporting the restriction of the test method regarding the testing of solids. Copyright © 2017. Published by Elsevier Ltd.
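The accuracy, sensitivity, and specificity figures quoted above follow directly from confusion-matrix counts; a minimal sketch (the counts below are hypothetical, not the consortium's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion counts,
    as reported for the STE Cat 1 / No Cat classifications."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# hypothetical counts chosen only to illustrate the formulas
acc, sens, spec = classification_metrics(tp=9, fp=2, tn=60, fn=29)
print(f"accuracy={acc:.1%} sensitivity={sens:.1%} specificity={spec:.1%}")
```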
NASA Technical Reports Server (NTRS)
Sozer, Emre; Brehm, Christoph; Kiris, Cetin C.
2014-01-01
A survey of gradient reconstruction methods for cell-centered data on unstructured meshes is conducted within the scope of accuracy assessment. Formal order of accuracy, as well as error magnitudes for each of the studied methods, are evaluated on a complex mesh of various cell types through consecutive local scaling of an analytical test function. The tests highlighted several gradient operator choices that can consistently achieve first-order accuracy regardless of cell type and shape. The tests further offered error comparisons for given cell types, leading to the observation that the "ideal" gradient operator choice is not universal. Practical implications of the results are explored via CFD solutions of a 2D inviscid standing vortex, portraying the discretization error properties. A relatively naive, yet largely unexplored, approach of local curvilinear stencil transformation exhibited surprisingly favorable properties.
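The formal order of accuracy referred to above is conventionally measured from errors on two consecutively refined (or scaled) meshes; a minimal sketch, with illustrative refinement ratio and error values:

```python
import math

def observed_order(e_coarse, e_fine, r=2.0):
    """Observed order of accuracy from errors on two consecutively
    refined meshes with refinement ratio r:
        p = log(e_coarse / e_fine) / log(r)
    """
    return math.log(e_coarse / e_fine) / math.log(r)

# a gradient operator whose error quarters when spacing halves is 2nd order
print(observed_order(4.0e-3, 1.0e-3))  # → 2.0
```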
ERIC Educational Resources Information Center
Lofthouse, Rachael E.; Lindsay, William R.; Totsika, Vasiliki; Hastings, Richard P.; Boer, Douglas P.; Haaven, James L.
2013-01-01
Background: The purpose of the present study was to add to the literature on the predictive accuracy of a dynamic intellectual disability specific risk assessment tool. Method: A dynamic risk assessment for sexual reoffending (ARMIDILO-S), a static risk assessment for sexual offending (STATIC-99), and a static risk assessment for violence…
Iyatomi, Hitoshi; Oka, Hiroshi; Saito, Masataka; Miyake, Ayako; Kimoto, Masayuki; Yamagami, Jun; Kobayashi, Seiichiro; Tanikawa, Akiko; Hagiwara, Masafumi; Ogawa, Koichi; Argenziano, Giuseppe; Soyer, H Peter; Tanaka, Masaru
2006-04-01
The aims of this study were to provide a quantitative assessment of the tumour area extracted by dermatologists and to evaluate computer-based methods from dermoscopy images for refining a computer-based melanoma diagnostic system. Dermoscopic images of 188 Clark naevi, 56 Reed naevi and 75 melanomas were examined. Five dermatologists manually drew the border of each lesion with a tablet computer. The inter-observer variability was evaluated and the standard tumour area (STA) for each dermoscopy image was defined. Manual extractions by 10 non-medical individuals and by two computer-based methods were evaluated with STA-based assessment criteria: precision and recall. Our new computer-based method introduced the region-growing approach in order to yield results close to those obtained by dermatologists. The effectiveness of our extraction method with regard to diagnostic accuracy was evaluated. Two linear classifiers were built using the results of conventional and new computer-based tumour area extraction methods. The final diagnostic accuracy was evaluated by drawing the receiver operating characteristic (ROC) curve of each classifier, and the area under each ROC curve was evaluated. The standard deviations of the tumour area extracted by five dermatologists and 10 non-medical individuals were 8.9% and 10.7%, respectively. After assessment of the extraction results by dermatologists, the STA was defined as the area that was selected by more than two dermatologists. Dermatologists selected the melanoma area with statistically smaller divergence than that of Clark naevus or Reed naevus (P = 0.05). By contrast, non-medical individuals did not show this difference. Our new computer-based extraction algorithm showed superior performance (precision, 94.1%; recall, 95.3%) to the conventional thresholding method (precision, 99.5%; recall, 87.6%).
These results indicate that our new algorithm extracted a tumour area close to that obtained by dermatologists and, in particular, the border part of the tumour was adequately extracted. With this refinement, the area under the ROC increased from 0.795 to 0.875 and the diagnostic accuracy showed an increase of approximately 20% in specificity when the sensitivity was 80%. It can be concluded that our computer-based tumour extraction algorithm extracted almost the same area as that obtained by dermatologists and provided improved computer-based diagnostic accuracy.
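The STA-based precision and recall criteria used above reduce to simple set operations on binary masks; a minimal sketch with toy masks (not the study's images):

```python
import numpy as np

def precision_recall(extracted, sta):
    """Pixel-wise precision and recall of an extracted tumour mask
    against the standard tumour area (STA) mask."""
    extracted = extracted.astype(bool)
    sta = sta.astype(bool)
    tp = np.logical_and(extracted, sta).sum()
    precision = tp / extracted.sum()   # fraction of extracted pixels that are STA
    recall = tp / sta.sum()            # fraction of STA pixels that were extracted
    return float(precision), float(recall)

# toy 1-D masks standing in for image pixels
sta = np.array([0, 1, 1, 1, 1, 0])
extracted = np.array([0, 0, 1, 1, 1, 1])
p, r = precision_recall(extracted, sta)
print(p, r)  # → 0.75 0.75
```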
The MAT-sf: identifying risk for major mobility disability
USDA-ARS?s Scientific Manuscript database
BACKGROUND: The assessment of mobility is essential to both aging research and clinical geriatric practice. A newly developed self-report measure of mobility, the mobility assessment tool-short form (MAT-sf), uses video animations as an innovative method to improve measurement accuracy/precision. Th...
Choi, Jin-Seung; Kang, Dong-Won; Seo, Jeong-Woo; Kim, Dae-Hyeok; Yang, Seung-Tae; Tack, Gye-Rae
2016-01-01
[Purpose] In this study, a program was developed for leg-strengthening exercises and balance assessment using Microsoft Kinect. [Subjects and Methods] The program consists of three leg-strengthening exercises (knee flexion, hip flexion, and hip extension) and the one-leg standing test (OLST). The program recognizes the correct exercise posture by comparison with the range of motion of the hip and knee joints and provides a number of correct action examples to improve training. The program measures the duration of the OLST and presents this as the balance-age. The accuracy of the program was analyzed using the data of five male adults. [Results] In terms of the motion recognition accuracy, the sensitivity and specificity were 95.3% and 100%, respectively. For the balance assessment, the time measured using the existing method with a stopwatch had an absolute error of 0.37 sec. [Conclusion] The developed program can be used to enable users to conduct leg-strengthening exercises and balance assessments at home.
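The posture-recognition step described above amounts to checking hip and knee joint angles against target ranges of motion; a minimal sketch with hypothetical thresholds (the published program's actual ranges are not given in the abstract):

```python
def posture_correct(hip_deg, knee_deg,
                    hip_range=(85, 125), knee_range=(80, 120)):
    """Recognize a correct exercise posture by checking that the hip and
    knee joint angles fall inside target ranges of motion.
    The thresholds are illustrative, not the published program's values."""
    return (hip_range[0] <= hip_deg <= hip_range[1]
            and knee_range[0] <= knee_deg <= knee_range[1])

# count correct repetitions in a sequence of (hip, knee) angle samples
samples = [(90, 95), (70, 100), (110, 118), (130, 85)]
correct = sum(posture_correct(h, k) for h, k in samples)
print(correct)  # → 2
```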
Lu, Zhen; McKellop, Harry A
2014-03-01
This study compared the accuracy and sensitivity of several numerical methods employing spherical or plane triangles for calculating the volumetric wear of retrieved metal-on-metal hip joint implants from coordinate measuring machine measurements. Five methods, one using spherical triangles and four using plane triangles to represent the bearing and the best-fit surfaces, were assessed and compared on a perfect hemisphere model and a hemi-ellipsoid model (i.e. unworn models), computer-generated wear models and wear-tested femoral balls, with point spacings of 0.5, 1, 2 and 3 mm. The results showed that the algorithm (Method 1) employing spherical triangles to represent the bearing surface and to scale the mesh to the best-fit surfaces produced adequate accuracy for the wear volume with point spacings of 0.5, 1, 2 and 3 mm. The algorithms (Methods 2-4) using plane triangles to represent the bearing surface and to scale the mesh to the best-fit surface also produced accuracies that were comparable to that with spherical triangles. In contrast, if the bearing surface was represented with a mesh of plane triangles and the best-fit surface was taken as a smooth surface without discretization (Method 5), the algorithm produced much lower accuracy with a point spacing of 0.5 mm than Methods 1-4 with a point spacing of 3 mm.
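A common plane-triangle approach of the kind used in Methods 2-4 computes the volume enclosed by each triangulated surface via the divergence theorem (signed tetrahedra to the origin) and differences the bearing and best-fit volumes; a minimal sketch, validated on an octahedron of known volume rather than the paper's implant meshes:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangulated
    surface, via the divergence theorem: sum of signed tetrahedra from
    the origin to each triangle. Differencing two such volumes (bearing
    surface vs. best-fit surface) gives a wear-volume estimate."""
    v = vertices[faces]  # shape (n_tri, 3, 3): the 3 corners of each triangle
    signed = np.einsum('ij,ij->i', v[:, 0], np.cross(v[:, 1], v[:, 2]))
    return float(abs(signed.sum()) / 6.0)

# octahedron with vertices at +/-1 on each axis; exact volume = 4/3
verts = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
faces = np.array([[0, 2, 4], [2, 1, 4], [1, 3, 4], [3, 0, 4],
                  [2, 0, 5], [1, 2, 5], [3, 1, 5], [0, 3, 5]])
print(mesh_volume(verts, faces))  # → 1.3333...
```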
2012-01-01
Background This paper reports a study about the effect of knowledge sources, such as handbooks, an assessment format and a predefined record structure for diagnostic documentation, as well as the influence of knowledge, disposition toward critical thinking and reasoning skills, on the accuracy of nursing diagnoses. Knowledge sources can support nurses in deriving diagnoses. A nurse’s disposition toward critical thinking and reasoning skills is also thought to influence the accuracy of his or her nursing diagnoses. Method A randomised factorial design was used in 2008–2009 to determine the effect of knowledge sources. We used the following instruments to assess the influence of ready knowledge, disposition, and reasoning skills on the accuracy of diagnoses: (1) a knowledge inventory, (2) the California Critical Thinking Disposition Inventory, and (3) the Health Science Reasoning Test. Nurses (n = 249) were randomly assigned to one of four factorial groups, and were instructed to derive diagnoses based on an assessment interview with a simulated patient/actor. Results The use of a predefined record structure resulted in a significantly higher accuracy of nursing diagnoses. A regression analysis reveals that almost half of the variance in the accuracy of diagnoses is explained by the use of a predefined record structure, a nurse’s age and the reasoning skills of 'deduction' and 'analysis'. Conclusions Improving nurses’ dispositions toward critical thinking and reasoning skills, and the use of a predefined record structure, improves accuracy of nursing diagnoses. PMID:22852577
Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu
2013-06-01
It has been reported that light scattering can worsen the accuracy of dose distribution measurements using radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and light scattering was also evaluated. In addition, the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and a corrected dose distribution dataset was subsequently created. The correction method showed more than 10% better pass ratios in the dose difference evaluation than when the correction method was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employed the red color only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve accuracy, but we recommend applying it carefully, with an understanding of the characteristics of EBT2 for both the red-only and the red/blue approaches.
Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian
2013-01-01
Background Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurements to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings Evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under conditions of slow motion (mean error between 0.5° and 3.1°). In slow motion, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p<0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motion. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all times. Conclusions Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use.
While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to the conditions of operation. PMID:24260324
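The absolute-accuracy comparison described above reduces to the rotation angle between an AHRS orientation estimate and the optical gold standard; a minimal sketch (the rotation values are illustrative):

```python
import numpy as np

def angular_error_deg(R_est, R_ref):
    """Absolute orientation error (degrees) between an estimated rotation
    and a reference rotation: the angle of the relative rotation
    R_ref^T @ R_est, recovered from its trace."""
    R = R_ref.T @ R_est
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def rot_z(deg):
    """Rotation about the z axis by the given angle in degrees."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])

# a module reporting 30.5 deg when the Gimbal table is truly at 30 deg
print(angular_error_deg(rot_z(30.5), rot_z(30.0)))  # → 0.5
```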
New Criteria for Assessing the Accuracy of Blood Glucose Monitors Meeting, October 28, 2011
Walsh, John; Roberts, Ruth; Vigersky, Robert A.; Schwartz, Frank
2012-01-01
Glucose meters (GMs) are routinely used for self-monitoring of blood glucose by patients and for point-of-care glucose monitoring by health care providers in outpatient and inpatient settings. Although widely assumed to be accurate, numerous reports of inaccuracies with resulting morbidity and mortality have been noted. Insulin dosing errors based on inaccurate GMs are most critical. On October 28, 2011, the Diabetes Technology Society invited 45 diabetes technology clinicians who were attending the 2011 Diabetes Technology Meeting to participate in a closed-door meeting entitled New Criteria for Assessing the Accuracy of Blood Glucose Monitors. This report reflects the opinions of most of the attendees of that meeting. The Food and Drug Administration (FDA), the public, and several medical societies are currently in dialogue to establish a new standard for GM accuracy. This update to the FDA standard is driven by improved meter accuracy, technological advances (pumps, bolus calculators, continuous glucose monitors, and insulin pens), reports of hospital and outpatient deaths, consumer complaints about inaccuracy, and research studies showing that several approved GMs failed to meet FDA or International Organization for Standardization standards in post-approval testing. These circumstances mandate a set of new GM standards that appropriately match the GMs’ analytical accuracy to the clinical accuracy required for their intended use, as well as ensuring their ongoing accuracy following approval. The attendees of the New Criteria for Assessing the Accuracy of Blood Glucose Monitors meeting proposed a graduated standard and other methods to improve GM performance, which are discussed in this meeting report. PMID:22538160
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellens, N; Farahani, K
2015-06-15
Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through Gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly-spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image-guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error.
Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many preclinical applications including focused drug delivery and thermal therapy. Funding support provided by Philips Healthcare.
Agarwal, Siddharth; Sethi, Vani; Pandey, Ravindra Mohan; Kondal, Dimple
2008-06-01
We examined the diagnostic accuracy of the human touch (HT) method in assessing hypothermia against axillary digital thermometry (ADT) by a trained non-medical field investigator (who supervised activities of community health volunteers) in seven villages of Agra district, Uttar Pradesh, India. Body temperature of 148 newborns born between March and August 2005 was measured at four points in time for each enrolled newborn (within 48 h and on days 7, 30 and 60) by the field investigator under the axilla using a digital thermometer and by the HT method using standard methodology. Total observations were 533. Hypothermia assessed by HT was in agreement with that assessed by ADT (<36.5 °C) in 498 observations. Hypothermia assessed by HT showed a high diagnostic accuracy when compared against ADT (kappa 0.65-0.81; sensitivity 74%; specificity 96.7%; positive likelihood ratio 22; negative likelihood ratio 0.26). HT is a simple, quick, inexpensive and programmatically important method. However, being a subjective assessment, its reliability depends on the investigator being adequately trained and competent in making consistently accurate assessments. There is also a need to assess whether, with training and supervision, even less literate mothers, traditional birth attendants and community health volunteers can accurately assess mild and moderate hypothermia before promoting HT for early identification of neonatal risk in community-based programs.
Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy
Rosewater, David; Ferreira, Summer; Schoenwald, David; ...
2018-01-25
Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts enabling better control over BESS charge/discharge schedules.
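A minimal sketch of the kind of SoC forecasting model discussed above: forward simulation of a charge/discharge schedule with separate charge and discharge efficiencies and feasibility limits. The model form, efficiency values, and schedule are assumptions for illustration, not the paper's formulations.

```python
def forecast_soc(soc0, power_schedule_kw, capacity_kwh, dt_h=1.0,
                 eff_charge=0.95, eff_discharge=0.95):
    """Forward-simulate state of charge over a power schedule
    (kW, positive = charging), clamping to the feasible [0, 1] range.
    A simplified power-efficiency model, assumed for illustration."""
    soc = soc0
    trace = [soc]
    for p in power_schedule_kw:
        if p >= 0:
            soc += eff_charge * p * dt_h / capacity_kwh       # charging losses
        else:
            soc += p * dt_h / (eff_discharge * capacity_kwh)  # discharging losses
        soc = min(max(soc, 0.0), 1.0)                         # feasibility limits
        trace.append(soc)
    return trace

# 10 kWh battery: charge at 2 kW for 2 h, then discharge at 4 kW for 1 h
trace = forecast_soc(0.5, [2.0, 2.0, -4.0], capacity_kwh=10.0)
print([round(s, 3) for s in trace])
```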
HLA imputation in an admixed population: An assessment of the 1000 Genomes data as a training set.
Nunes, Kelly; Zheng, Xiuwen; Torres, Margareth; Moraes, Maria Elisa; Piovezan, Bruno Z; Pontes, Gerlandia N; Kimura, Lilian; Carnavalli, Juliana E P; Mingroni Netto, Regina C; Meyer, Diogo
2016-03-01
Methods to impute HLA alleles based on dense single nucleotide polymorphism (SNP) data provide a valuable resource to association studies and evolutionary investigation of the MHC region. The availability of appropriate training sets is critical to the accuracy of HLA imputation, and the inclusion of samples with various ancestries is an important pre-requisite in studies of admixed populations. We assess the accuracy of HLA imputation using 1000 Genomes Project data as a training set, applying it to a highly admixed Brazilian population, the Quilombos from the state of São Paulo. To assess accuracy, we compared imputed and experimentally determined genotypes for 146 samples at 4 HLA classical loci. We found imputation accuracies of 82.9%, 81.8%, 94.8% and 86.6% for HLA-A, -B, -C and -DRB1 respectively (two-field resolution). Accuracies were improved when we included a subset of Quilombo individuals in the training set. We conclude that the 1000 Genomes data is a valuable resource for construction of training sets due to the diversity of ancestries and the potential for a large overlap of SNPs with the target population. We also show that tailoring training sets to features of the target population substantially enhances imputation accuracy. Copyright © 2016 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.
Approaches to reducing photon dose calculation errors near metal implants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Jessie Y.; Followill, David S.; Howell, Reb
Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%).
Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS on the other hand gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities as it can distort the size and shape of implants and increase calculation errors.
NASA Astrophysics Data System (ADS)
Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas
2016-12-01
This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (Air-PLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly-developed technique for Custom baseline removal (BLR). We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts per million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new technique of Custom BLR produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and varying analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus using feature selection to select only those channels that contribute to best prediction accuracy for multivariate analyses. Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. 
The extra steps needed to optimize baseline removal for each predicted variable and empower multivariate techniques with the best possible input data for optimal prediction accuracy are shown to be well worth the slight increase in necessary computations and complexity.
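The asymmetric least squares (ALS) approach listed above is representative of the penalized-least-squares family compared in this study. A minimal Eilers-style ALS sketch on a synthetic LIBS-like spectrum; the spectrum, smoothness parameter `lam`, and asymmetry `p` are illustrative choices, not the paper's settings:

```python
import numpy as np

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Estimate a baseline by asymmetric least squares.

    lam : smoothness penalty (larger -> smoother baseline)
    p   : asymmetry (points above the baseline get weight p, points below 1 - p)
    """
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)      # second-difference operator, (n-2) x n
    w = np.ones(n)
    z = np.zeros(n)
    for _ in range(n_iter):
        W = np.diag(w)
        z = np.linalg.solve(W + lam * D.T @ D, w * y)
        w = np.where(y > z, p, 1 - p)      # down-weight points above the fit
    return z

# Synthetic spectrum: smooth continuum plus two narrow emission peaks
x = np.linspace(0, 1, 400)
continuum = 2.0 + 1.5 * x
peaks = 5.0 * np.exp(-((x - 0.3) / 0.01) ** 2) + 3.0 * np.exp(-((x - 0.7) / 0.01) ** 2)
spectrum = continuum + peaks
baseline = als_baseline(spectrum)
corrected = spectrum - baseline
```

Because peaks are weighted down to `p`, the fitted baseline tracks the continuum while passing under the emission lines, so subtraction leaves the peaks largely intact.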
NASA Astrophysics Data System (ADS)
Al-Durgham, Kaleel; Lichti, Derek D.; Kuntze, Gregor; Ronsky, Janet
2017-06-01
High-speed biplanar videoradiography imaging systems, clinically referred to as dual fluoroscopy (DF), are being used increasingly for skeletal kinematics analysis. Typically, a DF system comprises two X-ray sources, two image intensifiers and two high-speed video cameras. The combination of these elements provides time-series image pairs of the articulating bones of a joint, which permits the measurement of bony rotation and translation in 3D at high temporal resolution (e.g., 120-250 Hz). Assessment of the accuracy of 3D measurements derived from DF imaging has been the subject of recent research efforts by several groups, albeit with methodological limitations. This paper presents a novel and simple accuracy assessment procedure based on precise photogrammetric tools. We address the fundamental photogrammetric principles for the accuracy evaluation of an imaging system. Bundle adjustment with self-calibration is used for the estimation of the system parameters. The bundle adjustment calibration uses an appropriate sensor model and applies free-network constraints and relative orientation stability constraints for a precise estimation of the system parameters. A photogrammetric intersection of time-series image pairs is used for the 3D reconstruction of a rotating planar object. A point-based registration method is used to combine the 3D coordinates from the intersection with independently surveyed coordinates. The final DF accuracy measure is reported as the distance between the 3D coordinates from image intersection and the independently surveyed coordinates. The accuracy assessment procedure is designed to evaluate the accuracy over the full DF image format and a wide range of object rotation. An experiment reconstructing a rotating planar object yielded an average positional error of 0.44 ± 0.2 mm in the derived 3D coordinates (minimum 0.05 mm, maximum 1.2 mm).
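The point-based registration step, which combines intersected 3D coordinates with independently surveyed ones, is typically solved as a rigid-body least-squares fit. A minimal SVD-based (Kabsch-style) sketch, with synthetic points standing in for the surveyed targets:

```python
import numpy as np

def point_registration(P, Q):
    """Rigid-body (rotation + translation) least-squares fit mapping P onto Q.

    P, Q : (N, 3) arrays of corresponding 3-D points.
    Returns (R, t) with Q ~ P @ R.T + t, plus the RMS residual.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    resid = Q - (P @ R.T + t)
    rms = np.sqrt((resid ** 2).sum(axis=1).mean())
    return R, t, rms

# Synthetic check: rotate and translate a point cloud, then recover the motion
rng = np.random.default_rng(3)
P = rng.standard_normal((20, 3))
theta = 0.4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ Rz.T + t_true
R, t, rms = point_registration(P, Q)
```

With noise-free correspondences the residual is at machine precision; in practice the RMS residual plays the role of the registration error reported alongside the final accuracy measure.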
Smith, Toby O; Simpson, Michael; Ejindu, Vivian; Hing, Caroline B
2013-04-01
The purpose of this study was to assess the diagnostic test accuracy of magnetic resonance imaging (MRI), magnetic resonance arthrography (MRA) and multidetector arrays in CT arthrography (MDCT) for assessing chondral lesions in the hip joint. A review of the published and unpublished literature databases was performed to identify all studies reporting the diagnostic test accuracy (sensitivity/specificity) of MRI, MRA or MDCT for the assessment of adults with chondral (cartilage) lesions of the hip with surgical comparison (arthroscopic or open) as the reference test. All included studies were reviewed using the quality assessment of diagnostic accuracy studies appraisal tool. Pooled sensitivity, specificity, likelihood ratios and diagnostic odds ratios were calculated with 95 % confidence intervals using a random-effects meta-analysis for MRI, MRA and MDCT imaging. Eighteen studies satisfied the eligibility criteria. These included 648 hips from 637 patients. MRI indicated a pooled sensitivity of 0.59 (95 % CI: 0.49-0.70) and specificity of 0.94 (95 % CI: 0.90-0.97), and MRA sensitivity and specificity values were 0.62 (95 % CI: 0.57-0.66) and 0.86 (95 % CI: 0.83-0.89), respectively. The diagnostic test accuracy for the detection of hip joint cartilage lesions is currently superior for MRI compared with MRA. There were insufficient data to perform meta-analysis for MDCT or CTA protocols. Based on the current limited diagnostic test accuracy of the use of magnetic resonance or CT, arthroscopy remains the most accurate method of assessing chondral lesions in the hip joint.
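Pooled sensitivity/specificity pairs such as those reported here convert directly into likelihood ratios and a diagnostic odds ratio; the sketch below plugs in the quoted MRI point estimates (the formulas are standard, the numbers are the pooled values from the abstract):

```python
def diagnostic_summary(sens, spec):
    """Positive/negative likelihood ratios and diagnostic odds ratio."""
    lr_pos = sens / (1 - spec)          # how much a positive result raises odds
    lr_neg = (1 - sens) / spec          # how much a negative result lowers odds
    dor = lr_pos / lr_neg
    return lr_pos, lr_neg, dor

# Pooled MRI values from the review: sensitivity 0.59, specificity 0.94
lr_pos, lr_neg, dor = diagnostic_summary(0.59, 0.94)
```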
Pseudorange error analysis for precise indoor positioning system
NASA Astrophysics Data System (ADS)
Pola, Marek; Bezoušek, Pavel
2017-05-01
A system for indoor localization of a transmitter, intended for firefighters or members of rescue corps, is currently under development. In this system, the position of a transmitter of an ultra-wideband orthogonal frequency-division multiplexing signal is determined by the time-difference-of-arrival method. The position measurement accuracy depends strongly on the accuracy of the direct-path signal time-of-arrival estimation, which is degraded by severe multipath in complicated environments such as buildings. The aim of this article is to assess errors in the determination of the direct-path signal time of arrival caused by multipath signal propagation and noise. Two methods of direct-path signal time-of-arrival estimation are compared here: the cross-correlation method and the spectral estimation method.
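Of the two estimators compared, the cross-correlation method can be sketched in a few lines; the signal length, delay, and multipath amplitude below are illustrative, not the system's parameters:

```python
import numpy as np

def toa_by_cross_correlation(tx, rx, fs):
    """Estimate the delay (seconds) of reference burst tx within received rx."""
    corr = np.correlate(rx, tx, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(tx) - 1)   # peak lag in samples
    return lag / fs

fs = 1e6                                            # 1 MHz sampling
tx = np.random.default_rng(0).standard_normal(256)  # wideband reference burst
delay_samples = 40
# Received signal: direct path plus a weaker multipath echo and noise
rx = np.zeros(1024)
rx[delay_samples:delay_samples + 256] += tx
rx[delay_samples + 12:delay_samples + 12 + 256] += 0.5 * tx   # echo 12 samples later
rx += 0.05 * np.random.default_rng(1).standard_normal(1024)
tau = toa_by_cross_correlation(tx, rx, fs)
```

Because the echo carries only a quarter of the direct path's correlation energy, the global correlation peak still marks the direct path here; with closer, stronger multipath the peak can be pulled late, which is exactly the error source this article assesses.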
On the accuracy of Whitham's method [for steady ideal gas flow past cones]
NASA Technical Reports Server (NTRS)
Zahalak, G. I.; Myers, M. K.
1974-01-01
The steady flow of an ideal gas past a conical body is studied by the method of matched asymptotic expansions and by Whitham's method in order to assess the accuracy of the latter. It is found that while Whitham's method does not yield a correct asymptotic representation of the perturbation field to second order in regions where the flow ahead of the Mach cone of the apex is disturbed, it does correctly predict the changes of the second-order perturbation quantities across a shock (the first-order shock strength). The results of the analysis are illustrated by a special case of a flat, rectangular plate at incidence.
NASA Astrophysics Data System (ADS)
Kemppainen, R.; Vaara, T.; Joensuu, T.; Kiljunen, T.
2018-03-01
Background and Purpose. Magnetic resonance imaging (MRI) has in recent years emerged as an imaging modality to drive precise contouring of targets and organs at risk in external beam radiation therapy. Moreover, recent advances in MRI enable treatment of cancer without computed tomography (CT) simulation. A commercially available MR-only solution, MRCAT, offers a single-modality approach that provides density information for dose calculation and generation of positioning reference images. We evaluated the accuracy of patient positioning based on MRCAT digitally reconstructed radiographs (DRRs) by comparing to the standard CT based workflow. Materials and Methods. Twenty consecutive prostate cancer patients being treated with external beam radiation therapy were included in the study. DRRs were generated for each patient based on the planning CT and MRCAT. The accuracy assessment was performed by manually registering the DRR images to planar kV setup images using bony landmarks. A Bayesian linear mixed effects model was used to separate systematic and random components (inter- and intra-observer variation) in the assessment. In addition, method agreement was assessed using a Bland-Altman analysis. Results. The systematic difference between MRCAT and CT based patient positioning, averaged over the study population, was found to be (mean [95% CI]) -0.49 [-0.85 to -0.13] mm, 0.11 [-0.33 to +0.57] mm and -0.05 [-0.23 to +0.36] mm in the vertical, longitudinal and lateral directions, respectively. The increases in total random uncertainty were estimated to be below 0.5 mm for all directions when using the MR-only workflow instead of CT. Conclusions. The MRCAT pseudo-CT method provides clinically acceptable accuracy and precision for patient positioning for pelvic radiation therapy based on planar DRR images.
Furthermore, due to the reduction of geometric uncertainty, compared to dual-modality workflow, the approach is likely to improve the total geometric accuracy of pelvic radiation therapy.
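Bland-Altman agreement analysis, as used in this study, reduces to a bias and 95% limits of agreement computed from paired differences. A minimal sketch on hypothetical setup-correction data (the numbers are invented for illustration, not the study's measurements):

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    sd = diff.std(ddof=1)                     # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical vertical setup corrections (mm) from CT- and MRCAT-based DRRs
ct    = np.array([1.2, -0.4, 0.8, 2.1, -1.0, 0.3, 1.5, -0.2])
mrcat = np.array([1.6, -0.1, 1.3, 2.4, -0.6, 0.9, 1.9, 0.4])
bias, (lo, hi) = bland_altman(ct, mrcat)
```

The bias corresponds to the systematic difference reported above, while the limits of agreement bound the method-to-method random variation.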
Shinkins, Bethany; Yang, Yaling; Abel, Lucy; Fanshawe, Thomas R
2017-04-14
Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how did the results inform the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling was obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings. 
The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.
Perspectives of Maine Forest Cover Change from Landsat Imagery and Forest Inventory Analysis (FIA)
Steven Sader; Michael Hoppus; Jacob Metzler; Suming Jin
2005-01-01
A forest change detection map was developed to document forest gains and losses during the decade of the 1990s. The effectiveness of the Landsat imagery and methods for detecting Maine forest cover change are indicated by the good accuracy assessment results: forest-no change, forest loss, and forest gain accuracy were 90, 88, and 92% respectively, and the good...
ERIC Educational Resources Information Center
Argyropoulos, Vassilis; Papadimitriou, Vassilios
2015-01-01
Introduction: The present study assesses the performance of students who are visually impaired (that is, those who are blind or have low vision) in braille reading accuracy and examines potential correlations among the error categories on the basis of gender, age at loss of vision, and level of education. Methods: Twenty-one visually impaired…
Thomas, Cibu; Ye, Frank Q; Irfanoglu, M Okan; Modi, Pooja; Saleem, Kadharbatcha S; Leopold, David A; Pierpaoli, Carlo
2014-11-18
Tractography based on diffusion-weighted MRI (DWI) is widely used for mapping the structural connections of the human brain. Its accuracy is known to be limited by technical factors affecting in vivo data acquisition, such as noise, artifacts, and data undersampling resulting from scan time constraints. It generally is assumed that improvements in data quality and implementation of sophisticated tractography methods will lead to increasingly accurate maps of human anatomical connections. However, assessing the anatomical accuracy of DWI tractography is difficult because of the lack of independent knowledge of the true anatomical connections in humans. Here we investigate the future prospects of DWI-based connectional imaging by applying advanced tractography methods to an ex vivo DWI dataset of the macaque brain. The results of different tractography methods were compared with maps of known axonal projections from previous tracer studies in the macaque. Despite the exceptional quality of the DWI data, none of the methods demonstrated high anatomical accuracy. The methods that showed the highest sensitivity showed the lowest specificity, and vice versa. Additionally, anatomical accuracy was highly dependent upon parameters of the tractography algorithm, with different optimal values for mapping different pathways. These results suggest that there is an inherent limitation in determining long-range anatomical projections based on voxel-averaged estimates of local fiber orientation obtained from DWI data that is unlikely to be overcome by improvements in data acquisition and analysis alone.
ERIC Educational Resources Information Center
Luce, Christine; Kirnan, Jean P.
2016-01-01
Contradictory results have been reported regarding the accuracy of various methods used to assess student learning in higher education. The current study examined student learning outcomes across a multi-section, multi-instructor psychology research course with both indirect and direct assessments in a sample of 67 undergraduate students. The…
Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad
2018-02-01
Renal allograft rejection diagnosis depends on assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability regarding interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods along with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (Aperio nuclear algorithm for CD3+ cells/mm² and Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+%) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm²). Based on visual inspections of "markup" images, the CD3 quantitation algorithms produced adequate accuracy. Additionally, the CD3 quantitation algorithms correlated with each other and also with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 to < 0.0001). Methods for assessing inflammation suggested a progression through the tubulointerstitial ACR grades, with statistically different results in borderline versus other ACR types, in all but the custom methods. Assessment of CD3-stained slides using various open source image analysis algorithms presents salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.
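Threshold-based positive-pixel quantitation of the kind the authors implemented in ImageJ can be sketched in a few lines. This is a generic analogue, not the authors' algorithm; the image and threshold are synthetic:

```python
import numpy as np

def positive_pixel_fraction(stain_channel, threshold=0.3):
    """Fraction of pixels whose stain intensity exceeds a threshold.

    stain_channel : 2-D array of per-pixel stain intensity in [0, 1]
                    (e.g. a DAB channel after colour deconvolution).
    """
    positive = stain_channel > threshold
    return positive.sum() / stain_channel.size

# Synthetic 100x100 "slide": faint background 0.1, a 20x20 strongly stained focus 0.8
img = np.full((100, 100), 0.1)
img[40:60, 40:60] = 0.8
frac = positive_pixel_fraction(img)
```

Dividing a positive-pixel (or maxima) count by the scanned tissue area in mm² yields the density-style measure (cells/mm²) used for grading comparisons.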
Runge-Kutta methods combined with compact difference schemes for the unsteady Euler equations
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao
1992-01-01
Recent developments using compact difference schemes to solve the Navier-Stokes equations show spectral-like accuracy. A study was made of the numerical characteristics of various combinations of the Runge-Kutta (RK) methods and compact difference schemes to calculate the unsteady Euler equations. The accuracy of finite difference schemes is assessed based on evaluations of the dissipative error. The objectives are reducing the numerical damping and, at the same time, preserving numerical stability. While this approach has had tremendous success in solving steady flows, the numerical characteristics of unsteady calculations remain largely unclear. For unsteady flows, in addition to the dissipative errors, the phase velocity and harmonic content of the numerical results are of concern. As a result of the discretization procedure, the simulated unsteady flow motions actually propagate in a dispersive numerical medium. Consequently, the dispersion characteristics of the numerical schemes, which relate the phase velocity and wave number, may greatly impact the numerical accuracy. The aim is to assess the numerical accuracy of the simulated results. To this end, Fourier analysis is used to provide the dispersion relations of various numerical schemes. First, a detailed investigation of the existing RK methods is carried out. A generalized form of an N-step RK method is derived. With this generalized form, the criteria are derived for the three- and four-step RK methods to be third- and fourth-order time accurate for non-linear equations, e.g., flow equations. These criteria are then applied to commonly used RK methods such as Jameson's 3-step and 4-step schemes and Wray's algorithm to identify the accuracy of the methods. For the spatial discretization, compact difference schemes are presented. The schemes are formulated in operator form to render them suitable for the Fourier analyses. The performance of the numerical methods is shown by numerical examples.
These examples are described in detail. The third case is a two-dimensional simulation of a Lamb vortex in a uniform flow. This calculation provides a realistic assessment of various finite difference schemes in terms of the conservation of the vortex strength and the harmonic content after travelling a substantial distance. The numerical implementation of Giles' non-reflective equations coupled with the characteristic equations as the boundary condition is discussed in detail. Finally, the single vortex calculation is extended to simulate vortex pairing. For distances between the two vortices less than a threshold value, numerical results show crisp resolution of the vortex merging.
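The Fourier (dispersion) analysis described above compares a scheme's modified wavenumber against the exact one. A sketch for explicit second-order central differencing versus the standard fourth-order Padé compact scheme (textbook formulas, not necessarily the exact schemes used in the report):

```python
import numpy as np

# Modified wavenumber k*h as a function of kh for two spatial schemes.
def mod_wavenumber_central2(kh):
    """Explicit 2nd-order central difference: f'_i = (f_{i+1} - f_{i-1}) / 2h."""
    return np.sin(kh)

def mod_wavenumber_pade4(kh):
    """4th-order Pade compact scheme:
    (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/4h)(f_{i+1} - f_{i-1})."""
    return 1.5 * np.sin(kh) / (1.0 + 0.5 * np.cos(kh))

kh = np.linspace(0.0, np.pi, 200)
err2 = np.abs(mod_wavenumber_central2(kh) - kh)   # dispersion error, explicit
err4 = np.abs(mod_wavenumber_pade4(kh) - kh)      # dispersion error, compact
```

The compact scheme's modified wavenumber stays close to the exact line (error of order (kh)^5 at low wavenumbers) over a far wider band than the explicit scheme, which is what "spectral-like" accuracy refers to: waves are propagated at nearly the correct phase velocity up to much higher wavenumbers.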
NASA Astrophysics Data System (ADS)
Mao, Chao; Chen, Shou
2017-01-01
Because the traditional entropy value method still yields low evaluation accuracy when assessing the performance of mining projects, a performance evaluation model for mineral projects founded on an improved entropy method is proposed. First, a new weight assignment model is established, based on compatibility matrix analysis from the analytic hierarchy process (AHP) together with the entropy value method: once the compatibility matrix analysis achieves the consistency requirements, any differences between the subjective and objective weights are reconciled by moderately adjusting their proportions. On this basis, a fuzzy evaluation matrix is then constructed for performance evaluation. Simulation experiments show that, compared with the traditional entropy and compatibility matrix analysis methods, the proposed performance evaluation model for mining projects based on the improved entropy value method has higher assessment accuracy.
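A minimal sketch of the entropy-weight computation and a simple subjective/objective blend; the decision matrix, AHP weights, and blending proportion `alpha` are illustrative assumptions, and the compatibility-matrix consistency check is omitted:

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights by the entropy method.

    X : (alternatives x criteria) decision matrix with positive entries.
    """
    P = X / X.sum(axis=0)                         # column-normalised proportions
    m = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)  # entropy per criterion, in [0, 1]
    d = 1.0 - E                                   # degree of diversification
    return d / d.sum()

def combine_weights(subjective, objective, alpha=0.5):
    """Blend AHP (subjective) and entropy (objective) weights; alpha sets the proportion."""
    w = alpha * np.asarray(subjective) + (1 - alpha) * np.asarray(objective)
    return w / w.sum()

# Hypothetical 3 projects x 3 performance indicators
X = np.array([[0.8, 120.0, 3.0],
              [0.6, 150.0, 2.5],
              [0.9, 100.0, 3.5]])
w_obj = entropy_weights(X)
w_sub = np.array([0.5, 0.3, 0.2])                 # hypothetical AHP weights
w = combine_weights(w_sub, w_obj)
```

Criteria whose values vary more across projects carry lower entropy and therefore higher objective weight; the blend lets the subjective AHP judgment moderate that purely data-driven outcome.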
Greene, Barry R; Redmond, Stephen J; Caulfield, Brian
2017-05-01
Falls are the leading global cause of accidental death and disability in older adults and are the most common cause of injury and hospitalization. Accurate, early identification of patients at risk of falling, could lead to timely intervention and a reduction in the incidence of fall-related injury and associated costs. We report a statistical method for fall risk assessment using standard clinical fall risk factors (N = 748). We also report a means of improving this method by automatically combining it, with a fall risk assessment algorithm based on inertial sensor data and the timed-up-and-go test. Furthermore, we provide validation data on the sensor-based fall risk assessment method using a statistically independent dataset. Results obtained using cross-validation on a sample of 292 community dwelling older adults suggest that a combined clinical and sensor-based approach yields a classification accuracy of 76.0%, compared to either 73.6% for sensor-based assessment alone, or 68.8% for clinical risk factors alone. Increasing the cohort size by adding an additional 130 subjects from a separate recruitment wave (N = 422), and applying the same model building and validation method, resulted in a decrease in classification performance (68.5% for combined classifier, 66.8% for sensor data alone, and 58.5% for clinical data alone). This suggests that heterogeneity between cohorts may be a major challenge when attempting to develop fall risk assessment algorithms which generalize well. Independent validation of the sensor-based fall risk assessment algorithm on an independent cohort of 22 community dwelling older adults yielded a classification accuracy of 72.7%. Results suggest that the present method compares well to previously reported sensor-based fall risk assessment methods in assessing falls risk. 
Implementation of objective fall risk assessment methods on a large scale has the potential to improve quality of care and lead to a reduction in associated hospital costs, due to fewer admissions and reduced injuries due to falling.
The Pot Calling the Kettle Black? A Comparison of Measures of Current Tobacco Use
ROSENMAN, ROBERT
2014-01-01
Researchers often use the discrepancy between self-reported and biochemically assessed active smoking status to argue that self-reported smoking status is not reliable, ignoring the limitations of biochemically assessed measures and treating it as the gold standard in their comparisons. Here, we employ econometric techniques to compare the accuracy of self-reported and biochemically assessed current tobacco use, taking into account measurement errors with both methods. Our approach allows estimating and comparing the sensitivity and specificity of each measure without directly observing true smoking status. The results, robust to several alternative specifications, suggest that there is no clear reason to think that one measure dominates the other in accuracy. PMID:25587199
NASA Astrophysics Data System (ADS)
Zhu, Yunqiang; Zhu, Huazhong; Lu, Heli; Ni, Jianguang; Zhu, Shaoxia
2005-10-01
Remote sensing dynamic monitoring of land use can detect changes in land use and update the current land use map, which is important for the rational utilization and scientific management of land resources. This paper discusses the technological procedure of remote sensing dynamic monitoring of land use, including the processing of remote sensing images, the extraction of annual land use change information, field survey, indoor post-processing and accuracy assessment. In particular, we emphasize comparative research on the choice of remote sensing rectification models, image fusion algorithms and accuracy assessment methods. Taking Anning district in Lanzhou as an example, we extract the land use change information of the district during 2002-2003, assess the monitoring accuracy and analyze the causes of land use change.
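Accuracy assessment for this kind of change monitoring is conventionally reported from a confusion matrix. A sketch with a hypothetical 3-class matrix (the counts are invented for illustration):

```python
import numpy as np

def accuracy_report(cm):
    """Overall accuracy, producer's/user's accuracy per class, and Cohen's kappa.

    cm : confusion matrix, rows = reference classes, columns = mapped classes.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    overall = np.trace(cm) / n
    producers = np.diag(cm) / cm.sum(axis=1)   # complement of omission error
    users = np.diag(cm) / cm.sum(axis=0)       # complement of commission error
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2   # chance agreement
    kappa = (overall - pe) / (1 - pe)
    return overall, producers, users, kappa

# Hypothetical 3-class matrix (no change, change to built-up, change to farmland)
cm = [[90, 5, 5],
      [4, 44, 2],
      [3, 1, 46]]
overall, prod, user, kappa = accuracy_report(cm)
```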
Mediterranean Land Use and Land Cover Classification Assessment Using High Spatial Resolution Data
NASA Astrophysics Data System (ADS)
Elhag, Mohamed; Boteva, Silvena
2016-10-01
Landscape fragmentation is widespread in Mediterranean regions and imposes substantial complications on several satellite image classification methods. To some extent, high spatial resolution data are able to overcome such complications. For better classification performance in Land Use Land Cover (LULC) mapping, the current research compares different classification methods for LULC mapping using Sentinel-2 as a source of high spatial resolution imagery. Both pixel-based and object-based classification algorithms were assessed: the pixel-based approach employs the Maximum Likelihood (ML), Artificial Neural Network (ANN) and Support Vector Machine (SVM) algorithms, while the object-based classification uses the Nearest Neighbour (NN) classifier. A Stratified Masking Process (SMP), which integrates a ranking process within the classes based on the spectral fluctuation of the sum of the training and testing sites, was implemented. An analysis of the overall and individual accuracy of the classification results of all four methods reveals that the SVM classifier was the most efficient overall, distinguishing most of the classes with the highest accuracy. NN succeeded in dealing with artificial surface classes in general, while agricultural area classes and forest and semi-natural area classes were segregated successfully with SVM. Furthermore, a comparative analysis indicates that the conventional classification method yielded better accuracy results than the SMP method overall with both classifiers used, ML and SVM.
Jedenmalm, Anneli; Noz, Marilyn E; Olivecrona, Henrik; Olivecrona, Lotta; Stark, Andre
2008-04-01
Polyethylene wear is an important cause of aseptic loosening in hip arthroplasty. Detection of significant wear usually happens late, since available diagnostic techniques are either not sensitive enough or too complicated and expensive for routine use. This study evaluates a new approach for measurement of linear wear of metal-backed acetabular cups using CT as the intended clinically feasible method. 8 retrieved uncemented metal-backed acetabular cups were scanned twice ex vivo using CT. The linear penetration depth of the femoral head into the cup was measured in the CT volumes using dedicated software. Landmark points were placed on the CT images of cup and head, and also on a reference plane, in order to calculate the wear vector magnitude and its angle to one of the axes. A coordinate-measuring machine was used to test the accuracy of the proposed CT method. For this purpose, the head diameters were also measured by both methods. The accuracy of the CT method was 0.6 mm for linear wear measurements and 27 degrees for the wear vector angle. No systematic difference was found between CT scans. This study on explanted acetabular cups shows that CT is capable of reliable measurement of linear wear in acetabular cups at a clinically relevant level of accuracy. It was also possible to use the method for assessment of the direction of wear.
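The wear vector magnitude and its angle to a reference axis follow from the displacement of the femoral head centre between landmarked states; a sketch with hypothetical coordinates (not the study's measurements):

```python
import numpy as np

def wear_vector(head_center_worn, head_center_ref, axis):
    """Wear vector magnitude (same units as input) and its angle (degrees) to an axis.

    head_center_worn / head_center_ref : 3-D femoral head centre relative to the
    cup, in the worn and unworn (reference) state.
    """
    v = np.asarray(head_center_worn, float) - np.asarray(head_center_ref, float)
    magnitude = np.linalg.norm(v)
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    cos_theta = np.clip(v @ axis / magnitude, -1.0, 1.0)
    return magnitude, np.degrees(np.arccos(cos_theta))

# Hypothetical penetration: 1.2 mm along the cup axis with a 0.3 mm lateral component
mag, ang = wear_vector([0.3, 0.0, 1.2], [0.0, 0.0, 0.0], axis=[0, 0, 1])
```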
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martens, Milou H., E-mail: mh.martens@hotmail.com; Department of Surgery, Maastricht University Medical Center, Maastricht; GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht
2015-12-01
Purpose: To review the available literature on tumor size/volume measurements on magnetic resonance imaging for response assessment after chemoradiotherapy, and to validate these cut-offs in an independent multicenter patient cohort. Methods and Materials: The study included 2 parts. (1) Review of the literature: articles were included that assessed the accuracy of tumor size/volume measurements on magnetic resonance imaging for tumor response assessment. Size/volume cut-offs were extracted; (2) Multicenter validation: extracted cut-offs from the literature were tested in a multicenter cohort (n=146). Accuracies were calculated and compared with reported results from the literature. Results: The review included 14 articles, in which 3 different measurement methods were assessed: (1) tumor length; (2) 3-dimensional tumor size; and (3) whole volume. Study outcomes consisted of (1) complete response (ypT0) versus residual tumor; (2) tumor regression grade 1 to 2 versus 3 to 5; and (3) T-downstaging (ypT
Pardo, Scott; Simmons, David A
2016-09-01
The relationship between International Organization for Standardization (ISO) accuracy criteria and mean absolute relative difference (MARD), 2 methods for assessing the accuracy of blood glucose meters, is complex. While lower MARD values are generally better than higher MARD values, it is not possible to define a particular MARD value that ensures a blood glucose meter will satisfy the ISO accuracy criteria. The MARD value that ensures passing the ISO accuracy test can be described only as a probabilistic range. In this work, a Bayesian model is presented to represent the relationship between ISO accuracy criteria and MARD. Under the assumptions made in this work, there is nearly a 100% chance of satisfying ISO 15197:2013 accuracy requirements if the MARD value is between 3.25% and 5.25%. © 2016 Diabetes Technology Society.
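The probabilistic link between MARD and ISO 15197:2013 pass rates can be illustrated by Monte Carlo simulation; the multiplicative-Gaussian error model and 5% CV below are simplifying assumptions for illustration, not the paper's Bayesian model:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_meter(ref, cv):
    """Meter readings with multiplicative Gaussian error of coefficient of variation cv."""
    return ref * (1.0 + cv * rng.standard_normal(ref.shape))

def mard(meas, ref):
    """Mean absolute relative difference, in percent."""
    return np.mean(np.abs(meas - ref) / ref) * 100.0

def iso15197_pass(meas, ref):
    """ISO 15197:2013: >=95% of results within +/-15 mg/dL (ref < 100 mg/dL)
    or within +/-15% (ref >= 100 mg/dL)."""
    tol = np.where(ref < 100.0, 15.0, 0.15 * ref)
    return np.mean(np.abs(meas - ref) <= tol) >= 0.95

ref = rng.uniform(40.0, 400.0, 10000)     # reference glucose values, mg/dL
meas = simulate_meter(ref, cv=0.05)       # a 5% CV meter
m = mard(meas, ref)
ok = iso15197_pass(meas, ref)
```

Under this error model a 5% CV meter lands in the 3.25-5.25% MARD range quoted above and passes the ISO criterion comfortably, since the ±15% band corresponds to three standard deviations.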
Hsu, David
2015-09-27
Clustering methods are often used to model energy consumption for two reasons. First, clustering is often used to process data and to improve the predictive accuracy of subsequent energy models. Second, stable clusters that are reproducible with respect to non-essential changes can be used to group, target, and interpret observed subjects. However, it is well known that clustering methods are highly sensitive to the choice of algorithms and variables. This can lead to misleading assessments of predictive accuracy and mis-interpretation of clusters in policymaking. This paper therefore introduces two methods to the modeling of energy consumption in buildings: clusterwise regression, also known as latent class regression, which integrates clustering and regression simultaneously; and cluster validation methods to measure stability. Using a large dataset of multifamily buildings in New York City, clusterwise regression is compared to common two-stage algorithms that use K-means and model-based clustering with linear regression. Predictive accuracy is evaluated using 20-fold cross validation, and the stability of the perturbed clusters is measured using the Jaccard coefficient. These results show that there seems to be an inherent tradeoff between prediction accuracy and cluster stability. This paper concludes by discussing which clustering methods may be appropriate for different analytical purposes.
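Jaccard-based bootstrap stability of the kind used here (Hennig-style) can be sketched with a plain k-means: cluster the full data, re-cluster bootstrap resamples, and score each original cluster by its best-matching Jaccard overlap among the bootstrap clusters. The two-blob data below is synthetic:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard coefficient between two index sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def kmeans(X, k, rng, n_iter=50):
    """Plain Lloyd's k-means (enough for a stability sketch)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def cluster_stability(X, k, n_boot=20, seed=0):
    """Mean best-match Jaccard similarity between full-data clusters and
    clusters recomputed on bootstrap resamples."""
    rng = np.random.default_rng(seed)
    base = kmeans(X, k, rng)
    scores = []
    for _ in range(n_boot):
        idx = rng.choice(len(X), len(X), replace=True)
        lab = kmeans(X[idx], k, rng)
        sampled = set(idx.tolist())
        boot_clusters = [set(idx[lab == m].tolist()) for m in range(k)]
        for j in range(k):
            # Compare only over the points present in this bootstrap sample
            members = set(np.flatnonzero(base == j).tolist()) & sampled
            if members:
                scores.append(max(jaccard(members, c) for c in boot_clusters))
    return float(np.mean(scores))

# Two well-separated blobs -> stability near 1
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (60, 2)), rng.normal(5.0, 0.3, (60, 2))])
stab = cluster_stability(X, 2)
```

Stability values near 1 indicate clusters that survive resampling; values that collapse under perturbation are the "non-essential changes" problem the abstract warns about.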
Westwood, A; Bullock, D G; Whitehead, T P
1986-01-01
Hexokinase methods for serum glucose assay appeared to give slightly but consistently higher inter-laboratory coefficients of variation than all methods combined in the UK External Quality Assessment Scheme; their performance over a two-year period was therefore compared with that for three groups of glucose oxidase methods. This assessment showed no intrinsic inferiority in the hexokinase method. The greater variation may be due to the more heterogeneous group of instruments, particularly discrete analysers, on which the method is used. The Beckman Glucose Analyzer and Astra group (using a glucose oxidase method) showed the least inter-laboratory variability but also the lowest mean value. No comment is offered on the absolute accuracy of any of the methods.
Williams, John; Bialer, Meir; Johannessen, Svein I; Krämer, Günther; Levy, René; Mattson, Richard H; Perucca, Emilio; Patsalos, Philip N; Wilson, John F
2003-01-01
To assess interlaboratory variability in the determination of serum levels of new antiepileptic drugs (AEDs). Lyophilised serum samples containing clinically relevant concentrations of felbamate (FBM), gabapentin (GBP), lamotrigine (LTG), the monohydroxy derivative of oxcarbazepine (OCBZ; MHD), tiagabine (TGB), topiramate (TPM), and vigabatrin (VGB) were distributed monthly among 70 laboratories participating in the international Heathcontrol External Quality Assessment Scheme (EQAS). Assay results returned over a 15-month period were evaluated for precision and accuracy. The most frequently measured compound was LTG (65), followed by MHD (39), GBP (19), TPM (18), VGB (15), FBM (16), and TGB (8). High-performance liquid chromatography was the most commonly used assay technique for all drugs except for TPM, for which two thirds of laboratories used a commercial immunoassay. For all assay methods combined, precision was <11% for MHD, FBM, TPM, and LTG, close to 15% for GBP and VGB, and as high as 54% for TGB (p < 0.001). Mean accuracy values were <10% for all drugs other than TGB, for which measured values were on average 13.9% higher than spiked values, with a high variability around the mean (45%). No differences in precision and accuracy were found between methods, except for TPM, for which gas chromatography showed poorer accuracy compared with immunoassay and gas chromatography-mass spectrometry. With the notable exception of TGB, interlaboratory variability in the determination of new AEDs was comparable to that reported with older-generation agents. Poor assay performance is related more to individual operators than to the intrinsic characteristics of the method applied. Participation in an EQAS scheme is recommended to ensure adequate control of assay variability in therapeutic drug monitoring.
Bedside red cell volumetry by low-dose carboxyhaemoglobin dilution using expiratory gas analysis.
Sawano, M; Mato, T; Tsutsumi, H
2006-02-01
We developed a non-invasive, continuous, high-resolution method of measuring carboxyhaemoglobin fraction (COHb%) using expiratory gas analysis (EGA). We assessed whether application of EGA to carboxyhaemoglobin dilution provides red cell volume (RCV) measurement with accuracy equivalent to that of CO-haemoximetry, with a smaller infusion volume of carbon-monoxide-saturated autologous blood (COB). We assessed the agreement between repeated COHb% measurements by EGA and simultaneous measurement by CO-haemoximetry, using a Bland-Altman plot, in healthy subjects and patients with artificially controlled ventilation and no radiological evidence of pulmonary oedema or atelectasis. We assessed the agreement between RCV measurements by EGA with infusion of 20 ml of COB (RCVEGA) and RCV measurements by CO-haemoximetry with infusion of 100 ml of COB (RCVHEM), in healthy subjects. The 'limits of agreement' between COHb% measurement by EGA (1 min average) and CO-haemoximetry were -0.09 and 0.08% in healthy subjects, and -0.11 and 0.09% in patients. Given the resolution of CO-haemoximetry (0.1%), the accuracy of EGA was equivalent to or greater than that of CO-haemoximetry. The 'limits of agreement' between RCVEGA and RCVHEM were -0.14 and 0.15 litre. Given the average resolution of RCVHEM (0.14 litre), the accuracy of RCVEGA was equivalent to that of RCVHEM. EGA provided non-invasive, accurate, continuous, high-resolution COHb% measurements. Applying EGA to carboxyhaemoglobin dilution, we achieved RCV measurements with accuracy equivalent to that of CO-haemoximetry, with one-fifth of the COB infusion volume. However, clinical application of the method is limited to patients with no radiological evidence of pulmonary oedema or atelectasis.
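The agreement analysis used above is the Bland-Altman method: the mean bias of the paired differences and the 95% limits of agreement around it. A minimal sketch; the paired COHb% readings below are hypothetical, not study data:

```python
import statistics

def bland_altman_limits(method_a, method_b):
    """Bland-Altman analysis: mean bias of the paired differences and the
    95% limits of agreement (bias +/- 1.96 * SD of the differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired COHb% readings (EGA vs. CO-haemoximetry)
ega = [1.52, 1.48, 1.61, 1.55, 1.50, 1.58]
hem = [1.50, 1.50, 1.58, 1.57, 1.49, 1.55]
bias, lower, upper = bland_altman_limits(ega, hem)
```

Two methods "agree" in the Bland-Altman sense when the limits of agreement are narrower than the clinically acceptable difference, here compared against the 0.1% resolution of CO-haemoximetry.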
Advantages of multigrid methods for certifying the accuracy of PDE modeling
NASA Technical Reports Server (NTRS)
Forester, C. K.
1981-01-01
Numerical techniques for assessing and certifying the accuracy of the modeling of partial differential equations (PDE) to the user's specifications are analyzed. Examples of the certification process with conventional techniques are summarized for the three dimensional steady state full potential and the two dimensional steady Navier-Stokes equations using fixed grid methods (FG). The advantages of the Full Approximation Storage (FAS) scheme of the multigrid technique of A. Brandt compared with the conventional certification process of modeling PDE are illustrated in one dimension with the transformed potential equation. Inferences are drawn for how MG will improve the certification process of the numerical modeling of two and three dimensional PDE systems. Elements of the error assessment process that are common to FG and MG are analyzed.
Linden, Ariel
2006-04-01
Diagnostic or predictive accuracy concerns are common in all phases of a disease management (DM) programme, and ultimately play an influential role in the assessment of programme effectiveness. Areas such as the identification of diseased patients, predictive modelling of future health status and costs, and risk stratification are just a few of the domains in which assessment of accuracy is beneficial, if not critical. The most commonly used analytical model for this purpose is the standard 2 x 2 table method in which sensitivity and specificity are calculated. However, there are several limitations to this approach, including the reliance on a single defined criterion or cut-off for determining a true-positive result, use of non-standardized measurement instruments and sensitivity to outcome prevalence. This paper introduces receiver operating characteristic (ROC) analysis as a more appropriate and useful technique for assessing diagnostic and predictive accuracy in DM. Its advantages include: testing accuracy across the entire range of scores, thereby not requiring a predetermined cut-off point; easily examined visual and statistical comparisons across tests or scores; and independence from outcome prevalence. Therefore, the implementation of ROC as an evaluation tool should be strongly considered in the various phases of a DM programme.
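For the area under the ROC curve specifically, the cut-off-free property follows from its equivalent Mann-Whitney formulation: the AUC is the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch with hypothetical DM risk scores (illustrative values only):

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    the probability that a randomly chosen positive scores higher than
    a randomly chosen negative; ties count one-half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores from a DM predictive model (1 = event occurred)
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]
labels = [1,   1,   0,   1,   0,    0,   1,   0]
auc = roc_auc(scores, labels)
```

Note that no threshold appears anywhere in the computation, which is exactly the advantage over a single 2 x 2 table.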
Food Photography Is Not an Accurate Measure of Energy Intake in Obese, Pregnant Women.
Most, Jasper; Vallo, Porsha M; Altazan, Abby D; Gilmore, Linda Anne; Sutton, Elizabeth F; Cain, Loren E; Burton, Jeffrey H; Martin, Corby K; Redman, Leanne M
2018-04-01
To improve weight management in pregnant women, there is a need to deliver specific, data-based recommendations on energy intake. This cross-sectional study evaluated the accuracy of an electronic reporting method to measure daily energy intake in pregnant women compared with total daily energy expenditure (TDEE). Twenty-three obese [mean ± SEM body mass index (kg/m2): 36.9 ± 1.3] pregnant women (aged 28.3 ± 1.1 y) used a smartphone application to capture images of their food selection and plate waste in free-living conditions for ≥6 d in early (13-16 wk) and late (35-37 wk) pregnancy. Energy intake was evaluated by the smartphone application SmartIntake and compared with simultaneous assessment of TDEE obtained by doubly labeled water. Accuracy was defined as reported energy intake compared with TDEE (percentage of TDEE). Ecological momentary assessment prompts were used to enhance data reporting. Two one-sided t tests for the 2 methods were used to assess equivalency, which was considered significant when accuracy was >80%. Energy intake reported by the SmartIntake application was 63.4% ± 2.3% of TDEE measured by doubly labeled water (P = 1.00). Energy intake reported as snacks accounted for 17% ± 2% of reported energy intake. Participants who used their own phones compared with participants who used borrowed phones captured more images (P = 0.04) and had higher accuracy (73% ± 3% compared with 60% ± 3% of TDEE; P = 0.01). Reported energy intake as snacks was significantly associated with the accuracy of SmartIntake (P = 0.03). To improve data quality, excluding erroneous days of likely underreporting (<60% TDEE) improved the accuracy of SmartIntake, yet this was not equivalent to TDEE (-22% ± 1% of TDEE; P = 1.00). Energy intake in obese, pregnant women obtained with the use of an electronic reporting method (SmartIntake) does not accurately estimate energy intake compared with doubly labeled water.
However, accuracy improves by applying criteria to eliminate erroneous data. Further evaluation of electronic reporting in this population is needed to improve compliance, specifically for reporting frequent intake of small meals. This trial was registered at www.clinicaltrials.gov as NCT01954342.
Evaluation of design flood frequency methods for Iowa streams : final report, June 2009.
DOT National Transportation Integrated Search
2009-06-01
The objective of this project was to assess the predictive accuracy of flood frequency estimation for small Iowa streams based on the Rational Method, the NRCS curve number approach, and the Iowa Runoff Chart. The evaluation was based on comparis...
A method for assessing the accuracy of surgical technique in the correction of astigmatism.
Kaye, S B; Campbell, S H; Davey, K; Patterson, A
1992-12-01
Surgical results can be assessed as a function of what was aimed for, what was done, and what was achieved. One of the aims of refractive surgery is to reduce astigmatism; the smaller the postoperative astigmatism the better the result. What was done--that is, the surgical effect--can be calculated from the preoperative and postoperative astigmatism. A simplified formulation is described which facilitates the calculation (magnitude and direction) of this surgical effect. In addition, an expression for surgical accuracy is described, as a function of what was aimed for and what was achieved.
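The standard way to compute such a surgical effect is the doubled-angle vector representation of astigmatism (an astigmatism of magnitude C at axis θ maps to the vector (C cos 2θ, C sin 2θ), and the surgical effect is the vector difference of postoperative and preoperative states). The sketch below illustrates that general approach with hypothetical refraction values; it is not necessarily the exact formulation of the paper:

```python
import math

def surgical_effect(pre_mag, pre_axis_deg, post_mag, post_axis_deg):
    """Magnitude (dioptres) and axis (degrees) of the surgically induced
    astigmatism by doubled-angle vector subtraction (post minus pre)."""
    px = pre_mag * math.cos(math.radians(2 * pre_axis_deg))
    py = pre_mag * math.sin(math.radians(2 * pre_axis_deg))
    qx = post_mag * math.cos(math.radians(2 * post_axis_deg))
    qy = post_mag * math.sin(math.radians(2 * post_axis_deg))
    dx, dy = qx - px, qy - py
    mag = math.hypot(dx, dy)
    axis = math.degrees(math.atan2(dy, dx)) / 2 % 180  # halve back to axis space
    return mag, axis

# Hypothetical case: 3.0 D of astigmatism at axis 0 reduced to 1.0 D at axis 0
mag, axis = surgical_effect(3.0, 0.0, 1.0, 0.0)
```

The doubling of the axis angle is what makes axes 0° and 180° equivalent, so that astigmatism states add and subtract consistently as vectors.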
OpenMEEG: opensource software for quasistatic bioelectromagnetics
2010-01-01
Background Interpreting and controlling bioelectromagnetic phenomena require realistic physiological models and accurate numerical solvers. A semi-realistic model often used in practice is the piecewise constant conductivity model, for which only the interfaces have to be meshed. This simplified model makes it possible to use Boundary Element Methods. Unfortunately, most Boundary Element solutions are confronted with accuracy issues when the conductivity ratio between neighboring tissues is high, as for instance the scalp/skull conductivity ratio in electro-encephalography. To overcome this difficulty, we proposed a new method called the symmetric BEM, which is implemented in the OpenMEEG software. The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performances with other competing software packages. Methods We have run a benchmark study in the field of electro- and magneto-encephalography, in order to compare the accuracy of OpenMEEG with other freely distributed forward solvers. We considered spherical models, for which analytical solutions exist, and we designed randomized meshes to assess the variability of the accuracy. Two measures were used to characterize the accuracy: the Relative Difference Measure and the Magnitude ratio. The comparisons were run, either with a constant number of mesh nodes, or a constant number of unknowns across methods. Computing times were also compared. Results We observed more pronounced differences in accuracy in electroencephalography than in magnetoencephalography. The methods could be classified into three categories: the linear collocation methods, which run very fast but with low accuracy; the linear collocation methods with the isolated skull approach, for which the accuracy is improved; and OpenMEEG, which clearly outperforms the others.
As far as speed is concerned, OpenMEEG is on par with the other methods for a constant number of unknowns, and is hence faster for a prescribed accuracy level. Conclusions This study clearly shows that OpenMEEG represents the state of the art for forward computations. Moreover, our software development strategies have made it handy to use and to integrate with other packages. The bioelectromagnetic research community should therefore be able to benefit from OpenMEEG with a limited development effort. PMID:20819204
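The two accuracy measures used in the benchmark can be sketched as follows, assuming the numerical and analytical forward solutions are given as vectors of sampled sensor values (the electrode values below are hypothetical):

```python
import math

def _norm(v):
    return math.sqrt(sum(x * x for x in v))

def rdm(v_num, v_ana):
    """Relative Difference Measure: l2 distance between the two solutions
    after each is normalised to unit norm; 0 means identical topography."""
    na, nb = _norm(v_num), _norm(v_ana)
    return math.sqrt(sum((a / na - b / nb) ** 2
                         for a, b in zip(v_num, v_ana)))

def mag_ratio(v_num, v_ana):
    """Magnitude ratio: norm of the numerical solution over the norm of
    the analytical one; 1 means the amplitudes agree."""
    return _norm(v_num) / _norm(v_ana)

# Hypothetical forward-solution samples at three electrodes
numerical = [1.1, 2.0, -0.9]
analytical = [1.0, 2.0, -1.0]
```

RDM isolates topography errors (shape of the field) while the magnitude ratio isolates amplitude errors, which is why the two are reported separately.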
Mordini, Federico E; Haddad, Tariq; Hsu, Li-Yueh; Kellman, Peter; Lowrey, Tracy B; Aletras, Anthony H; Bandettini, W Patricia; Arai, Andrew E
2014-01-01
This study's primary objective was to determine the sensitivity, specificity, and accuracy of fully quantitative stress perfusion cardiac magnetic resonance (CMR) versus a reference standard of quantitative coronary angiography. We hypothesized that fully quantitative analysis of stress perfusion CMR would have high diagnostic accuracy for identifying significant coronary artery stenosis and exceed the accuracy of semiquantitative measures of perfusion and qualitative interpretation. Relatively few studies apply fully quantitative CMR perfusion measures to patients with coronary disease and comparisons to semiquantitative and qualitative methods are limited. Dual bolus dipyridamole stress perfusion CMR exams were performed in 67 patients with clinical indications for assessment of myocardial ischemia. Stress perfusion images alone were analyzed with a fully quantitative perfusion (QP) method and 3 semiquantitative methods including contrast enhancement ratio, upslope index, and upslope integral. Comprehensive exams (cine imaging, stress/rest perfusion, late gadolinium enhancement) were analyzed qualitatively with 2 methods including the Duke algorithm and standard clinical interpretation. A 70% or greater stenosis by quantitative coronary angiography was considered abnormal. The optimum diagnostic threshold for QP determined by receiver-operating characteristic curve occurred when endocardial flow decreased to <50% of mean epicardial flow, which yielded a sensitivity of 87% and specificity of 93%. The area under the curve for QP was 92%, which was superior to semiquantitative methods: contrast enhancement ratio: 78%; upslope index: 82%; and upslope integral: 75% (p = 0.011, p = 0.019, p = 0.004 vs. QP, respectively). Area under the curve for QP was also superior to qualitative methods: Duke algorithm: 70%; and clinical interpretation: 78% (p < 0.001 and p < 0.001 vs. QP, respectively). 
Fully quantitative stress perfusion CMR has high diagnostic accuracy for detecting obstructive coronary artery disease. QP outperforms semiquantitative measures of perfusion and qualitative methods that incorporate a combination of cine, perfusion, and late gadolinium enhancement imaging. These findings suggest a potential clinical role for quantitative stress perfusion CMR. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Evaluation of methods for managing censored results when calculating the geometric mean.
Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M
2018-01-01
Currently, there are conflicting views on the best statistical methods for managing censored environmental data. The method commonly applied by environmental science researchers and professionals is to substitute half the limit of reporting for derivation of summary statistics. This approach has been criticised by some researchers, raising questions around the interpretation of historical scientific data. This study evaluated four complete soil datasets, at three levels of simulated censorship, to test the accuracy of a range of censored data management methods for calculation of the geometric mean. The methods assessed included removal of censored results, substitution of a fixed value (near zero, half the limit of reporting and the limit of reporting), substitution by nearest neighbour imputation, maximum likelihood estimation, regression on order statistics and Kaplan-Meier/survival analysis. This is the first time such a comprehensive range of censored data management methods have been applied to assess the accuracy of calculation of the geometric mean. The results of this study show that, for describing the geometric mean, the simple method of substitution of half the limit of reporting is comparable or more accurate than alternative censored data management methods, including nearest neighbour imputation methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
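The half-LOR substitution the study favours can be sketched as follows; the soil concentrations and the limit of reporting below are hypothetical:

```python
import math

def geometric_mean(values):
    """Geometric mean via the mean of logs."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def censor(values, lor):
    """Mark values below the limit of reporting (LOR) as censored (None)."""
    return [(v if v >= lor else None) for v in values]

def gm_substitute_half_lor(censored, lor):
    """Substitute half the LOR for censored results -- the common practice
    the study found comparable or better for the geometric mean."""
    return geometric_mean([(v if v is not None else lor / 2)
                           for v in censored])

def gm_drop_censored(censored):
    """Simply remove censored results before calculating."""
    return geometric_mean([v for v in censored if v is not None])

# Hypothetical soil-concentration data set (mg/kg) with LOR = 1.0
data = [0.4, 0.7, 1.2, 2.5, 3.1, 5.0, 8.2]
obs = censor(data, 1.0)
true_gm = geometric_mean(data)
```

Dropping censored results discards the low tail of the distribution and so biases the geometric mean upward, which is why substitution generally tracks the true value more closely.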
Cohen, Wayne R; Hayes-Gill, Barrie
2014-06-01
To evaluate the performance of external electronic fetal heart rate and uterine contraction monitoring according to maternal body mass index. Secondary analysis of prospective equivalence study. Three US urban teaching hospitals. Seventy-four parturients with a normal term pregnancy. The parent study assessed performance of two methods of external fetal heart rate monitoring (abdominal fetal electrocardiogram and Doppler ultrasound) and of uterine contraction monitoring (electrohysterography and tocodynamometry) compared with internal monitoring with fetal scalp electrode and intrauterine pressure transducer. Reliability of external techniques was assessed by the success rate and positive percent agreement with internal methods. Bland-Altman analysis determined accuracy. We analyzed data from that study according to maternal body mass index. We assessed the relationship between body mass index and monitor performance with linear regression, using body mass index as the independent variable and measures of reliability and accuracy as dependent variables. There was no significant association between maternal body mass index and any measure of reliability or accuracy for abdominal fetal electrocardiogram. By contrast, the overall positive percent agreement for Doppler ultrasound declined (p = 0.042), and the root mean square error from the Bland-Altman analysis increased in the first stage (p = 0.029) with increasing body mass index. Uterine contraction recordings from electrohysterography and tocodynamometry showed no significant deterioration related to maternal body mass index. Accuracy and reliability of fetal heart rate monitoring using abdominal fetal electrocardiogram was unaffected by maternal obesity, whereas performance of ultrasound degraded directly with maternal size. Both electrohysterography and tocodynamometry were unperturbed by obesity. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.
Quantitative falls risk estimation through multi-sensor assessment of standing balance.
Greene, Barry R; McGrath, Denise; Walsh, Lorcan; Doheny, Emer P; McKeown, David; Garattini, Chiara; Cunningham, Clodagh; Crosby, Lisa; Caulfield, Brian; Kenny, Rose A
2012-12-01
Falls are the most common cause of injury and hospitalization and one of the principal causes of death and disability in older adults worldwide. Measures of postural stability have been associated with the incidence of falls in older adults. The aim of this study was to develop a model that accurately classifies fallers and non-fallers using novel multi-sensor quantitative balance metrics that can be easily deployed into a home or clinic setting. We compared the classification accuracy of our model with an established method for falls risk assessment, the Berg balance scale. Data were acquired using two sensor modalities--a pressure sensitive platform sensor and a body-worn inertial sensor, mounted on the lower back--from 120 community dwelling older adults (65 with a history of falls, 55 without, mean age 73.7 ± 5.8 years, 63 female) while performing a number of standing balance tasks in a geriatric research clinic. Results obtained using a support vector machine yielded a mean classification accuracy of 71.52% (95% CI: 68.82-74.28) in classifying falls history, obtained using one model classifying all data points. Considering male and female participant data separately yielded classification accuracies of 72.80% (95% CI: 68.85-77.17) and 73.33% (95% CI: 69.88-76.81) respectively, leading to a mean classification accuracy of 73.07% in identifying participants with a history of falls. Results compare favourably to those obtained using the Berg balance scale (mean classification accuracy: 59.42% (95% CI: 56.96-61.88)). Results from the present study could lead to a robust method for assessing falls risk in both supervised and unsupervised environments.
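A classification accuracy with its 95% confidence interval, as reported above, can be computed as in the sketch below. The normal approximation to the binomial used here is an assumption for illustration, and the predictions are hypothetical; the study itself obtained its predictions from a support vector machine:

```python
import math

def accuracy_with_ci(predicted, actual, z=1.96):
    """Classification accuracy with a 95% confidence interval using the
    normal approximation to the binomial proportion."""
    n = len(actual)
    correct = sum(p == a for p, a in zip(predicted, actual))
    acc = correct / n
    half = z * math.sqrt(acc * (1 - acc) / n)
    return acc, max(0.0, acc - half), min(1.0, acc + half)

# Hypothetical faller (1) / non-faller (0) predictions for 20 participants
actual    = [1] * 10 + [0] * 10
predicted = [1] * 7 + [0] * 3 + [0] * 8 + [1] * 2
acc, ci_lo, ci_hi = accuracy_with_ci(predicted, actual)
```

With realistic sample sizes (n = 120 in the study), the interval narrows roughly as 1/sqrt(n), which is why the reported CIs span only a few percentage points.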
Detecting long-term growth trends using tree rings: a critical evaluation of methods.
Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A
2015-05-01
Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods strongly differ in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs, by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy to detect strong imposed trends. However, these were considerably lower in the weak or no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences results of growth-trend studies.
We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analysis. Finally, we recommend SCI and RCS, as these methods showed highest reliability to detect long-term growth trends. © 2014 John Wiley & Sons Ltd.
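Of the four GDMs, basal area correction (BAC) is the simplest to state: successive stem diameters are converted into basal area increments, π/4 (d₂² − d₁²). A sketch with a hypothetical ring-width series:

```python
import math

def basal_area_increment(d_prev, d_curr):
    """Basal area correction (BAC): convert two successive stem diameters
    into a basal area growth increment, pi/4 * (d2^2 - d1^2)."""
    return math.pi / 4.0 * (d_curr ** 2 - d_prev ** 2)

def diameters_from_rings(ring_widths):
    """Reconstruct diameter after each year from a ring-width series
    (each ring adds twice its width to the diameter)."""
    d, out = 0.0, []
    for w in ring_widths:
        d += 2.0 * w
        out.append(d)
    return out

# Hypothetical ring widths (cm): constant width, so diameter grows linearly
# while basal area increments increase with size
rings = [0.5, 0.5, 0.5, 0.5]
diams = diameters_from_rings(rings)
incs = [basal_area_increment(a, b)
        for a, b in zip([0.0] + diams[:-1], diams)]
```

The example shows why BAC removes the geometric part of the age/size trend: constant ring width on a growing stem already implies rising basal area growth, so flat basal area growth would indicate a decline.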
The Accuracy Of Fuzzy Sugeno Method With Antropometry On Determination Natural Patient Status
NASA Astrophysics Data System (ADS)
Syahputra, Dinur; Tulus; Sawaluddin
2017-12-01
Anthropometry is one of the processes that can be used to assess nutritional status. In general, anthropometry is defined as body size considered in terms of nutrition, reviewed across various age levels and nutritional levels. Nutritional status is a description of the balance between an individual's nutritional intake and the needs of the body. Fuzzy logic admits degrees of truth between right and wrong, that is, values between 0 and 1. The Sugeno method is used because the calculation of nutritional status has so far still been done by anthropometry alone. Information technology is currently growing in every aspect, including calculation with data taken from anthropometry. In this case the calculation can use the Fuzzy Sugeno method, in order to determine the accuracy obtainable. The results show that Fuzzy Sugeno integrated with anthropometry has an accuracy of 81.48%.
A Global Optimization Methodology for Rocket Propulsion Applications
NASA Technical Reports Server (NTRS)
2001-01-01
While the response surface method is an effective method in engineering optimization, its accuracy is often affected by the use of a limited number of data points for model construction. In this chapter, the issues related to the accuracy of the RS approximations and possible ways of improving the RS model using appropriate treatments, including the iteratively re-weighted least square (IRLS) technique and the radial-basis neural networks, are investigated. A main interest is to identify ways to offer added capabilities for the RS method to be able to at least selectively improve the accuracy in regions of importance. An example is to target the high efficiency region of a fluid machinery design space so that the predictive power of the RS can be maximized when it matters most. Analytical models based on polynomials, with controlled level of noise, are used to assess the performance of these techniques.
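The IRLS idea can be illustrated on a one-variable response surface: fit by weighted least squares, then down-weight points with large residuals and refit until the weights stabilise. The Huber-type weight and the data below are illustrative assumptions, not the chapter's actual formulation:

```python
def irls_line(xs, ys, c=1.0, iters=100):
    """Iteratively re-weighted least squares for y ~ a + b*x with a
    Huber-type weight: observations with large residuals are
    down-weighted so noisy points distort the fitted surface less."""
    w = [1.0] * len(xs)
    a = b = 0.0
    for _ in range(iters):
        # weighted normal equations for the 2-parameter model
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = sw * sxx - sx * sx
        a = (sy * sxx - sx * sxy) / det
        b = (sw * sxy - sx * sy) / det
        # re-weight: w = 1 for small residuals, c/|r| for large ones
        w = [1.0 if abs(y - (a + b * x)) <= c else c / abs(y - (a + b * x))
             for x, y in zip(xs, ys)]
    return a, b

# Hypothetical samples of y = 2x with one grossly noisy observation
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 2.0, 4.0, 6.0, 20.0]
a, b = irls_line(xs, ys)
```

Ordinary least squares on this data gives a slope of 4.4, pulled heavily by the outlier; the re-weighted fit recovers a slope near 2, showing how IRLS selectively protects the regions where the data are trustworthy.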
Assessing Sitting across Contexts: Development of the Multicontext Sitting Time Questionnaire
ERIC Educational Resources Information Center
Whitfield, Geoffrey P.; Pettee Gabriel, Kelley K.; Kohl, Harold W., III.
2013-01-01
Purpose: To describe the development and preliminary evaluation of the Multicontext Sitting Time Questionnaire (MSTQ). Method: During development of the MSTQ, contexts and domains of sitting behavior were utilized as recall cues to improve the accuracy of sitting assessment. The terms "workday" and "nonworkday" were used to…
Internal Medicine Residents Do Not Accurately Assess Their Medical Knowledge
ERIC Educational Resources Information Center
Jones, Roger; Panda, Mukta; Desbiens, Norman
2008-01-01
Background: Medical knowledge is essential for appropriate patient care; however, the accuracy of internal medicine (IM) residents' assessment of their medical knowledge is unknown. Methods: IM residents predicted their overall percentile performance 1 week (on average) before and after taking the in-training exam (ITE), an objective and well…
In vitro bioaccessibility assays (IVBA) estimate arsenic (As) relative bioavailability (RBA) in contaminated soils to improve the accuracy of site-specific human exposure assessments and risk calculations. For an IVBA assay to gain acceptance for use in risk assessment, it must ...
ERIC Educational Resources Information Center
Seely, Sara Robertson; Fry, Sara Winstead; Ruppel, Margie
2011-01-01
An investigation into preservice teachers' information evaluation skills at a large university suggests that formative assessment can improve student achievement. Preservice teachers were asked to apply information evaluation skills in the areas of currency, relevancy, authority, accuracy, and purpose. The study used quantitative methods to assess…
Skinfold Assessment: Accuracy and Application
ERIC Educational Resources Information Center
Ball, Stephen; Swan, Pamela D.; Altena, Thomas S.
2006-01-01
Although not perfect, skinfolds (SK), or the measurement of fat under the skin, remains the most popular and practical method available to assess body composition on a large scale (Kuczmarski, Flegal, Campbell, & Johnson, 1994). Even for practitioners who have been using SK for years and are highly proficient at locating the correct anatomical…
NASA Astrophysics Data System (ADS)
Lotfy, Hayam Mahmoud; Hegazy, Maha Abdel Monem
2013-09-01
Four simple, specific, accurate and precise spectrophotometric methods manipulating ratio spectra were developed and validated for the simultaneous determination of simvastatin (SM) and ezetimibe (EZ), namely extended ratio subtraction (EXRSM), simultaneous ratio subtraction (SRSM), ratio difference (RDSM) and absorption factor (AFM). The proposed spectrophotometric procedures do not require any preliminary separation step. The accuracy, precision and linearity ranges of the proposed methods were determined; the methods were validated, and their specificity was assessed by analyzing synthetic mixtures containing the cited drugs. The four methods were applied for the determination of the cited drugs in tablets, and the obtained results were statistically compared with each other and with those of a reported HPLC method. The comparison showed that there is no significant difference between the proposed methods and the reported method regarding both accuracy and precision.
Hasslacher, Christoph; Kulozik, Felix; Platten, Isabel
2014-05-01
We investigated the analytical accuracy of 27 glucose monitoring systems (GMS) in a clinical setting, using the new ISO accuracy limits. In addition to measuring accuracy at blood glucose (BG) levels < 100 mg/dl and > 100 mg/dl, we also analyzed the devices' performance with respect to these criteria at 5 specific BG level ranges, making it possible to further differentiate between devices with regard to overall performance. Carbohydrate meals and insulin injections were used to induce an increase or decrease in BG levels in 37 insulin-dependent patients. Capillary blood samples were collected at 10-minute intervals, and BG levels determined simultaneously using GMS and a laboratory-based method. Results obtained via both methods were analyzed according to the new ISO criteria. Only 12 of 27 devices tested met overall requirements of the new ISO accuracy limits. When accuracy was assessed at BG levels < 100 mg/dl and > 100 mg/dl, criteria were met by 14 and 13 devices, respectively. A more detailed analysis involving 5 different BG level ranges revealed that 13 (48.1%) devices met the required criteria at BG levels between 50 and 150 mg/dl, whereas 19 (70.3%) met these criteria at BG levels above 250 mg/dl. The overall frequency of outliers was low. The assessment of analytical accuracy of GMS at a number of BG level ranges made it possible to further differentiate between devices with regard to overall performance, a process that is of particular importance given the user-centered nature of the devices' intended use. © 2014 Diabetes Technology Society.
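The "new ISO accuracy limits" are commonly understood as those of ISO 15197:2013 (an assumption here): at least 95% of meter results must lie within ±15 mg/dl of the reference for reference values below 100 mg/dl, and within ±15% at or above it. A sketch with hypothetical meter/reference pairs:

```python
def within_iso_limit(meter, reference):
    """True if one meter reading is within the ISO 15197:2013 accuracy
    limit for its reference value: +/-15 mg/dl when the reference is
    below 100 mg/dl, +/-15% otherwise."""
    if reference < 100:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.15 * reference

def meets_iso(pairs, required=0.95):
    """A system passes when at least 95% of paired results are in limits."""
    ok = sum(within_iso_limit(m, r) for m, r in pairs)
    return ok / len(pairs) >= required

# Hypothetical meter/laboratory reference pairs (mg/dl)
pairs = [(82, 90), (101, 95), (130, 120), (250, 240), (300, 260), (98, 110),
         (160, 150), (55, 50), (205, 200), (115, 118), (76, 72), (140, 139),
         (210, 230), (310, 290), (95, 88), (180, 170), (64, 60), (120, 125),
         (240, 250), (135, 130)]
```

Because the tolerance switches from absolute to relative at 100 mg/dl, assessing performance within narrower BG bands, as the study does, reveals differences that the single pass/fail criterion hides.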
Lebel, Karina; Hamel, Mathieu; Duval, Christian; Nguyen, Hung; Boissy, Patrick
2018-01-01
Joint kinematics can be assessed using orientation estimates from Attitude and Heading Reference Systems (AHRS). However, magnetically-perturbed environments affect the accuracy of the estimated orientations. This study investigates, both in controlled and human mobility conditions, a trial calibration technique based on a 2D photograph with a pose estimation algorithm to correct initial differences in AHRS inertial reference frames and improve joint angle accuracy. In controlled conditions, two AHRS were solidly affixed onto a wooden stick and a series of static and dynamic trials were performed in varying environments. Mean accuracy of relative orientation between the two AHRS was improved from 24.4° to 2.9° using the proposed correction method. In human conditions, AHRS were placed on the shank and the foot of a participant who performed repeated trials of straight walking and walking while turning, varying the level of magnetic perturbation in the starting environment and the walking speed. Mean joint orientation accuracy went from 6.7° to 2.8° using the correction algorithm. The impact of starting environment was also greatly reduced, up to a point where one could consider it as non-significant from a clinical point of view (maximum mean difference went from 8° to 0.6°). The results obtained demonstrate that the proposed method improves significantly the mean accuracy of AHRS joint orientation estimations in magnetically-perturbed environments and can be implemented in post processing of AHRS data collected during biomechanical evaluation of motion. Copyright © 2017 Elsevier B.V. All rights reserved.
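The error being corrected is an initial heading offset between the two sensors' inertial reference frames, which can be sketched with quaternions. The 24° offset below is hypothetical, and which side the correction multiplies on depends on the quaternion and frame conventions in use:

```python
import math

def q_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def relative_angle(q1, q2):
    """Angle (degrees) of the rotation taking orientation q1 to q2."""
    w = q_mul(q_conj(q1), q2)[0]
    return math.degrees(2 * math.acos(min(1.0, abs(w))))

def heading_quat(yaw_deg):
    """Rotation of yaw_deg about the vertical (z) axis."""
    h = math.radians(yaw_deg) / 2
    return (math.cos(h), 0.0, 0.0, math.sin(h))

# Two sensors report the same physical orientation, but their inertial
# frames differ by a hypothetical 24 deg heading offset
q_a = heading_quat(0.0)
q_b = heading_quat(24.0)
corrected = q_mul(heading_quat(-24.0), q_b)  # apply the estimated offset
```

Once the offset is estimated (here, from a photograph-based pose), a single fixed pre-rotation aligns the two frames, after which relative joint orientations are meaningful.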
2017-01-01
Background The accuracy of radiographic methods for dental age estimation is important for biological growth research and forensic applications. Accuracy of the two most commonly used systems (Demirjian and Willems) has been evaluated with conflicting results. This study investigates the accuracies of these methods for dental age estimation in different populations. Methods A search of PubMed, Scopus, Ovid, Database of Open Access Journals and Google Scholar was undertaken. Eligible studies published before December 28, 2016 were reviewed and analyzed. Meta-analysis was performed on 28 published articles using the Demirjian and/or Willems methods to estimate chronological age in 14,109 children (6,581 males, 7,528 females) age 3–18 years in studies using Demirjian’s method and 10,832 children (5,176 males, 5,656 females) age 4–18 years in studies using Willems’ method. The weighted mean difference at 95% confidence interval was used to assess accuracies of the two methods in predicting the chronological age. Results The Demirjian method significantly overestimated chronological age (p<0.05) in males age 3–15 and females age 4–16 when studies were pooled by age cohorts and sex. The majority of studies using Willems’ method did not report significant overestimation of ages in either sex. Overall, Demirjian’s method significantly overestimated chronological age compared to the Willems method (p<0.05). The weighted mean difference for the Demirjian method was 0.62 for males and 0.72 for females, while that of the Willems method was 0.26 for males and 0.29 for females. Conclusion The Willems method provides more accurate estimation of chronological age in different populations, while Demirjian’s method has a broad application in terms of determining maturity scores. However, accuracy of Demirjian age estimations is confounded by population variation when converting maturity scores to dental ages. 
For highest accuracy of age estimation, population-specific standards, rather than a universal standard or methods developed on other populations, need to be employed. PMID:29117240
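The weighted mean differences reported above follow from inverse-variance pooling across studies; a minimal fixed-effect sketch with hypothetical per-study values:

```python
import math

def weighted_mean_difference(studies):
    """Fixed-effect inverse-variance pooling: each study contributes its
    mean difference (estimated minus chronological age) weighted by
    1/SE^2. Returns the pooled difference and its standard error."""
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * d for w, (d, _) in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical per-study mean age differences in years (difference, SE)
demirjian_studies = [(0.70, 0.10), (0.55, 0.08), (0.62, 0.12)]
pooled, se = weighted_mean_difference(demirjian_studies)
```

Larger, more precise studies (smaller SE) dominate the pooled estimate, so the weighted mean difference is not simply the average of the per-study differences.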
Low-Storage, Explicit Runge-Kutta Schemes for the Compressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Kennedy, Chistopher A.; Carpenter, Mark H.; Lewis, R. Michael
1999-01-01
The derivation of low-storage explicit Runge-Kutta (ERK) schemes has been performed in the context of integrating the compressible Navier-Stokes equations via direct numerical simulation. Optimization of ERK methods is done across a broad range of properties, such as stability and accuracy efficiency, linear and nonlinear stability, error control reliability, step change stability, and dissipation/dispersion accuracy, subject to varying degrees of memory economization. Following van der Houwen and Wray, 16 ERK pairs are presented, using from two to five registers of memory per equation, per grid point, and having accuracies from third- to fifth-order. Methods have been assessed using the differential equation testing code DETEST and with the 1D wave equation. Two of the methods have been applied to the DNS of a compressible jet as well as methane-air and hydrogen-air flames. Derived 3(2) and 4(3) pairs are competitive with existing full-storage methods. Although a substantial efficiency penalty accompanies use of two- and three-register, fifth-order methods, the best contemporary full-storage methods can be nearly matched while still saving two to three registers of memory.
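A minimal sketch of a low-storage ERK scheme in the van der Houwen style, using Williamson's classic 3-stage, third-order 2N-storage coefficients (not one of the 16 pairs derived in the paper): only two registers per unknown, the solution and an accumulated stage derivative, persist across stages.

```python
import math

# Williamson's classic 3-stage, third-order 2N-storage coefficients
A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]
C = [0.0, 1.0 / 3.0, 3.0 / 4.0]

def lsrk3_step(f, t, y, dt):
    """One step of the low-storage RK3: only y and dy persist across stages."""
    dy = 0.0
    for a, b, c in zip(A, B, C):
        dy = a * dy + f(t + c * dt, y)  # register 1: stage derivative
        y = y + b * dt * dy             # register 2: the solution
    return y

# Integrate y' = -y from y(0) = 1 to t = 1; the exact value is exp(-1)
y, t, dt = 1.0, 0.0, 1.0e-3
while t < 1.0 - 1e-12:
    y = lsrk3_step(lambda t, y: -y, t, y, dt)
    t += dt
```

In a PDE solver each register is a full field array, which is where the two-to-three-register savings quoted above matter.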
Characterization and delineation of caribou habitat on Unimak Island using remote sensing techniques
NASA Astrophysics Data System (ADS)
Atkinson, Brian M.
The assessment of herbivore habitat quality is traditionally based on quantifying the forages available to the animal across their home range through ground-based techniques. While these methods are highly accurate, they can be time-consuming and expensive, especially for herbivores that occupy vast spatial landscapes. The Unimak Island caribou herd has been decreasing over the last decade at rates that have prompted discussion of management intervention. Frequent inclement weather in this region of Alaska has provided little opportunity to study the caribou forage habitat on Unimak Island. The overall objectives of this study were two-fold: 1) to assess the feasibility of using high-resolution color and near-infrared aerial imagery to map the forage distribution of caribou habitat on Unimak Island, and 2) to assess the use of a new high-resolution multispectral satellite imagery platform, RapidEye, and the effect of its "red-edge" spectral band on vegetation classification accuracy. Maximum likelihood classification algorithms were used to create land cover maps from the aerial and satellite imagery. Accuracy assessments and transformed divergence values were produced to assess vegetative spectral information and classification accuracy. By using RapidEye and aerial digital imagery in a hierarchical supervised classification technique, we were able to produce a high-resolution land cover map of Unimak Island. We obtained an overall accuracy rate of 71.4 percent, which is comparable to other land cover maps using RapidEye imagery. The "red-edge" spectral band included in the RapidEye imagery provides additional spectral information that allows for a more accurate overall classification, raising overall accuracy by 5.2 percent.
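The accuracy assessment mentioned above is conventionally computed from an error (confusion) matrix. A minimal sketch with hypothetical class counts (the class names and numbers are illustrative, not from the study):

```python
import numpy as np

def accuracy_assessment(confusion):
    """Overall, user's, and producer's accuracy plus Cohen's kappa.

    Rows are map (classified) classes, columns are reference classes.
    """
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    diag = np.diag(confusion)
    overall = diag.sum() / total
    users = diag / confusion.sum(axis=1)      # 1 - commission error
    producers = diag / confusion.sum(axis=0)  # 1 - omission error
    # Cohen's kappa: agreement beyond what chance row/column totals predict
    chance = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / total ** 2
    kappa = (overall - chance) / (1.0 - chance)
    return overall, users, producers, kappa

# Hypothetical 3-class error matrix (e.g. shrub, graminoid, bare ground)
matrix = [[50, 5, 2],
          [4, 40, 6],
          [1, 3, 30]]
overall, users, producers, kappa = accuracy_assessment(matrix)
```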
Impact of cause of death adjudication on the results of the European prostate cancer screening trial
Walter, Stephen D; de Koning, Harry J; Hugosson, Jonas; Talala, Kirsi; Roobol, Monique J; Carlsson, Sigrid; Zappa, Marco; Nelen, Vera; Kwiatkowski, Maciej; Páez, Álvaro; Moss, Sue; Auvinen, Anssi
2017-01-01
Background: The European Randomised Study of Prostate Cancer Screening has shown a 21% relative reduction in prostate cancer mortality at 13 years. The causes of death can be misattributed, particularly in elderly men with multiple comorbidities, and therefore accurate assessment of the underlying cause of death is crucial for valid results. To address potential unreliability of end-point assessment, and its possible impact on mortality results, we analysed the study outcome adjudication data in six countries. Methods: Latent class statistical models were formulated to compare the accuracy of individual adjudicators, and to assess whether accuracy differed between the trial arms. We used the model to assess whether correcting for adjudication inaccuracies might modify the study results. Results: There was some heterogeneity in adjudication accuracy of causes of death, but no consistent differential accuracy by trial arm. Correcting the estimated screening effect for misclassification did not alter the estimated mortality effect of screening. Conclusions: Our findings were consistent with earlier reports on the European screening trial. Observer variation, while demonstrably present, is unlikely to have materially biased the main study results. A bias in assigning causes of death that might have explained the mortality reduction by screening can be effectively ruled out. PMID:27855442
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Ahunbay, E; Li, X
Purpose: With the introduction of high-quality treatment imaging during radiation therapy (RT) delivery, e.g., MR-Linac, adaptive replanning, either online or offline, becomes appealing. Dose accumulation of delivered fractions, a prerequisite for adaptive replanning, can be cumbersome and inaccurate. The purpose of this work is to develop an automated process to accumulate daily doses and to assess the dose accumulation accuracy voxel-by-voxel for adaptive replanning. Methods: The process includes the following main steps: 1) reconstructing the daily dose for each delivered fraction with a treatment planning system (Monaco, Elekta) based on the daily images, using the machine delivery log file and considering patient repositioning if applicable; 2) overlaying the daily dose on the planning image based on deformable image registration (DIR) (ADMIRE, Elekta); 3) assessing voxel dose deformation accuracy based on the deformation field using predetermined criteria; and 4) outputting the accumulated dose and dose-accuracy volume histograms and parameters. Daily CTs acquired using a CT-on-rails during routine CT-guided RT for sample patients with head and neck and prostate cancers were used to test the process. Results: Daily and accumulated doses (dose-volume histograms, etc.) along with their accuracies (dose-accuracy volume histogram) can be robustly generated using the proposed process. The test data for a head and neck cancer case show that the gross tumor volume decreased by 20% towards the end of the treatment course, and the parotid gland mean dose increased by 10%. Such information would trigger adaptive replanning for the subsequent fractions. The voxel-based accuracy in the accumulated dose showed that errors in accumulated dose near rigid structures were small. Conclusion: A procedure, along with the necessary tools, to automatically accumulate daily dose and assess dose accumulation accuracy has been developed and is useful for adaptive replanning. 
Partially supported by Elekta, Inc.
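Steps 2 to 4 of the process above can be caricatured as a voxel-wise summation with a per-voxel accuracy flag. This is a simplified sketch: the `daily_doses` and `deformation_errors` inputs are assumed to come from the treatment planning system and the DIR tool, and the 2 mm tolerance is an illustrative criterion, not the study's.

```python
import numpy as np

def accumulate_daily_doses(daily_doses, deformation_errors, tol_mm=2.0):
    """Sum per-fraction dose grids already warped onto the planning image and
    flag, voxel by voxel, where the estimated DIR error exceeds a tolerance.

    daily_doses and deformation_errors are lists of arrays on the planning
    grid: dose in Gy, estimated registration error in mm.
    """
    accumulated = np.zeros_like(daily_doses[0])
    worst_error = np.zeros_like(daily_doses[0])
    for dose, err in zip(daily_doses, deformation_errors):
        accumulated += dose
        worst_error = np.maximum(worst_error, err)
    reliable = worst_error <= tol_mm  # voxels meeting the accuracy criterion
    return accumulated, reliable

# Two toy fractions of 2 Gy each on a 4x4x4 grid, one with large DIR error
doses = [np.full((4, 4, 4), 2.0), np.full((4, 4, 4), 2.0)]
errors = [np.full((4, 4, 4), 0.5), np.full((4, 4, 4), 3.0)]
total_dose, reliable = accumulate_daily_doses(doses, errors)
```

The `reliable` mask is the kind of per-voxel information that a dose-accuracy volume histogram would summarize.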
Graph-based inductive reasoning.
Boumans, Marcel
2016-10-01
This article discusses methods of inductive inferences that are methods of visualizations designed in such a way that the "eye" can be employed as a reliable tool for judgment. The term "eye" is used as a stand-in for visual cognition and perceptual processing. In this paper "meaningfulness" has a particular meaning, namely accuracy, which is closeness to truth. Accuracy consists of precision and unbiasedness. Precision is dealt with by statistical methods, but for unbiasedness one needs expert judgment. The common view at the beginning of the twentieth century was to make the most efficient use of this kind of judgment by representing the data in shapes and forms in such a way that the "eye" can function as a reliable judge to reduce bias. The need for judgment of the "eye" is even more necessary when the background conditions of the observations are heterogeneous. Statistical procedures require a certain minimal level of homogeneity, but the "eye" does not. The "eye" is an adequate tool for assessing topological similarities when, due to heterogeneity of the data, metric assessment is not possible. In fact, graphical assessment precedes measurement, or to put it more forcefully, the graphic method is a necessary prerequisite for measurement. Copyright © 2016 Elsevier Ltd. All rights reserved.
Systolic Blood Pressure Accuracy Enhancement in the Electronic Palpation Method Using Pulse Waveform
2001-10-25
adrenalin) or vasodilating (Nipride or Nitromex) medicines. Also painkillers and anesthetics (Oxanest, Diprivan, Fentanyl and Rapifen) may have affected...the measurements. It is hard to distinguish the effects of medication and assess their relation to blood pressure errors and pulse shapes...CONCLUSION During this study, 51 cardiac operated patients were measured to define the effects of arterial stiffening on the accuracy of the
Predicting juvenile recidivism: new method, old problems.
Benda, B B
1987-01-01
This prediction study compared three statistical procedures for accuracy using two assessment methods. The criterion is return to a juvenile prison after the first release, and the models tested are logit analysis, predictive attribute analysis, and a Burgess procedure. No significant differences are found between statistics in prediction.
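For illustration, the Burgess procedure mentioned above reduces to an unweighted sum of binary risk indicators; the indicators below are hypothetical examples, not the study's predictors.

```python
def burgess_score(indicators):
    """Burgess procedure: an unweighted sum of binary risk indicators.

    Each factor present contributes +1; higher totals predict recidivism.
    """
    return sum(1 for present in indicators if present)

# Hypothetical binary indicators (prior commitment, truancy, gang contact)
score = burgess_score([True, False, True])  # → 2
```

Logit analysis, by contrast, estimates a weight for each indicator, which is one reason the study compares the procedures for predictive accuracy.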
AUDIT MATERIALS FOR SEMIVOLATILE ORGANIC MEASUREMENTS DURING HAZARDOUS WASTE TRIAL BURNS
Two new performance audit materials utilizing different sorbents have been developed to assess the overall accuracy and precision of the sampling, desorption, and analysis of semivolatile organic compounds by EPA SW-846 Method 0010 (i.e., the Modified Method 5 sampling train). h...
Domínguez, Rocio Berenice; Moreno-Barón, Laura; Muñoz, Roberto; Gutiérrez, Juan Manuel
2014-01-01
This paper describes a new method based on a voltammetric electronic tongue (ET) for the recognition of distinctive features in coffee samples. An ET was directly applied to different samples from the main Mexican coffee regions without any pretreatment before the analysis. The resulting electrochemical information was modeled with two different mathematical tools, namely Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). Growing conditions (i.e., organic or non-organic practices and altitude of crops) were considered for a first classification. LDA results showed an average discrimination rate of 88% ± 6.53% while SVM successfully accomplished an overall accuracy of 96.4% ± 3.50% for the same task. A second classification based on geographical origin of samples was carried out. Results showed an overall accuracy of 87.5% ± 7.79% for LDA and a superior performance of 97.5% ± 3.22% for SVM. Given the complexity of coffee samples, the high accuracy percentages achieved by ET coupled with SVM in both classification problems suggested a potential applicability of ET in the assessment of selected coffee features with a simpler and faster methodology along with a null sample pretreatment. In addition, the proposed method can be applied to authentication assessment while improving cost, time and accuracy of the general procedure. PMID:25254303
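As a sketch of the LDA modeling step, a two-class Fisher discriminant can be fit in a few lines. The data here are synthetic stand-ins for the voltammetric features (a real analysis would use the ET measurements, more classes, and cross-validation):

```python
import numpy as np

def fisher_lda_fit(X, y):
    """Two-class Fisher discriminant: w = Sw^{-1} (mu1 - mu0)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    scatter = (np.cov(X0, rowvar=False) * (len(X0) - 1)
               + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(scatter, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2.0
    return w, threshold

# Synthetic features for two hypothetical coffee classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)),   # e.g. non-organic samples
               rng.normal(1.5, 1.0, (100, 4))])  # e.g. organic samples
y = np.array([0] * 100 + [1] * 100)
w, thr = fisher_lda_fit(X, y)
accuracy = ((X @ w > thr).astype(int) == y).mean()
```

SVM replaces the linear projection with a maximum-margin (and possibly kernelized) boundary, which is consistent with its higher reported accuracy on these data.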
On state-of-charge determination for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Li, Zhe; Huang, Jun; Liaw, Bor Yann; Zhang, Jianbo
2017-04-01
Accurate estimation of the state-of-charge (SOC) of a battery through its life remains challenging in battery research. Although improved precisions continue to be reported, almost all are based empirically on regression methods, while the accuracy is often not properly addressed. Here, a comprehensive review is set to address such issues, from the fundamental principles that are supposed to define SOC to the methodologies to estimate SOC for practical use. It covers topics from calibration and regression (including modeling methods) to validation in terms of precision and accuracy. At the end, we intend to answer the following questions: 1) Can SOC estimation be self-adaptive without bias? 2) Why is Ah-counting a necessity in almost all battery-model-assisted regression methods? 3) How can a consistent framework of coupling be established in multi-physics battery models? 4) How should statistical methods be employed to analyze the factors that contribute to uncertainty when assessing the accuracy of SOC estimation? We hope that, through this proper discussion of the principles, accurate SOC estimation can be widely achieved.
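Ah-counting, the baseline referred to in question 2), is a plain discrete integral of current. A minimal sketch (sign convention and cell parameters are illustrative); note that any constant current-sensor bias integrates into unbounded drift, which is why calibration and bias correction matter:

```python
def soc_ah_counting(soc0, currents_a, dt_s, capacity_ah, coulombic_eff=1.0):
    """Ah-counting: SOC(t) = SOC(0) + (eta / C) * integral of I dt.

    currents_a are samples in amperes (I > 0 charges the cell), dt_s is the
    sample interval in seconds, capacity_ah the cell capacity in Ah.
    """
    soc = soc0
    for i in currents_a:
        soc += coulombic_eff * i * dt_s / 3600.0 / capacity_ah
    return min(max(soc, 0.0), 1.0)  # clamp to the physical range

# Discharge a 2 Ah cell at 1 A for 30 minutes starting from full charge
soc = soc_ah_counting(1.0, [-1.0] * 1800, 1.0, 2.0)  # → 0.75
```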
Assessment of Protein Side-Chain Conformation Prediction Methods in Different Residue Environments
Peterson, Lenna X.; Kang, Xuejiao; Kihara, Daisuke
2016-01-01
Computational prediction of side-chain conformation is an important component of protein structure prediction. Accurate side-chain prediction is crucial for practical applications of protein structure models that need atomic-detail resolution, such as protein and ligand design. We evaluated the accuracy of eight side-chain prediction methods in reproducing the side-chain conformations of experimentally solved structures deposited to the Protein Data Bank. Prediction accuracy was evaluated for a total of four different structural environments (buried, surface, interface, and membrane-spanning) in three different protein types (monomeric, multimeric, and membrane). Overall, the highest accuracy was observed for buried residues in monomeric and multimeric proteins. Notably, side-chains at protein interfaces and membrane-spanning regions were better predicted than surface residues even though the methods did not all use multimeric and membrane proteins for training. Thus, we conclude that the current methods are as practically useful for modeling protein docking interfaces and membrane-spanning regions as for modeling monomers. PMID:24619909
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyata, Y.; Suzuki, T.; Takechi, M.
2015-07-15
For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.
How Fit is Your Citizen Science Data?
NASA Astrophysics Data System (ADS)
Fischer, H. A.; Gerber, L. R.; Wentz, E. A.
2017-12-01
Data quality and accuracy are a fundamental concern when utilizing citizen science data. Although many methods can be used to assess quality and accuracy, these methods may not be sufficient to qualify citizen science data for widespread use in scientific research. While Data Fitness For Use (DFFU) does not provide a blanket assessment of data quality, it does assess the data's ability to be used for a specific application, within a given area (Devillers and Bédard 2007). The STAAq (Spatial, Temporal, Aptness, and Accuracy) assessment was developed to assess the fitness for use of citizen science data; it can be applied to a stand-alone dataset or used to compare multiple datasets. The citizen science data used in this assessment was collected by volunteers of the Map of Life - Denali project, which is a tourist-centric citizen science project developed through a partnership with Arizona State University, Map of Life at Yale University, and Denali National Park and Preserve. Volunteers use the offline version of the Map of Life app to record their wildlife, insect, and plant observations in the park. To test the STAAq assessment, data from different sources (Map of Life - Denali, Ride Observe and Record, and NPS wildlife surveys) were compared to determine which dataset is most fit for use for a specific research question: What is the recent Grizzly bear distribution in areas of high visitor use in Denali National Park and Preserve? These datasets were compared and ranked according to how well they performed in each of the components of the STAAq assessment. These components include spatial scale, temporal scale, aptness, and accuracy. The Map of Life - Denali data and the ROAR program data were most fit for use for this research question. The STAAq assessment can be adjusted to assess the fitness for use of a single dataset or to compare any number of datasets. 
This data fitness for use assessment provides a means to assess data fitness instead of data quality for citizen science data.
Discontinuous Galerkin Methods and High-Speed Turbulent Flows
NASA Astrophysics Data System (ADS)
Atak, Muhammed; Larsson, Johan; Munz, Claus-Dieter
2014-11-01
Discontinuous Galerkin methods gain increasing importance within the CFD community as they combine arbitrarily high order of accuracy in complex geometries with parallel efficiency. The discontinuous Galerkin spectral element method (DGSEM) in particular is a promising candidate for both the direct numerical simulation (DNS) and large eddy simulation (LES) of turbulent flows due to its excellent scaling attributes. In this talk, we present a DNS of a compressible turbulent boundary layer along a flat plate at a free-stream Mach number of M = 2.67 and assess the computational efficiency of the DGSEM at performing high-fidelity simulations of both transitional and turbulent boundary layers. We compare the accuracy of the results as well as the computational performance to results using a high-order finite difference method.
OpenMEEG: opensource software for quasistatic bioelectromagnetics.
Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen
2010-09-06
Interpreting and controlling bioelectromagnetic phenomena require realistic physiological models and accurate numerical solvers. A semi-realistic model often used in practice is the piecewise constant conductivity model, for which only the interfaces have to be meshed. This simplified model makes it possible to use Boundary Element Methods. Unfortunately, most Boundary Element solutions are confronted with accuracy issues when the conductivity ratio between neighboring tissues is high, as for instance the scalp/skull conductivity ratio in electro-encephalography. To overcome this difficulty, we proposed a new method called the symmetric BEM, which is implemented in the OpenMEEG software. The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performances with other competing software packages. We have run a benchmark study in the field of electro- and magneto-encephalography, in order to compare the accuracy of OpenMEEG with other freely distributed forward solvers. We considered spherical models, for which analytical solutions exist, and we designed randomized meshes to assess the variability of the accuracy. Two measures were used to characterize the accuracy: the Relative Difference Measure and the Magnitude ratio. The comparisons were run, either with a constant number of mesh nodes, or a constant number of unknowns across methods. Computing times were also compared. We observed more pronounced differences in accuracy in electroencephalography than in magnetoencephalography. The methods could be classified in three categories: the linear collocation methods, that run very fast but with low accuracy, the linear collocation methods with isolated skull approach for which the accuracy is improved, and OpenMEEG that clearly outperforms the others. 
As far as speed is concerned, OpenMEEG is on par with the other methods for a constant number of unknowns, and is hence faster for a prescribed accuracy level. This study clearly shows that OpenMEEG represents the state of the art for forward computations. Moreover, our software development strategies have made it handy to use and to integrate with other packages. The bioelectromagnetic research community should therefore be able to benefit from OpenMEEG with a limited development effort.
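The two accuracy measures used above can be written down directly; a minimal sketch (the example vectors are illustrative, not benchmark data):

```python
import numpy as np

def rdm(v_num, v_ref):
    """Relative Difference Measure: 0 means identical topography."""
    return np.linalg.norm(v_num / np.linalg.norm(v_num)
                          - v_ref / np.linalg.norm(v_ref))

def mag(v_num, v_ref):
    """Magnitude ratio: 1 means identical amplitude."""
    return np.linalg.norm(v_num) / np.linalg.norm(v_ref)

v_ref = np.array([1.0, 2.0, 3.0])  # e.g. analytical surface potentials
v_num = 1.1 * v_ref                # numerical solution, 10% amplitude error
```

Because RDM normalizes both vectors, a pure amplitude error leaves it at zero while MAG reports the 1.1 scaling; the two measures thus separate topography from magnitude errors.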
Peace, Aaron; Ramsewak, Adesh; Cairns, Andrew; Finlay, Dewar; Guldenring, Daniel; Clifford, Gari; Bond, Raymond
2015-01-01
The 12-lead electrocardiogram (ECG) is a crucial diagnostic tool. However, the ideal method to assess competency in ECG interpretation remains unclear. We sought to evaluate whether keypad response technology provides a rapid, interactive way to assess ECG knowledge. 75 participants were enrolled [32 (43%) Primary Care Physicians, 24 (32%) Hospital Medical Staff and 19 (25%) Nurse Practitioners]. Nineteen ECGs with 4 possible answers were interpreted. Out of 1425 possible decisions 1054 (73.9%) responses were made. Only 570/1425 (40%) of the responses were correct. Diagnostic accuracy varied (0% to 78%, mean 42%±21%) across the entire cohort. Participation was high (median 83%, IQR 50%-100%). Hospital Medical Staff had significantly higher diagnostic accuracy than Nurse Practitioners (50±20% vs. 38±19%, p=0.04), and higher accuracy than Primary Care Physicians (50±20% vs. 40±21%, p=0.07), although the latter difference was not significant. Interactive voting systems can be rapidly and successfully used to assess ECG interpretation. Further education is necessary to improve diagnostic accuracy. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
Frampton, Geoff K; Kalita, Neelam; Payne, Liz; Colquitt, Jill; Loveman, Emma
2016-04-01
Natural fluorescence in the eye may be increased or decreased by diseases that affect the retina. Imaging methods based on confocal scanning laser ophthalmoscopy (cSLO) can detect this 'fundus autofluorescence' (FAF) by illuminating the retina using a specific light 'excitation wavelength'. FAF imaging could assist the diagnosis or monitoring of retinal conditions. However, the accuracy of the method for diagnosis or monitoring is unclear. To conduct a systematic review to determine the accuracy of FAF imaging using cSLO for the diagnosis or monitoring of retinal conditions, including monitoring of response to therapy. Electronic bibliographic databases; scrutiny of reference lists of included studies and relevant systematic reviews; and searches of internet pages of relevant organisations, meetings and trial registries. Databases included MEDLINE, EMBASE, The Cochrane Library, Web of Science and the Medion database of diagnostic accuracy studies. Searches covered 1990 to November 2014 and were limited to the English language. References were screened for relevance using prespecified inclusion criteria to capture a broad range of retinal conditions. Two reviewers assessed titles and abstracts independently. Full-text versions of relevant records were retrieved and screened by one reviewer and checked by a second. Data were extracted and critically appraised using the Quality Assessment of Diagnostic Accuracy Studies criteria (QUADAS) for assessing risk of bias in test accuracy studies by one reviewer and checked by a second. At all stages any reviewer disagreement was resolved through discussion or arbitration by a third reviewer. Eight primary research studies have investigated the diagnostic accuracy of FAF imaging in retinal conditions: choroidal neovascularisation (one study), reticular pseudodrusen (three studies), cystoid macular oedema (two studies) and diabetic macular oedema (two studies). 
Sensitivity of FAF imaging using an excitation wavelength of 488 nm was generally high (range 81-100%), but was lower (55% and 32%) in two studies using longer excitation wavelengths (514 nm and 790 nm, respectively). Specificity ranged from 34% to 100%. However, owing to limitations of the data, none of the studies provide conclusive evidence of the diagnostic accuracy of FAF imaging. No studies on the accuracy of FAF imaging for monitoring the progression of retinal conditions or response to therapy were identified. Owing to study heterogeneity, pooling of diagnostic outcomes in meta-analysis was not conducted. All included studies had high risk of bias. In most studies the patient spectrum was not reflective of those who would present in clinical practice and no studies adequately reported how FAF images were interpreted. Although already in use in clinical practice, it is unclear whether or not FAF imaging is accurate, and whether or not it is applied and interpreted consistently for the diagnosis and/or monitoring of retinal conditions. Well-designed prospective primary research studies, which conform to the paradigm of diagnostic test accuracy assessment, are required to investigate the accuracy of FAF imaging in diagnosis and monitoring of inherited retinal dystrophies, early age-related macular degeneration, geographic atrophy and central serous chorioretinopathy. This study is registered as PROSPERO CRD42014014997. The National Institute for Health Research Health Technology Assessment programme.
A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures
NASA Technical Reports Server (NTRS)
Moore, Ashley
2005-01-01
The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target from camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using PhotoModeler software. The accuracy of the PhotoModeler results is determined through comparison with measurements of the test article taken by an external testing group using the V-STARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.
Application of Geodetic Techniques for Antenna Positioning in a Ground Penetrating Radar Method
NASA Astrophysics Data System (ADS)
Mazurkiewicz, Ewelina; Ortyl, Łukasz; Karczewski, Jerzy
2018-03-01
The accuracy of determining the location of detectable subsurface objects is related to the accuracy of the position of georadar traces in a given profile, which in turn depends on the precise assessment of the distance covered by an antenna. During georadar measurements the distance covered by an antenna can be determined with a variety of methods. Recording traces at fixed time intervals is the simplest of them. A method which allows for more precise location of georadar traces is recording them at fixed distance intervals, which can be performed with the use of distance triggers (such as a measuring wheel or a hip chain). The search for methods eliminating the inaccuracies of these approaches can be based on the measurement of the spatial coordinates of georadar traces conducted with the use of modern geodetic techniques for 3-D location. These techniques include, above all, GNSS satellite systems and electronic tachymeters. Application of the above-mentioned methods increases the accuracy of the spatial location of georadar traces. The article presents the results of georadar measurements performed with the use of geodetic techniques in the test area of Mydlniki in Krakow. A Leica System 1200 satellite receiver and a Leica TCRA 1102 electronic tachymeter were integrated with the georadar equipment. The accuracy of locating chosen subsurface structures was compared.
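One way to realize the GNSS-based trace positioning described above is to interpolate the receiver fixes to each trace timestamp. A simplified sketch, assuming synchronized clocks and fixes already projected to a metric coordinate system:

```python
import numpy as np

def position_traces(trace_times, gnss_times, gnss_xyz):
    """Assign 3-D coordinates to georadar traces by linearly interpolating
    GNSS fixes (times in seconds, coordinates in metres) to trace timestamps.
    """
    gnss_xyz = np.asarray(gnss_xyz, dtype=float)
    xyz = np.empty((len(trace_times), 3))
    for k in range(3):  # interpolate each coordinate axis independently
        xyz[:, k] = np.interp(trace_times, gnss_times, gnss_xyz[:, k])
    return xyz

# Two fixes 10 s apart; a trace recorded at t = 5 s falls halfway between
coords = position_traces([5.0], [0.0, 10.0],
                         [[0.0, 0.0, 0.0], [10.0, 2.0, 0.0]])
```

A tachymeter-based workflow would differ only in how the fix coordinates are obtained, not in the interpolation step.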
Accuracy of taxonomy prediction for 16S rRNA and fungal ITS sequences
2018-01-01
Prediction of taxonomy for marker gene sequences such as 16S ribosomal RNA (rRNA) is a fundamental task in microbiology. Most experimentally observed sequences are diverged from reference sequences of authoritatively named organisms, creating a challenge for prediction methods. I assessed the accuracy of several algorithms using cross-validation by identity, a new benchmark strategy which explicitly models the variation in distances between query sequences and the closest entry in a reference database. When the accuracy of genus predictions was averaged over a representative range of identities with the reference database (100%, 99%, 97%, 95% and 90%), all tested methods had ≤50% accuracy on the currently-popular V4 region of 16S rRNA. Accuracy was found to fall rapidly with identity; for example, better methods were found to have V4 genus prediction accuracy of ∼100% at 100% identity but ∼50% at 97% identity. The relationship between identity and taxonomy was quantified as the probability that a rank is the lowest shared by a pair of sequences with a given pair-wise identity. With the V4 region, 95% identity was found to be a twilight zone where taxonomy is highly ambiguous because the probabilities that the lowest shared rank between pairs of sequences is genus, family, order or class are approximately equal. PMID:29682424
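The lowest-shared-rank computation underlying the identity/taxonomy relationship above can be sketched as follows; the rank list and the two example lineages are illustrative:

```python
RANKS = ["domain", "phylum", "class", "order", "family", "genus"]

def lowest_common_rank(lineage_a, lineage_b):
    """Most specific rank at which two lineages still agree, or None.

    Lineages are dicts keyed by rank name; comparison walks from the
    broadest rank down and stops at the first disagreement.
    """
    shared = None
    for rank in RANKS:
        if lineage_a.get(rank) and lineage_a.get(rank) == lineage_b.get(rank):
            shared = rank
        else:
            break
    return shared

a = {"domain": "Bacteria", "phylum": "Firmicutes", "class": "Bacilli",
     "order": "Lactobacillales", "family": "Streptococcaceae",
     "genus": "Streptococcus"}
b = {"domain": "Bacteria", "phylum": "Firmicutes", "class": "Bacilli",
     "order": "Lactobacillales", "family": "Enterococcaceae",
     "genus": "Enterococcus"}
shared = lowest_common_rank(a, b)  # → "order"
```

Tabulating this rank against pair-wise sequence identity over many pairs yields the probability curves that define the "twilight zone" described above.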
Correa, Katharina; Bangera, Rama; Figueroa, René; Lhorente, Jean P; Yáñez, José M
2017-01-31
Sea lice infestations caused by Caligus rogercresseyi are a main concern to the salmon farming industry due to associated economic losses. Resistance to this parasite was shown to have low to moderate genetic variation and its genetic architecture was suggested to be polygenic. The aim of this study was to compare accuracies of breeding value predictions obtained with pedigree-based best linear unbiased prediction (P-BLUP) methodology against different genomic prediction approaches: genomic BLUP (G-BLUP), Bayesian Lasso, and Bayes C. To achieve this, 2404 individuals from 118 families were measured for C. rogercresseyi count after a challenge and genotyped using 37 K single nucleotide polymorphisms. Accuracies were assessed using fivefold cross-validation and SNP densities of 0.5, 1, 5, 10, 25 and 37 K. Accuracy of genomic predictions increased with increasing SNP density and was higher than pedigree-based BLUP predictions by up to 22%. Both Bayesian and G-BLUP methods can predict breeding values with higher accuracies than pedigree-based BLUP, however, G-BLUP may be the preferred method because of reduced computation time and ease of implementation. A relatively low marker density (i.e. 10 K) is sufficient for maximal increase in accuracy when using G-BLUP or Bayesian methods for genomic prediction of C. rogercresseyi resistance in Atlantic salmon.
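A toy sketch of G-BLUP with a VanRaden-style genomic relationship matrix follows. The genotypes, phenotypes, and heritability are synthetic stand-ins (the study used 37 K SNPs, 2404 fish, and fivefold cross-validation), and the single-trait formula assumes every individual is phenotyped:

```python
import numpy as np

def vanraden_grm(M, p):
    """VanRaden genomic relationship matrix from a 0/1/2 genotype matrix M
    (individuals x SNPs) and allele frequencies p."""
    Z = M - 2.0 * p
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def gblup(y, G, h2):
    """G-BLUP with every individual phenotyped:
    g_hat = G (G + lambda I)^{-1} (y - mean), lambda = (1 - h2) / h2."""
    lam = (1.0 - h2) / h2
    return G @ np.linalg.solve(G + lam * np.eye(len(y)), y - y.mean())

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(200, 500)).astype(float)  # toy genotypes
p = M.mean(axis=0) / 2.0
G = vanraden_grm(M, p)
true_g = rng.normal(size=200)        # toy "true" breeding values
y = true_g + rng.normal(size=200)    # phenotype = genetics + noise
# Prediction accuracy as in the abstract: correlation with true values
accuracy = np.corrcoef(gblup(y, G, h2=0.5), true_g)[0, 1]
```

P-BLUP replaces G with the pedigree-based relationship matrix A; the Bayesian alternatives instead place SNP-specific priors on marker effects.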
Tahmasian, Masoud; Jamalabadi, Hamidreza; Abedini, Mina; Ghadami, Mohammad R; Sepehry, Amir A; Knight, David C; Khazaie, Habibolah
2017-05-22
Sleep disturbance is common in chronic post-traumatic stress disorder (PTSD). However, prior work has demonstrated that there are inconsistencies between subjective and objective assessments of sleep disturbance in PTSD. Therefore, we investigated whether subjective or objective sleep assessment has greater clinical utility to differentiate PTSD patients from healthy subjects. Further, we evaluated whether the combination of subjective and objective methods improves the accuracy of classification into patient versus healthy groups, which has important diagnostic implications. We recruited 32 chronic war-induced PTSD patients and 32 age- and gender-matched healthy subjects to participate in this study. Subjective (i.e. from three self-reported sleep questionnaires) and objective sleep-related data (i.e. from actigraphy scores) were collected from each participant. Subjective, objective, and combined (subjective and objective) sleep data were then analyzed using support vector machine classification. The classification accuracy, sensitivity, and specificity for subjective variables were 89.2%, 89.3%, and 89%, respectively. The classification accuracy, sensitivity, and specificity for objective variables were 65%, 62.3%, and 67.8%, respectively. The classification accuracy, sensitivity, and specificity for the aggregate variables (combination of subjective and objective variables) were 91.6%, 93.0%, and 90.3%, respectively. Our findings indicate that classification accuracy using subjective measurements is superior to objective measurements and the combination of both assessments appears to improve the classification accuracy for differentiating PTSD patients from healthy individuals. Copyright © 2017 Elsevier B.V. All rights reserved.
Standardized assessment of infrared thermographic fever screening system performance
NASA Astrophysics Data System (ADS)
Ghassemi, Pejhman; Pfefer, Joshua; Casamento, Jon; Wang, Quanzeng
2017-03-01
Thermal modalities represent the only currently viable mass fever screening approach for outbreaks of infectious disease pandemics such as Ebola and SARS. Non-contact infrared thermometers (NCITs) and infrared thermographs (IRTs) have been previously used for mass fever screening in transportation hubs such as airports to reduce the spread of disease. While NCITs remain a more popular choice for fever screening in the field and at fixed locations, there has been increasing evidence in the literature that IRTs can provide greater accuracy in estimating core body temperature if appropriate measurement practices are applied - including the use of technically suitable thermographs. Therefore, the purpose of this study was to develop a battery of evaluation test methods for standardized, objective and quantitative assessment of thermograph performance characteristics critical to assessing suitability for clinical use. These factors include stability, drift, uniformity, minimum resolvable temperature difference, and accuracy. Two commercial IRT models were characterized. An external temperature reference source with high temperature accuracy was utilized as part of the screening thermograph. Results showed that both IRTs are relatively accurate and stable (<1% error of reading with stability of +/-0.05°C). Overall, results of this study may facilitate development of standardized consensus test methods to enable consistent and accurate use of IRTs for fever screening.
Compact dry chemistry instruments.
Terashima, K; Tatsumi, N
1999-01-01
Compact dry chemistry instruments are designed for use in point-of-care testing (POCT). These instruments have a number of advantages, including light weight, compactness, ease of operation, and the ability to provide accurate results in a short time with a very small sample volume. On the other hand, reagent costs are high compared to the liquid method. Moreover, differences in accuracy have been found between dry chemistry and the liquid method in external quality assessment schemes. This report examines reagent costs and shows how the total running costs associated with dry chemistry are actually lower than those associated with the liquid method. It also describes methods for minimizing differences in accuracy between dry chemistry and the liquid method. Use of these measures is expected to increase the effectiveness of compact dry chemistry instruments in POCT applications.
PconsD: ultra rapid, accurate model quality assessment for protein structure prediction.
Skwark, Marcin J; Elofsson, Arne
2013-07-15
Clustering methods are often needed for accurately assessing the quality of modeled protein structures. Recent blind evaluation of quality assessment methods in CASP10 showed that there is little difference between many different methods as far as ranking models and selecting the best model are concerned. When comparing many models, the computational cost of the model comparison can become significant. Here, we present PconsD, a fast, stream-computing method for distance-driven model quality assessment that runs on consumer hardware. PconsD is at least one order of magnitude faster than other methods of comparable accuracy. The source code for PconsD is freely available at http://d.pcons.net/. Supplementary benchmarking data are also available there. arne@bioinfo.se Supplementary data are available at Bioinformatics online.
Benchmarking Relatedness Inference Methods with Genome-Wide Data from Thousands of Relatives.
Ramstetter, Monica D; Dyer, Thomas D; Lehman, Donna M; Curran, Joanne E; Duggirala, Ravindranath; Blangero, John; Mezey, Jason G; Williams, Amy L
2017-09-01
Inferring relatedness from genomic data is an essential component of genetic association studies, population genetics, forensics, and genealogy. While numerous methods exist for inferring relatedness, thorough evaluation of these approaches in real data has been lacking. Here, we report an assessment of 12 state-of-the-art pairwise relatedness inference methods using a data set with 2485 individuals contained in several large pedigrees that span up to six generations. We find that all methods have high accuracy (92-99%) when detecting first- and second-degree relationships, but their accuracy dwindles to <43% for seventh-degree relationships. However, most identical by descent (IBD) segment-based methods inferred seventh-degree relatives correct to within one relatedness degree for >76% of relative pairs. Overall, the most accurate methods are Estimation of Recent Shared Ancestry (ERSA) and approaches that compute total IBD sharing using the output from GERMLINE and Refined IBD to infer relatedness. Combining information from the most accurate methods provides little accuracy improvement, indicating that novel approaches, such as new methods that leverage relatedness signals from multiple samples, are needed to achieve a sizeable jump in performance. Copyright © 2017 Ramstetter et al.
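The IBD-based degree inference evaluated above can be illustrated with the standard expectation that degree-d relatives share, on average, about 2^(-d) of their genome identical by descent. A simplified sketch of that mapping, not the actual ERSA/GERMLINE/Refined IBD pipelines:

```python
import math

def degree_from_ibd_fraction(ibd_fraction):
    """Infer relatedness degree from the fraction of the genome shared IBD.

    Degree-d relatives share on average 2^(-d) of their genome IBD
    (first degree: 1/2, second degree: 1/4, ...), so the degree is
    estimated as the nearest integer to -log2(fraction). This is a
    simplification of what IBD-segment-based tools do.
    """
    if ibd_fraction <= 0:
        return None  # unrelated, as far as the data can tell
    return max(1, round(-math.log2(ibd_fraction)))

print(degree_from_ibd_fraction(0.50))   # 1 (first degree)
print(degree_from_ibd_fraction(0.25))   # 2 (second degree)
print(degree_from_ibd_fraction(0.008))  # 7 (seventh degree, ~0.8% IBD)
```

As the abstract's accuracy results suggest, the expected IBD fraction halves with each degree, so by the seventh degree the signal is small and noisy, which is why distant relationships are hard to infer exactly.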
Aerial photography flight quality assessment with GPS/INS and DEM data
NASA Astrophysics Data System (ADS)
Zhao, Haitao; Zhang, Bing; Shang, Jiali; Liu, Jiangui; Li, Dong; Chen, Yanyan; Zuo, Zhengli; Chen, Zhengchao
2018-01-01
The flight altitude, ground coverage, photo overlap, and other acquisition specifications of an aerial photography flight mission directly affect the quality and accuracy of the subsequent mapping tasks. To ensure smooth post-flight data processing and fulfill the pre-defined mapping accuracy, flight quality assessments should be carried out in time. This paper presents a novel and rigorous approach for flight quality evaluation of frame cameras with GPS/INS data and DEM, using geometric calculation rather than image analysis as in the conventional methods. This new approach is based mainly on the collinearity equations, in which the accuracy of a set of flight quality indicators is derived through a rigorous error propagation model and validated with scenario data. Theoretical analysis and a practical flight test of an aerial photography mission using an UltraCamXp camera showed that the calculated photo overlap is accurate enough for flight quality assessment of 5 cm ground sample distance imagery, using the SRTMGL3 DEM and the POSAV510 GPS/INS data. An even better overlap accuracy could be achieved for coarser-resolution aerial photography. With this new approach, the flight quality evaluation can be conducted on site right after landing, providing accurate and timely information for decision making.
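The overlap indicator at the heart of this kind of assessment can be illustrated, under a flat-terrain simplification, from the exposure-station spacing (from GPS/INS) and the image ground footprint. The ~11,310-pixel along-track sensor dimension used below for the UltraCamXp is an assumption for illustration, not a figure from the paper:

```python
def forward_overlap_percent(base_m, sensor_pixels, gsd_m):
    """Forward (along-track) photo overlap from exposure geometry.

    base_m: along-track distance between consecutive exposure stations,
    sensor_pixels: image dimension along track, gsd_m: ground sample
    distance. Footprint = pixels * GSD; the overlap is the shared
    fraction of consecutive footprints. This flat-terrain formula is a
    simplification of the paper's collinearity-based computation.
    """
    footprint_m = sensor_pixels * gsd_m
    return max(0.0, (1.0 - base_m / footprint_m) * 100.0)

# At 5 cm GSD an 11,310-pixel dimension gives a ~565.5 m footprint,
# so a 226.2 m base yields 60% forward overlap.
print(forward_overlap_percent(226.2, 11310, 0.05))  # ~60.0
```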
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2016-09-01
A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Assessing Clinical Significance: Does it Matter which Method we Use?
ERIC Educational Resources Information Center
Atkins, David C.; Bedics, Jamie D.; Mcglinchey, Joseph B.; Beauchaine, Theodore P.
2005-01-01
Measures of clinical significance are frequently used to evaluate client change during therapy. Several alternatives to the original method devised by N. S. Jacobson, W. C. Follette, & D. Revenstorf (1984) have been proposed, each purporting to increase accuracy. However, researchers have had little systematic guidance in choosing among…
Methods are needed improve the timeliness and accuracy of recreational water‐quality assessments. Traditional culture methods require 18–24 h to obtain results and may not reflect current conditions. Predictive models, based on environmental and water quality variables, have been...
The reliability of the pass/fail decision for assessments comprised of multiple components
Möltner, Andreas; Tımbıl, Sevgi; Jünger, Jana
2015-01-01
Objective: The decision having the most serious consequences for a student taking an assessment is the one to pass or fail that student. For this reason, the reliability of the pass/fail decision must be determined for high quality assessments, just as the measurement reliability of the point values. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of each other. When “conjunctively” combining separate pass/fail decisions, as with other complex decision rules for passing, adequate methods of analysis are necessary for estimating the accuracy and consistency of these classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability underlying the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements. Method: The accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg’s Faculty of Medicine was investigated. This cluster requires students to separately pass three components (two written exams and an OSCE), whereby students may reattempt to pass each component twice. Our analysis was carried out using the method described by Douglas and Mislevy. Results: Frequently, when complex logical connections exist between the individual pass/fail decisions in the case of low failure rates, only a very low reliability for the overall decision to grant graded course credit can be achieved, even if high reliabilities exist for the various components. 
For the example analyzed here, the classification accuracy and consistency when conjunctively combining the three individual parts is relatively low with κ=0.49 or κ=0.47, despite the good reliability of over 0.75 for each of the three components. The option to repeat each component twice leads to a situation in which only about half of the candidates who do not satisfy the minimum requirements would fail the overall assessment, while the other half is able to continue their studies despite having deficient knowledge and skills. Conclusion: The method put forth by Douglas and Mislevy allows the analysis of the decision accuracy and consistency for complex combinations of scores from different components. Even in the case of highly reliable components, it is not necessarily so that a reliable pass/fail decision has been reached – for instance in the case of low failure rates. Assessments must be administered with the explicit goal of identifying examinees that do not fulfill the minimum requirements. PMID:26483855
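The effect described above, an unreliable conjunctive decision despite reliable components, can be reproduced with a small Monte Carlo simulation. This is an illustrative sketch under classical test theory assumptions, not Douglas and Mislevy's analytic method; all parameter values are made up:

```python
import random

def conjunctive_decision_accuracy(n_students=20000, reliability=0.75,
                                  cutoff=0.5, n_components=3,
                                  n_attempts=3, seed=1):
    """Monte Carlo estimate of classification accuracy for a conjunctive
    pass/fail rule (all components must be passed, retakes allowed).

    True ability per component is N(cutoff, 1); each attempt adds
    measurement error sized so that a single attempt has the given
    reliability. Returns the fraction of simulated students whose
    observed overall decision matches the true-ability decision.
    """
    rng = random.Random(seed)
    # reliability = 1 / (1 + err_var) for unit true-score variance
    err_sd = (1.0 / reliability - 1.0) ** 0.5
    correct = 0
    for _ in range(n_students):
        ability = [rng.gauss(cutoff, 1.0) for _ in range(n_components)]
        true_pass = all(a >= cutoff for a in ability)
        observed_pass = all(
            any(a + rng.gauss(0.0, err_sd) >= cutoff
                for _ in range(n_attempts))
            for a in ability)
        correct += (true_pass == observed_pass)
    return correct / n_students

print(round(conjunctive_decision_accuracy(), 2))
```

With ability centered on the cutoff and two retakes per component, borderline students pass far more often than their true ability warrants, which mirrors the paper's finding that about half of the truly deficient candidates pass overall.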
NASA Astrophysics Data System (ADS)
Søe-Knudsen, Alf; Sorokin, Sergey
2011-06-01
This rapid communication is concerned with justification of the 'rule of thumb', which is well known to the community of users of the finite element (FE) method in dynamics, for the accuracy assessment of the wave finite element (WFE) method. An explicit formula linking the size of a window in the dispersion diagram, where the WFE method is trustworthy, with the coarseness of a FE mesh employed is derived. It is obtained by the comparison of the exact Pochhammer-Chree solution for an elastic rod having the circular cross-section with its WFE approximations. It is shown that the WFE power flow predictions are also valid within this window.
Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions
Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele
2016-01-01
To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (up to 12.4% error compared to the reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5–11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration. 
Given the ease-of-use and cost benefits of test strips, we recommend further development of test strips robust to pH variation and appropriate for Ebola-relevant chlorine solution concentrations. PMID:27243817
Ashfaq, Maria; Sial, Ali Akber; Bushra, Rabia; Rehman, Atta-Ur; Baig, Mirza Tasawur; Huma, Ambreen; Ahmed, Maryam
2018-01-01
Spectrophotometric techniques are considered the simplest and most operator-friendly of the available analytical methods for pharmaceutical analysis. The objective of the study was to develop a precise, accurate and rapid UV-spectrophotometric method for the estimation of chlorpheniramine maleate (CPM) in pure form and in a solid pharmaceutical formulation. Drug absorption was measured in various solvent systems including 0.1N HCl (pH 1.2), acetate buffer (pH 4.5), phosphate buffer (pH 6.8) and distilled water (pH 7.0). Method validation was performed as per the official ICH guidelines (2005). High drug absorption was observed in the 0.1N HCl medium, with a λmax of 261 nm. The drug showed good linearity over the 20 to 60 μg/mL concentration range, with the linear regression equation Y = 0.1853X + 0.1098 and an R² value of 0.9998. Method accuracy was evaluated by percent drug recovery, with more than 99% recovery at the three levels assessed. A %RSD value of <1 was computed for inter- and intraday analysis, indicating the high accuracy and precision of the developed technique. The developed method is robust, showing no significant variation with minor changes in conditions. The LOD and LOQ were assessed to be 2.2 μg/mL and 6.6 μg/mL, respectively. The investigated method proved its sensitivity, precision and accuracy and could therefore be used to estimate the CPM content in bulk and in pharmaceutical matrix tablets.
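The calibration arithmetic behind such a validation (least-squares linearity plus the ICH Q2(R1) limits LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation and S the slope) can be sketched as follows. The calibration points are hypothetical values generated from the reported regression equation, not the study's raw data:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit of y = slope*x + intercept (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def lod_loq(residual_sd, slope):
    """ICH Q2(R1) detection/quantitation limits: 3.3*sigma/S and 10*sigma/S."""
    return 3.3 * residual_sd / slope, 10.0 * residual_sd / slope

# Hypothetical calibration points generated from the reported equation
# Y = 0.1853*X + 0.1098 (X in ug/mL, Y = absorbance).
conc = [20.0, 30.0, 40.0, 50.0, 60.0]
absorbance = [0.1853 * c + 0.1098 for c in conc]
slope, intercept = linear_fit(conc, absorbance)
print(round(slope, 4), round(intercept, 4))  # 0.1853 0.1098
```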
Lee, Youn Joo; Lim, Yeon Soo; Lim, Hyun Wook; Yoo, Won Jong; Choi, Byung Gil; Kim, Bum Soo
2014-10-01
There are very few reports assessing in-stent restenosis (ISR) after vertebral artery ostium (VAO) stents using multidetector computed tomography (MDCT). To compare the diagnostic accuracy of computed tomography angiography (CTA) using 64-slice MDCT with digital subtraction angiography (DSA) for detection of significant ISR after VAO stenting. The study evaluated 57 VAO stents in 57 patients (39 men, 18 women; mean age 64 years [range, 48-90 years]). All stents were scanned with a 64-slice MDCT scanner. Three sets of images were reconstructed with three different convolution kernels. Two observers who were blinded to the results of DSA assessed the diagnostic accuracy of CTA for detecting significant ISR (≥50% diameter narrowing) of VAO stents in comparison with DSA as the reference standard. The sensitivity, specificity, positive and negative predictive values, and accuracy were calculated. Of the 57 stents, 46 (81%) were assessable using CTA, while 11 (19%) were not. No stents with diameters ≤2.75 mm were assessable. DSA revealed 13 cases of significant ISR in all stents. The respective sensitivity, specificity, positive and negative predictive values, and accuracy were 92%, 82%, 60%, 97%, and 84% for all stents. On excluding the 11 non-assessable stents, the respective values were 88%, 95%, 78%, 97%, and 93%. Of the 46 CTA assessable stents, eight significant ISRs were diagnosed on DSA. Seven of eight patients with significant ISR by DSA were diagnosed correctly with CTA. The area under the receiver-operating characteristic curve (AUC) was 0.87 for all stents and 0.91 for assessable stents, indicating good to excellent agreement between CTA and DSA for detecting significant ISR after VAO stenting. Sixty-four-slice MDCT is a promising non-invasive method of assessing stent patency and can exclude significant ISR with high diagnostic values after VAO stenting. © The Foundation Acta Radiologica 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
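The reported diagnostic values follow from a 2×2 contingency table against the DSA reference. The counts below (TP=12, FN=1, FP=8, TN=36) are inferred to be consistent with the 13 DSA-positive stents and the stated all-stent percentages; they are a reconstruction for illustration, not figures taken directly from the paper:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# 57 stents: 13 ISR-positive on DSA, 44 negative.
m = diagnostic_metrics(tp=12, fp=8, fn=1, tn=36)
print({k: round(v, 2) for k, v in m.items()})
# sensitivity 0.92, specificity 0.82, ppv 0.6, npv 0.97, accuracy 0.84
```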
Low cycle fatigue numerical estimation of a high pressure turbine disc for the AL-31F jet engine
NASA Astrophysics Data System (ADS)
Spodniak, Miroslav; Klimko, Marek; Hocko, Marián; Žitek, Pavel
This article describes an approximate numerical approach to estimating the low cycle fatigue of a high pressure turbine disc for the AL-31F turbofan jet engine. The numerical estimation is based on the finite element method carried out in the SolidWorks software. The low cycle fatigue assessment of the high pressure turbine disc was carried out on the basis of the dimensional, shape and material characteristics available for this particular engine's high pressure turbine. The method described here enables relatively fast, economically feasible low cycle fatigue assessment of the high pressure turbine disc using commercially available software. The accuracy of the numerical low cycle fatigue estimation depends on the accuracy of the input data available for the investigated object.
Oloumi, Faraz; Rangayyan, Rangaraj M.; Ells, Anna L.
2016-01-01
Retinopathy of prematurity (ROP), a disorder of the retina occurring in preterm infants, is the leading cause of preventable childhood blindness. An active phase of ROP that requires treatment is associated with the presence of plus disease, which is diagnosed clinically in a qualitative manner by visual assessment of the existence of a certain level of increase in the thickness and tortuosity of retinal vessels. The present study performs computer-aided diagnosis (CAD) of plus disease via quantitative measurement of tortuosity in retinal fundus images of preterm infants. Digital image processing techniques were developed for the detection of retinal vessels and measurement of their tortuosity. The total lengths of abnormally tortuous vessels in each quadrant and the entire image were then computed. A minimum-length diagnostic-decision-making criterion was developed to assess the diagnostic sensitivity and specificity of the values obtained. The area (Az) under the receiver operating characteristic curve was used to assess the overall diagnostic accuracy of the methods. Using a set of 19 retinal fundus images of preterm infants with plus disease and 91 without plus disease, the proposed methods provided an overall diagnostic accuracy of Az=0.98. Using the total length of all abnormally tortuous vessel segments in an image, our techniques are capable of CAD of plus disease with high accuracy without the need for manual selection of vessels to analyze. The proposed methods may be used in a clinical or teleophthalmological setting. PMID:28018938
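A common quantitative tortuosity measure used in vessel-analysis CAD systems is the arc-length to chord-length ratio of a vessel centerline. A minimal sketch of that measure; the paper's exact tortuosity definition may differ:

```python
import math

def tortuosity(points):
    """Arc-length over chord-length tortuosity of a vessel centerline.

    points: list of (x, y) centerline coordinates. A straight vessel
    has tortuosity 1.0; larger values indicate a more tortuous vessel.
    """
    arc = sum(math.dist(points[i], points[i + 1])
              for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0)]
wiggly = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
print(tortuosity(straight))          # 1.0
print(round(tortuosity(wiggly), 3))  # 1.414
```

A length-weighted threshold on such per-segment values, as in the paper's minimum-length criterion, then flags abnormally tortuous vessels without manual selection.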
Assessment of shrimp farming impact on groundwater quality using analytical hierarchy process
NASA Astrophysics Data System (ADS)
Anggie, Bernadietta; Subiyanto, Arief, Ulfah Mediaty; Djuniadi
2018-03-01
The expansion of shrimp farming affects groundwater quality, and conventional assessment of its impact has limited accuracy. This paper presents the implementation of the Analytical Hierarchy Process (AHP) method for assessing the impact of shrimp farming on groundwater quality. The data used are shrimp farming impact data from one region of Indonesia for 2006-2016. Eight criteria, divided into 49 sub-criteria, were used in this study. AHP weighting was performed to determine the importance of each criterion and sub-criterion, and final priority classes of shrimp farming impact were obtained from the weighted criteria and sub-criteria. Validation was done by comparing the priority classes of shrimp farming impact against measured water quality conditions. The results show that 50% of the total area fell in the moderate priority class, 37% in the low priority class and 13% in the high priority class. The validation indicates that the impact assessment agrees closely with groundwater quality conditions. This study shows that AHP-based assessment achieves higher accuracy for shrimp farming impact and can serve as a basis for fisheries planning to deal with the impacts generated.
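The AHP weighting step can be sketched with the common geometric-mean approximation of the principal eigenvector of a pairwise comparison matrix. The 3-criteria matrix below is a toy example on Saaty's 1-9 scale, not the study's 8-criteria matrix:

```python
def ahp_weights(matrix):
    """Criteria weights from an AHP pairwise-comparison matrix via the
    geometric-mean approximation of the principal eigenvector."""
    n = len(matrix)
    geo_means = []
    for row in matrix:
        prod = 1.0
        for v in row:
            prod *= v
        geo_means.append(prod ** (1.0 / n))
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Toy matrix: criterion 1 is 3x as important as criterion 2 and 5x as
# important as criterion 3; criterion 2 is 2x as important as 3.
M = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]
w = ahp_weights(M)
print([round(x, 3) for x in w])  # ~[0.648, 0.23, 0.122]
```

In a full AHP study the sub-criteria would be weighted the same way within each criterion, and a consistency ratio would be checked before using the weights.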
Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin
2018-04-26
Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance.
NASA Astrophysics Data System (ADS)
Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro
Monitoring daily health condition is necessary for preventing stress syndrome. In this study, a method was proposed for assessing mental and physiological condition, such as work stress or relaxation, continuously and in real time using heart rate variability. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated to assess mental and physiological condition. In this method, 20 heart beats were used to calculate these indexes, which were updated at every beat interval. Three conditions (sitting at rest, performing mental arithmetic, and watching a relaxing movie) were assessed using the proposed algorithm. The assessment accuracies were 71.9% and 55.8% for performing mental arithmetic and watching the relaxation movie, respectively. Because the mental and physiological condition is assessed using only the 20 most recent heart beats, this can be considered a real-time assessment method.
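The NEP-based index can be sketched by counting local maxima and minima in a sliding window of RR intervals. This is an interpretation of the described index, not the authors' implementation:

```python
def nep_ratio(rr_intervals):
    """Ratio of the number of extreme points (local maxima/minima) to
    the number of beats in a window of RR intervals.

    The paper computes such an index over the 20 most recent beats at
    every beat; faster beat-to-beat fluctuation yields a higher ratio.
    """
    extremes = sum(
        1 for i in range(1, len(rr_intervals) - 1)
        if (rr_intervals[i] - rr_intervals[i - 1]) *
           (rr_intervals[i + 1] - rr_intervals[i]) < 0)
    return extremes / len(rr_intervals)

# Alternating RR series (ms): every interior point is an extreme point.
rr = [800, 820, 790, 830, 780, 840, 770, 850, 760, 860]
print(nep_ratio(rr))  # 0.8 (8 extremes / 10 beats)
```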
Lu, Liqiang; Gopalan, Balaji; Benyahia, Sofiane
2017-06-21
Several discrete particle methods exist in the open literature to simulate fluidized bed systems, such as discrete element method (DEM), time driven hard sphere (TDHS), coarse-grained particle method (CGPM), coarse grained hard sphere (CGHS), and multi-phase particle-in-cell (MP-PIC). These different approaches usually solve the fluid phase in a Eulerian fixed frame of reference and the particle phase using the Lagrangian method. The first difference between these models lies in tracking either real particles or lumped parcels. The second difference is in the treatment of particle-particle interactions: by calculating collision forces (DEM and CGPM), using momentum conservation laws (TDHS and CGHS), or based on a particle stress model (MP-PIC). These major model differences lead to a wide range of accuracies and computation speeds. However, these models have never been compared directly using the same experimental dataset. In this research, a small-scale fluidized bed is simulated with these methods using the same open-source code MFIX. The results indicate that modeling the particle-particle collision by TDHS increases the computation speed while maintaining good accuracy. Also, lumping a few particles into a parcel increases the computation speed with little loss in accuracy. However, modeling particle-particle interactions with solids stress leads to a big loss in accuracy with a little increase in computation speed. The MP-PIC method predicts an unphysical particle-particle overlap, which results in incorrect voidage distribution and incorrect overall bed hydrodynamics. Based on this study, we recommend using the CGHS method for fluidized bed simulations due to its computational speed, which rivals that of MP-PIC while maintaining much better accuracy.
Automatic Coding of Short Text Responses via Clustering in Educational Assessment
ERIC Educational Resources Information Center
Zehner, Fabian; Sälzer, Christine; Goldhammer, Frank
2016-01-01
Automatic coding of short text responses opens new doors in assessment. We implemented and integrated baseline methods of natural language processing and statistical modelling by means of software components that are available under open licenses. The accuracy of automatic text coding is demonstrated by using data collected in the "Programme…
Perez-Cruz, Pedro E.; dos Santos, Renata; Silva, Thiago Buosi; Crovador, Camila Souza; Nascimento, Maria Salete de Angelis; Hall, Stacy; Fajardo, Julieta; Bruera, Eduardo; Hui, David
2014-01-01
Context Survival prognostication is important at the end of life. The accuracy of clinician prediction of survival (CPS) over time has not been well characterized. Objectives To examine changes in prognostication accuracy during the last 14 days of life in a cohort of patients with advanced cancer admitted to two acute palliative care units, and to compare the accuracy of the temporal and probabilistic approaches. Methods Physicians and nurses prognosticated survival daily for cancer patients in two hospitals until death/discharge using two prognostic approaches: temporal and probabilistic. We assessed accuracy for each method daily during the last 14 days of life, comparing accuracy at day −14 (baseline) with accuracy at each subsequent time point using a test of proportions. Results A total of 6718 temporal and 6621 probabilistic estimations were provided by physicians and nurses for 311 patients. Median (interquartile range) survival was 8 (4, 20) days. Temporal CPS had low accuracy (10–40%) that did not change over time. In contrast, probabilistic CPS was significantly more accurate (p<.05 at each time point) but became less accurate close to death. Conclusion Probabilistic CPS was consistently more accurate than temporal CPS over the last 14 days of life; however, its accuracy decreased as patients approached death. Our findings suggest that better tools to predict impending death are necessary. PMID:24746583
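The day-by-day accuracy comparison described above rests on a test of proportions. A minimal sketch of a two-proportion z-test, using hypothetical counts rather than the study's data:

```python
import math

def two_proportion_z(hits1, n1, hits2, n2):
    """Two-sided z-test comparing two accuracy proportions."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the normal approximation
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# hypothetical: baseline accuracy 120/400 vs. a later day's 90/380
z, p = two_proportion_z(120, 400, 90, 380)
```

With equal sample proportions the statistic is exactly zero, which is a quick sanity check on the implementation.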
Assessing the accuracy of different simplified frictional rolling contact algorithms
NASA Astrophysics Data System (ADS)
Vollebregt, E. A. H.; Iwnicki, S. D.; Xie, G.; Shackleton, P.
2012-01-01
This paper presents an approach for assessing the accuracy of different frictional rolling contact theories. The main characteristic of the approach is that it takes a statistically oriented view. This yields better insight into the behaviour of the methods in diverse circumstances (varying contact patch ellipticities; mixed longitudinal, lateral and spin creepages) than is obtained when only a small number of (basic) circumstances are used in the comparison. The range of contact parameters that occurs for realistic vehicles and tracks is assessed using simulations with the Vampire vehicle system dynamics (VSD) package. This shows that larger values of spin creepage occur rather frequently. Based on this, our approach is applied to typical cases for which railway VSD packages are used. The results show that the USETAB approach in particular, but also FASTSIM, gives considerably better results than the linear theory, Vermeulen-Johnson, Shen-Hedrick-Elkins and Polach methods when compared with the 'complete theory' of the CONTACT program.
Cervical vertebral maturation as a biologic indicator of skeletal maturity.
Santiago, Rodrigo César; de Miranda Costa, Luiz Felipe; Vitral, Robert Willer Farinazzo; Fraga, Marcelo Reis; Bolognese, Ana Maria; Maia, Lucianne Cople
2012-11-01
To identify and review the literature regarding the reliability of cervical vertebrae maturation (CVM) staging to predict the pubertal spurt. The selection criteria included cross-sectional and longitudinal descriptive studies in humans that evaluated qualitatively or quantitatively the accuracy and reproducibility of the CVM method on lateral cephalometric radiographs, as well as the correlation with a standard method established by hand-wrist radiographs. The searches retrieved 343 unique citations. Twenty-three studies met the inclusion criteria. Six articles had moderate to high scores, while 17 of 23 had low scores. Analysis also showed a moderate to high statistically significant correlation between CVM and hand-wrist maturation methods. There was a moderate to high reproducibility of the CVM method, and only one specific study investigated the accuracy of the CVM index in detecting peak pubertal growth. This systematic review has shown that the studies on CVM method for radiographic assessment of skeletal maturation stages suffer from serious methodological failures. Better-designed studies with adequate accuracy, reproducibility, and correlation analysis, including studies with appropriate sensitivity-specificity analysis, should be performed.
Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.
Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel
2017-08-18
Nowadays, sleep quality is one of the most important measures of healthy life, especially considering the large number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality. However, the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, in order to improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy. Therefore, special attention should be given to the feature extraction and selection process. In this paper, the performance of seven feature selection methods, as well as two feature rank aggregation methods, was compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used. A comprehensive set of 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification, drawn from temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance, while the Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was average, although they are known to generate more stable results and better accuracy. The Borda and RRA rank aggregation methods could not significantly outperform the conventional feature ranking methods. Among the conventional methods, some performed slightly better than others, although the choice of a suitable technique depends on the computational complexity and accuracy requirements of the user.
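Borda rank aggregation, one of the two aggregation methods compared above, can be sketched as follows (toy feature names, not the study's 49-feature set):

```python
def borda_aggregate(rankings):
    """Aggregate several feature rankings with the Borda count.

    Each ranking is a list of feature names ordered best-first;
    a feature earns (n_features - position) points per ranking.
    """
    features = rankings[0]
    n = len(features)
    scores = {f: 0 for f in features}
    for ranking in rankings:
        for pos, f in enumerate(ranking):
            scores[f] += n - pos
    # final consensus ranking: highest total score first
    return sorted(scores, key=scores.get, reverse=True)

# toy example: three rankers ordering four features
ranks = [["f1", "f2", "f3", "f4"],
         ["f2", "f1", "f3", "f4"],
         ["f1", "f3", "f2", "f4"]]
consensus = borda_aggregate(ranks)
```

Each individual ranking contributes points independently, so one unstable ranker cannot dominate the consensus, which is why Borda-style aggregation tends to produce more stable results.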
A Flexible Analysis Tool for the Quantitative Acoustic Assessment of Infant Cry
Reggiannini, Brian; Sheinkopf, Stephen J.; Silverman, Harvey F.; Li, Xiaoxue; Lester, Barry M.
2015-01-01
Purpose In this article, the authors describe and validate the performance of a modern acoustic analyzer specifically designed for infant cry analysis. Method Utilizing known algorithms, the authors developed a method to extract acoustic parameters describing infant cries from standard digital audio files. They used a frame rate of 25 ms with a frame advance of 12.5 ms. Cepstral-based acoustic analysis proceeded in 2 phases, computing frame-level data and then organizing and summarizing this information within cry utterances. Using signal detection methods, the authors evaluated the accuracy of the automated system to determine voicing and to detect fundamental frequency (F0) as compared to voiced segments and pitch periods manually coded from spectrogram displays. Results The system detected F0 with 88% to 95% accuracy, depending on tolerances set at 10 to 20 Hz. Receiver operating characteristic analyses demonstrated very high accuracy at detecting voicing characteristics in the cry samples. Conclusions This article describes an automated infant cry analyzer with high accuracy to detect important acoustic features of cry. A unique and important aspect of this work is the rigorous testing of the system’s accuracy as compared to ground-truth manual coding. The resulting system has implications for basic and applied research on infant cry development. PMID:23785178
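The reported F0 detection accuracy at 10 to 20 Hz tolerances amounts to counting frames whose detected pitch falls within the tolerance of the manually coded reference. A minimal sketch with made-up frame values:

```python
def f0_hit_rate(detected, reference, tol_hz):
    """Fraction of voiced frames where the detected F0 is within
    tol_hz of the manually coded reference (None marks unvoiced)."""
    hits = total = 0
    for d, r in zip(detected, reference):
        if r is None:          # only score frames coded as voiced
            continue
        total += 1
        if d is not None and abs(d - r) <= tol_hz:
            hits += 1
    return hits / total

# made-up frame-level F0 values (Hz), not data from the study
ref = [440.0, 452.0, None, 460.0]
det = [445.0, 430.0, None, 462.0]
rate_10 = f0_hit_rate(det, ref, 10)   # 2 of 3 voiced frames within 10 Hz
rate_25 = f0_hit_rate(det, ref, 25)   # all 3 voiced frames within 25 Hz
```

As in the study, widening the tolerance raises the measured accuracy, which is why a range (e.g. 88% to 95%) is reported rather than a single figure.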
Dentalmaps: Automatic Dental Delineation for Radiotherapy Planning in Head-and-Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thariat, Juliette, E-mail: jthariat@hotmail.com; Ramus, Liliane; INRIA
Purpose: To propose an automatic atlas-based segmentation framework for dental structures, called Dentalmaps, and to assess its accuracy and relevance for guiding dental care in the context of intensity-modulated radiotherapy. Methods and Materials: A multi-atlas-based segmentation, less sensitive to artifacts than previously published head-and-neck segmentation methods, was used. The manual segmentations of a 21-patient database were first deformed onto the query using nonlinear registrations with the training images and then fused to estimate the consensus segmentation of the query. Results: The framework was evaluated with a leave-one-out protocol. The maximum doses estimated using manual contours were considered as ground truth and compared with the maximum doses estimated using automatic contours. The dose estimation error was within 2-Gy accuracy in 75% of cases (with a median of 0.9 Gy), whereas it was within 2-Gy accuracy in only 30% of cases with the visual estimation method without any contour, which is the routine practice procedure. Conclusions: Dose estimates using this framework were more accurate than visual estimates without dental contours. Dentalmaps represents a useful documentation and communication tool between radiation oncologists and dentists in routine practice. Prospective multicenter assessment is underway on patients extrinsic to the database.
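The fusion step of a multi-atlas pipeline is often implemented as per-voxel majority voting over the registered atlas labels; the abstract does not specify the fusion rule used, so the following is an illustrative sketch under that assumption:

```python
from collections import Counter

def majority_vote_fusion(label_maps):
    """Fuse per-atlas label maps (flattened lists of voxel labels,
    already registered to the query) into a consensus segmentation
    by per-voxel majority vote."""
    fused = []
    for voxel_labels in zip(*label_maps):
        fused.append(Counter(voxel_labels).most_common(1)[0][0])
    return fused

# three hypothetical atlases voting over five voxels (0 = background, 1 = tooth)
atlas_a = [0, 1, 1, 0, 1]
atlas_b = [0, 1, 0, 0, 1]
atlas_c = [1, 1, 1, 0, 0]
consensus = majority_vote_fusion([atlas_a, atlas_b, atlas_c])
```

Fusing several independently registered atlases averages out individual registration errors, which is one reason multi-atlas approaches are less sensitive to artifacts than single-atlas ones.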
Are general surgeons able to accurately self-assess their level of technical skills?
Rizan, C; Ansell, J; Tilston, T W; Warren, N; Torkington, J
2015-11-01
Self-assessment is a way of improving technical capabilities without the need for trainer feedback. It can identify areas for improvement and promote professional medical development. The aim of this review was to identify whether self-assessment is an accurate form of technical skills appraisal in general surgery. The PubMed, MEDLINE(®), Embase(™) and Cochrane databases were searched for studies assessing the reliability of self-assessment of technical skills in general surgery. For each study, we recorded the skills assessed and the evaluation methods used. Common endpoints between studies were compared to provide recommendations based on the levels of evidence. Twelve studies met the inclusion criteria from 22,292 initial papers. There was no level 1 evidence published. All papers compared the correlation between self-appraisal versus an expert score but differed in the technical skills assessment and the evaluation tools used. The accuracy of self-assessment improved with increasing experience (level 2 recommendation), age (level 3 recommendation) and the use of video playback (level 3 recommendation). Accuracy was reduced by stressful learning environments (level 2 recommendation), lack of familiarity with assessment tools (level 3 recommendation) and in advanced surgical procedures (level 3 recommendation). Evidence exists to support the reliability of self-assessment of technical skills in general surgery. Several variables have been shown to affect the accuracy of self-assessment of technical skills. Future work should focus on evaluating the reliability of self-assessment during live operating procedures.
Validation of a Spectral Method for Quantitative Measurement of Color in Protein Drug Solutions.
Yin, Jian; Swartz, Trevor E; Zhang, Jian; Patapoff, Thomas W; Chen, Bartolo; Marhoul, Joseph; Shih, Norman; Kabakoff, Bruce; Rahimi, Kimia
2016-01-01
A quantitative spectral method has been developed to precisely measure the color of protein solutions. In this method, a spectrophotometer is utilized to capture the visible absorption spectrum of a protein solution, which can then be converted to color values (L*a*b*) that represent human perception of color in a quantitative three-dimensional space. These quantitative values (L*a*b*) allow for calculating the best match of a sample's color to a European Pharmacopoeia reference color solution. In order to qualify this instrument and assay for use in clinical quality control, a technical assessment was conducted to evaluate the assay suitability and precision. Setting acceptance criteria for this study required development and implementation of a unique statistical method for assessing precision in 3-dimensional space. Different instruments, cuvettes, protein solutions, and analysts were compared in this study. The instrument accuracy, repeatability, and assay precision were determined. The instrument and assay are found suitable for use in assessing the color of drug substances and drug products, and are comparable to the current European Pharmacopoeia visual assessment method. In the biotechnology industry, visual assessment is the most commonly used method for color characterization, batch release, and stability testing of liquid protein drug solutions. Using this method, an analyst visually determines the color of the sample by choosing the closest match to a standard color series. This visual method can be subjective because it requires an analyst to make a judgment of the best match of the sample's color to the standard color series, and it does not capture data on hue and chroma that would allow for improved product characterization and the ability to detect subtle differences between samples. To overcome these challenges, we developed a quantitative spectral method for color determination that greatly reduces the variability in measuring color and allows for a more precise understanding of color differences. In this study, we established a statistical method for assessing precision in 3-dimensional space and demonstrated that the quantitative spectral method is comparable with respect to precision and accuracy to the current European Pharmacopoeia visual assessment method. © PDA, Inc. 2016.
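Matching a sample's L*a*b* values to the closest reference solution reduces to a nearest-neighbour search under a colour-difference metric. A minimal sketch using the simple CIE76 distance and made-up reference values (not actual Ph. Eur. colour coordinates):

```python
import math

def delta_e_76(c1, c2):
    """CIE76 colour difference between two L*a*b* triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(c1, c2)))

def best_match(sample_lab, references):
    """Return the name of the reference colour closest to the sample."""
    return min(references, key=lambda name: delta_e_76(sample_lab, references[name]))

# hypothetical reference solutions, labelled arbitrarily
refs = {"B5": (95.0, -0.5, 8.0),
        "B6": (97.0, -0.3, 4.0),
        "B7": (98.5, -0.1, 2.0)}
match = best_match((96.8, -0.4, 4.5), refs)
```

Because the match is a numeric minimum rather than a visual judgment, two analysts measuring the same sample will always report the same reference colour.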
van Ijsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Reiber, J H C; Kaptein, B L
2011-10-13
Accurate in vivo measurement methods for wear in total knee arthroplasty are required for timely detection of excessive wear and to assess new implant designs. Component separation measurements based on model-based Roentgen stereophotogrammetric analysis (RSA), in which 3-dimensional reconstruction methods are used, have shown promising results, yet the robustness of these measurements is unknown. In this study, the accuracy and robustness of this measurement for clinical usage was assessed. The validation experiments were conducted in an RSA setup with a phantom of a knee in a vertical orientation. 72 RSA images were created using different variables for knee orientation, two prosthesis types (fixed-bearing Duracon knee and fixed-bearing Triathlon knee) and accuracies of the reconstruction models. The measurement error was determined for absolute and relative measurements, and the effects of knee positioning and true separation distance were determined. The measurement method overestimated the separation distance by 0.1 mm on average. The precision of the method was 0.10 mm (2*SD) for the Duracon prosthesis and 0.20 mm for the Triathlon prosthesis. A slight difference in error was found between measurements with 0° and 10° anterior tilt (difference = 0.08 mm, p = 0.04). An accuracy of 0.1 mm and precision of 0.2 mm can be achieved for linear wear measurements based on model-based RSA, which is more than adequate for clinical applications. The measurement is robust in clinical settings. Although anterior tilt seems to influence the measurement, the size of this influence is small and clinically irrelevant. Copyright © 2011 Elsevier Ltd. All rights reserved.
Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara
2018-04-06
The present study tested the combination of an established and validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. The methodology combines fake-food image recognition using deep learning with food matching and standardization based on natural language processing. The former is distinctive in that it uses a single deep learning network to perform both segmentation and classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching first describes each recognized food item in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. The final accuracy of the deep learning model, trained on fake-food images acquired by 124 study participants and covering fifty-five food classes, was 92.18%, while the food matching was performed with a classification accuracy of 93%. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
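The two reported segmentation measures, pixel accuracy and Intersection over Union, can be sketched on flattened label arrays (toy labels, not the study's fifty-five classes):

```python
def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    correct = sum(p == t for p, t in zip(pred, truth))
    return correct / len(truth)

def iou(pred, truth, cls):
    """Intersection over Union for one class: overlap of the predicted
    and true pixel sets divided by their union."""
    inter = sum(p == cls and t == cls for p, t in zip(pred, truth))
    union = sum(p == cls or t == cls for p, t in zip(pred, truth))
    return inter / union if union else 0.0

# toy flattened label maps (0 = background, 1 and 2 = food classes)
truth = [0, 1, 1, 2, 2, 2]
pred  = [0, 1, 2, 2, 2, 0]
acc = pixel_accuracy(pred, truth)   # 4 of 6 pixels correct
iou_2 = iou(pred, truth, 2)
```

IoU is the stricter of the two: a model can score high pixel accuracy by predicting the dominant background class, but IoU penalizes both missed and spurious pixels of each food class.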
Kaynar, Ozgur; Karapinar, Tolga; Hayirli, Armagan; Baydar, Ersoy
2015-12-01
Data on accuracy and precision of the Lactate Scout point-of-care (POC) analyzer in ovine medicine are lacking. The purpose of the study was to assess the reliability of the Lactate Scout in sheep. Fifty-seven sheep of varying ages with various diseases were included. Blood lactate concentration in samples collected from the jugular vein was measured immediately on the Lactate Scout. Plasma L-lactate concentration was measured by the Cobas autoanalyzer as the reference method. Data were subjected to Student's t-test, Passing-Bablok regression, and Bland-Altman plot analyses for comparison and assessment of accuracy, precision, and reliability. Plasma L-lactate concentration was consistently lower than blood L-lactate concentration (3.06 ± 0.24 vs 3.3 ± 0.3 mmol/L, P < .0001). There was a positive correlation between plasma and blood L-lactate concentrations (r = .98, P < .0001). The Lactate Scout had 99% accuracy and 98% precision relative to the reference method. Blood (Y) and plasma (X) L-lactate concentrations were fitted to Y = 0.28 + 1.00 · X, with a residual standard deviation of 0.31 and a negligible deviation from the identity line (P = .93). The bias was fitted to Y = 0.10 + 0.05 · X, with Sy.x of 0.44 (P < .07). The Lactate Scout has high accuracy and precision, with a negligible bias. It is a reliable POC analyzer for assessing L-lactate concentration in ovine medicine. © 2015 American Society for Veterinary Clinical Pathology.
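The Bland-Altman analysis used above summarizes method agreement as a mean bias with 95% limits of agreement. A minimal sketch with hypothetical paired readings (not the study's data):

```python
import statistics

def bland_altman(method_a, method_b):
    """Bland-Altman bias and 95% limits of agreement between two
    paired measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired lactate readings (mmol/L)
poc = [1.2, 2.5, 3.8, 5.1, 6.0]   # point-of-care analyzer
lab = [1.0, 2.3, 3.5, 4.9, 5.7]   # laboratory reference
bias, (lo, hi) = bland_altman(poc, lab)
```

A positive bias here mirrors the study's finding that whole-blood readings run consistently higher than the plasma reference.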
Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef
2016-01-01
We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SD(SE)): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SD(SE): 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice.
Variability of Diabetes Alert Dog Accuracy in a Real-World Setting
Gonder-Frederick, Linda A.; Grabman, Jesse H.; Shepard, Jaclyn A.; Tripathi, Anand V.; Ducar, Dallas M.; McElgunn, Zachary R.
2017-01-01
Background: Diabetes alert dogs (DADs) are growing in popularity as an alternative method of glucose monitoring for individuals with type 1 diabetes (T1D). Only a few empirical studies have assessed DAD accuracy, with inconsistent results. The present study examined DAD accuracy and variability in performance under real-world conditions using a convenience sample of owner-report diaries. Method: Eighteen DAD owners (44.4% female; 77.8% youth) with T1D completed diaries of DAD alerts during the first year after placement. Diary entries included daily BG readings and DAD alerts. For each DAD, the percentage of hits (alert with BG ≤5.0 or ≥11.1 mmol/L; ≤90 or ≥200 mg/dl), percentage of misses (no alert with BG out of range), and percentage of false alarms (alert with BG in range) were computed. Sensitivity, specificity, positive likelihood ratio (PLR), and true positive rates were also calculated. Results: Overall comparison of DAD hits to misses yielded significantly more hits for both low and high BG. Total sensitivity was 57.0%, with higher sensitivity to low BG (59.2%) than to high BG (56.1%). Total specificity was 49.3% and PLR = 1.12. However, high variability in accuracy was observed across DADs, with low BG sensitivity ranging from 33% to 100%. The proportion of DADs achieving ≥60%, ≥65%, and ≥70% true positive rates was 71%, 50%, and 44%, respectively. Conclusions: DADs may be able to detect out-of-range BG, but variability across DADs is evident. Larger trials are needed to further assess DAD accuracy and to identify factors influencing the complexity of DAD accuracy in BG detection. PMID:28627305
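The reported sensitivity, specificity, and positive likelihood ratio follow directly from the alert counts. A minimal sketch with hypothetical counts for a single dog (not figures from the study):

```python
def dad_accuracy(hits, misses, false_alarms, correct_rejections):
    """Sensitivity, specificity and positive likelihood ratio from
    alert counts against out-of-range blood glucose readings."""
    sensitivity = hits / (hits + misses)
    specificity = correct_rejections / (correct_rejections + false_alarms)
    plr = sensitivity / (1 - specificity)  # how much an alert shifts the odds
    return sensitivity, specificity, plr

# hypothetical diary counts for one dog
sens, spec, plr = dad_accuracy(hits=57, misses=43,
                               false_alarms=51, correct_rejections=49)
```

A PLR barely above 1, as reported in the study, means an alert only slightly raises the odds that BG is truly out of range.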
Zhu, Yu; Liu, Zhao-Gang; Jin, Guang-Ze
2013-05-01
Integrating the health assessment results of individual trees into a subcompartment (or stand) scale assessment could improve the accuracy of subcompartment (or stand) scale health assessment and realize the coupling between the individual tree scale and the subcompartment (or stand) scale, providing a theoretical basis for forest health management. Taking the natural Larix gmelinii forest in the Great Xing'an Mountains as the object, a health assessment indicator system for individual trees was established, which included root state, canopy defoliation degree, crown transparency, crown overlap, crown dieback ratio, live crown ratio, crown skewness, and a vertical competition index. Principal component analysis (PCA) was employed to eliminate correlations among indicators, the entropy value method was adopted to determine the weight of each indicator, and the health status of individual L. gmelinii trees was assessed by the fuzzy synthetic evaluation (FSE) method. Based on the health assessment results, a prediction model of individual tree health was established by the discriminant analysis (DA) method. The results showed that 36.7% of the trees were in the sub-healthy grade, while only 12.9% were in the healthy grade. The proportion of trees in the unhealthy grade, 21.1% of the total, exceeded that in the healthy grade. The prediction accuracy of the established model was 86.3%. More rational and effective management measures should be taken to improve tree health.
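The entropy value method used to weight the indicators assigns larger weights to indicators whose values vary more across trees. A minimal sketch on a toy score matrix (two indicators, three trees; not the study's data):

```python
import math

def entropy_weights(matrix):
    """Entropy-method indicator weights from a decision matrix
    (rows = trees, columns = indicators, values > 0)."""
    n, m = len(matrix), len(matrix[0])
    k = 1.0 / math.log(n)
    weights = []
    for j in range(m):
        col_sum = sum(row[j] for row in matrix)
        p = [row[j] / col_sum for row in matrix]
        entropy = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        weights.append(1 - entropy)          # divergence of indicator j
    total = sum(weights)
    return [w / total for w in weights]      # normalize to sum to 1

# toy indicator scores: column 0 is nearly uniform, column 1 varies widely
scores = [[0.9, 0.2],
          [0.8, 0.9],
          [0.7, 0.1]]
w = entropy_weights(scores)
```

An indicator with near-uniform values carries little discriminating information, so its entropy approaches 1 and its weight approaches 0; the widely varying second indicator therefore dominates here.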
Assessing the dosimetric and geometric accuracy of stereotactic radiosurgery
NASA Astrophysics Data System (ADS)
Dimitriadis, Alexis
Stereotactic radiosurgery (SRS) is a non-invasive treatment predominantly used for the management of malignant and benign brain tumours. The treatment can be delivered by various platforms in a single fraction, where a high dose of radiation is delivered to the target whilst the surrounding healthy tissue is spared. This requires a high degree of accuracy in terms of the dose level delivered but also in terms of geometric precision. The purpose of this work was to identify the variations in SRS practice in the UK and develop a novel method, compatible with all practices, capable of assessing the accuracy of delivery. The motivation behind this effort was to contribute to safety in SRS delivery, provide confidence through a quality assurance audit and form a basis to support standardisation in SRS. A national survey was performed to investigate SRS practices in the UK and to help guide the methodology of this thesis. This resulted in the development of a method for an end-to-end audit of SRS. It was based on an anthropomorphic head phantom with a medium-sized target located centrally in the brain, in close proximity to the brainstem. This realistic patient scenario was presented to all 26 radiosurgery centres in the UK, which were asked to treat it with SRS. The dose delivered was assessed using two novel commercially available radiation detectors, a plastic scintillator and radiochromic film. These detectors were characterised for measuring the dose delivered in SRS. Another established dosimetry system, alanine, was also used alongside these detectors to assess the accuracy of each delivery. The results allowed the assessment of SRS practices in the UK and the comparison of all centres that participated in the audit. The results were also used to evaluate the performance of the dosimeters used for quality assurance measurements and audit.
Akiyama, Yoshihiro B; Iseri, Erina; Kataoka, Tomoya; Tanaka, Makiko; Katsukoshi, Kiyonori; Moki, Hirotada; Naito, Ryoji; Hem, Ramrav; Okada, Tomonari
2017-02-15
In the present study, we determined the common morphological characteristics of the feces of Mytilus galloprovincialis to develop a method for visually discriminating the feces of this mussel in deposited materials. This method can be used to assess the effect of mussel feces on benthic environments. The accuracy of visual morphology-based discrimination of mussel feces in deposited materials was confirmed by DNA analysis. Eighty-nine percent of mussel feces shared five common morphological characteristics. Of the 372 animal species investigated, only four species shared all five of these characteristics. More than 96% of the samples were visually identified as M. galloprovincialis feces on the basis of morphology of the particles containing the appropriate mitochondrial DNA. These results suggest that mussel feces can be discriminated with high accuracy on the basis of their morphological characteristics. Thus, our method can be used to quantitatively assess the effect of mussel feces on local benthic environments. Copyright © 2016 Elsevier Ltd. All rights reserved.
France, Logan K; Vermillion, Meghan S; Garrett, Caroline M
2018-01-01
Blood pressure is a critical parameter for evaluating cardiovascular health, assessing effects of drugs and procedures, monitoring physiologic status during anesthesia, and making clinical decisions. The placement of an arterial catheter is the most direct and accurate method for measuring blood pressure; however, this approach is invasive and of limited use during brief sedated examinations. The objective of this study was to determine which method of indirect blood pressure monitoring was most accurate compared with measurement by direct arterial catheterization. In addition, we sought to determine the relative accuracy of each indirect method (compared with direct arterial measurement) at a given body location and to assess whether the accuracy of each indirect method was dependent on body location. We compared direct blood pressure measurements by means of catheterization of the saphenous artery with oscillometric and ultrasonic Doppler flow detection measurements at 3 body locations (forearm, distal leg, and tail base) in 16 anesthetized, male rhesus macaques. The results indicate that oscillometry at the forearm is the best indirect method and location for accurately and consistently measuring blood pressure in healthy male rhesus macaques.
NASA Astrophysics Data System (ADS)
Brahmi, Djamel; Cassoux, Nathalie; Serruys, Camille; Giron, Alain; Lehoang, Phuc; Fertil, Bernard
1999-05-01
To support ophthalmologists in their daily routine and enable quantitative assessment of the progression of cytomegalovirus (CMV) infection as observed on series of retinal angiograms, a methodology allowing an accurate comparison of retinal borders has been developed. In order to evaluate the accuracy of borders, ophthalmologists were asked to repeatedly outline the boundaries between infected and noninfected areas. The accuracy of drawing relies on local features such as contrast, image quality, and background, all factors which make the boundaries more or less perceptible from one part of an image to another. In order to estimate the accuracy of retinal borders directly from image analysis, an artificial neural network (a succession of unsupervised and supervised neural networks) has been designed to correlate the accuracy of drawing (as calculated from ophthalmologists' hand-outlines) with local features of the underlying image. Our method has been applied to the quantification of CMV retinitis. It is shown that border accuracy is properly predicted and characterized by a confidence envelope that allows, after a registration phase based on fixed landmarks such as vessel forks, an accurate assessment of the evolution of CMV infection.
Yang, Xiaoyan; Chen, Longgao; Li, Yingkui; Xi, Wenjia; Chen, Longqian
2015-07-01
Land use/land cover (LULC) inventory provides an important dataset for regional planning and environmental assessment. To obtain the LULC inventory efficiently, we compared LULC classifications based on single satellite images with a rule-based classification based on multi-seasonal imagery in Lianyungang City, a coastal city in China, using CBERS-02 (the second China-Brazil Earth Resources Satellite) images. The overall accuracies of the classifications based on single images are 78.9, 82.8, and 82.0% in winter, early summer, and autumn, respectively. The rule-based classification improves the accuracy to 87.9% (kappa 0.85), suggesting that combining multi-seasonal images can considerably improve the classification accuracy over any single-image classification. This method could also be used to analyze seasonal changes of LULC types, especially those associated with tidal changes in coastal areas. The distribution and inventory of LULC types with an overall accuracy of 87.9% and a spatial resolution of 19.5 m can efficiently assist regional planning and environmental assessment in Lianyungang City. This rule-based classification provides guidance for improving accuracy in coastal areas with distinct temporal spectral LULC features.
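The overall accuracy and kappa coefficient reported above are both derived from the classification's confusion matrix. A minimal sketch with a toy 3-class matrix (not the study's error matrix):

```python
def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square LULC confusion
    matrix (rows = reference classes, columns = mapped classes)."""
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    # chance agreement expected from the row and column marginals
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# toy confusion matrix for three LULC classes
cm = [[45, 3, 2],
      [4, 40, 6],
      [1, 5, 44]]
acc, kappa = overall_accuracy_and_kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is usually reported alongside overall accuracy (0.85 alongside 87.9% in the study).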
Point-of-care wound visioning technology: Reproducibility and accuracy of a wound measurement app
Anderson, John A. E.; Evans, Robyn; Woo, Kevin; Beland, Benjamin; Sasseville, Denis; Moreau, Linda
2017-01-01
Background Current wound assessment practices are lacking on several measures. For example, the most common method for measuring wound size is a ruler, which has been demonstrated to be crude and inaccurate. An increase in periwound temperature is a classic sign of infection, but skin temperature is not always measured during wound assessments. To address this, we have developed a smartphone application that enables non-contact wound surface area and temperature measurements. Here we evaluate the inter-rater reliability and accuracy of this novel point-of-care wound assessment tool. Methods and findings The wounds of 87 patients were measured using the Swift Wound app and a ruler. The skin surface temperature of 37 patients was also measured using an infrared FLIR™ camera integrated with the Swift Wound app and using the clinically accepted reference thermometer Exergen DermaTemp 1001. Accuracy was determined by assessing differences in surface area measurements of 15 plastic wounds between a digital planimeter of known accuracy and the Swift Wound app. To evaluate the impact of training on the reproducibility of the Swift Wound app measurements, three novice raters with no wound care training measured the length, width, and area of 12 plastic model wounds using the app. High inter-rater reliabilities (ICC = 0.97–1.00) and high accuracies were obtained using the Swift Wound app across raters of different levels of training in wound care. The ruler method also yielded reliable wound measurements (ICC = 0.92–0.97), albeit lower than those of the Swift Wound app. Furthermore, there was no statistically significant difference between the temperature differences measured using the infrared camera and the clinically tested reference thermometer. Conclusions The Swift Wound app provides highly reliable and accurate wound measurements.
The FLIR™ infrared camera integrated into the Swift Wound app provides skin temperature readings equivalent to the clinically tested reference thermometer. Thus, the Swift Wound app has the advantage of being a non-contact, easy-to-use wound measurement tool that allows clinicians to image, measure, and track wound size and temperature from one visit to the next. In addition, this tool may also be used by patients and their caregivers for home monitoring. PMID:28817649
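Inter-rater reliability figures like the ICCs above can be computed from a subjects × raters score matrix. Below is a sketch of one common variant, the two-way random-effects ICC(2,1); the wound-area readings are invented for illustration and are not the study's measurements:

```python
def icc_2_1(scores):
    """Two-way random-effects, single-measure ICC(2,1).
    scores[subject][rater] holds one measurement per subject-rater pair."""
    n, k = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n * k)
    row_means = [sum(r) / k for r in scores]
    col_means = [sum(c) / n for c in zip(*scores)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_tot = sum((x - grand) ** 2 for r in scores for x in r)
    ss_err = ss_tot - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative wound areas (cm^2): 4 model wounds, 3 raters
scores = [[10.2, 10.4, 10.3],
          [5.1, 5.0, 5.2],
          [20.0, 20.5, 20.1],
          [2.0, 2.1, 2.0]]
icc = icc_2_1(scores)
```

With between-subject variance far larger than rater disagreement, the ICC approaches 1, matching the pattern of high reliabilities reported above.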
Diagnostic accuracy of eye movements in assessing pedophilia.
Fromberger, Peter; Jordan, Kirsten; Steinkrauss, Henrike; von Herder, Jakob; Witzel, Joachim; Stolpmann, Georg; Kröner-Herwig, Birgit; Müller, Jürgen Leo
2012-07-01
Given that recurrent sexual interest in prepubescent children is one of the strongest single predictors for pedosexual offense recidivism, valid and reliable diagnosis of pedophilia is of particular importance. Nevertheless, current assessment methods still fail to fulfill psychometric quality criteria. The aim of the study was to evaluate the diagnostic accuracy of eye-movement parameters in regard to pedophilic sexual preferences. Eye movements were measured while 22 pedophiles (according to ICD-10 F65.4 diagnosis), 8 non-pedophilic forensic controls, and 52 healthy controls simultaneously viewed the picture of a child and the picture of an adult. Fixation latency was assessed as a parameter for automatic attentional processes and relative fixation time to account for controlled attentional processes. Receiver operating characteristic (ROC) analyses, which are based on calculated age-preference indices, were carried out to determine the classifier performance. Cross-validation using the leave-one-out method was used to test the validity of classifiers. Pedophiles showed significantly shorter fixation latencies and significantly longer relative fixation times for child stimuli than either of the control groups. Classifier performance analysis revealed an area under the curve (AUC) = 0.902 for fixation latency and an AUC = 0.828 for relative fixation time. The eye-tracking method based on fixation latency discriminated between pedophiles and non-pedophiles with a sensitivity of 86.4% and a specificity of 90.0%. Cross-validation demonstrated good validity of eye-movement parameters. Despite some methodological limitations, measuring eye movements seems to be a promising approach to assess deviant pedophilic interests. Eye movements, which represent automatic attentional processes, demonstrated high diagnostic accuracy. © 2012 International Society for Sexual Medicine.
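The AUC, sensitivity, and specificity reported above derive from per-subject index scores; a minimal sketch using the Mann-Whitney formulation of AUC (all scores and the cutoff are invented for illustration):

```python
def auc(pos, neg):
    """AUC as the probability that a positive case scores above a
    negative one (Mann-Whitney U / (n_pos * n_neg)); ties count 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

pos = [0.90, 0.85, 0.80, 0.60]          # index values, target group
neg = [0.70, 0.50, 0.40, 0.30, 0.20]    # index values, controls
a = auc(pos, neg)

# Sensitivity/specificity at one illustrative decision cutoff
cut = 0.55
sens = sum(p >= cut for p in pos) / len(pos)
spec = sum(n < cut for n in neg) / len(neg)
```

Sweeping `cut` over all observed values and plotting sensitivity against 1 − specificity reproduces the ROC curve whose area the function computes directly.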
Combining accuracy assessment of land-cover maps with environmental monitoring programs
Stephen V. Stehman; Raymond L. Czaplewski; Sarah M. Nusser; Limin Yang; Zhiliang Zhu
2000-01-01
A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring...
NASA Astrophysics Data System (ADS)
Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe
1999-07-01
In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), an SVS serves to enhance pilot spatial awareness through 3-dimensional perspective views of the objects in the environment. Therefore all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles, and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annexes 4, 11, 14, and 15 and RTCA DO-200A/EUROCAE ED-76 were established in the concept. They can be differentiated into object-related quality-assessment methods (keywords: accuracy, resolution, timeliness, traceability, assurance level, completeness, format) and GIS-related quality-assessment methods (keywords: system tolerances, logical consistency, and visual quality assessment). An airport database is integrated in the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support both flight-guidance SVS and other aeronautical applications such as SMGCS (Surface Movement Guidance and Control Systems) and flight simulation. Most airport data are not yet available. Even though data for runways, thresholds, taxilines, and parking positions were to be generated by the end of 1997 (ICAO Annexes 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large numbers of airport objects with high spatial resolution and accuracy in much shorter time than with classical surveying methods.
Remotely sensed images can be acquired from satellite or aircraft platforms. To achieve the most stringent horizontal accuracy requirement stated in ICAO Annex 14, that for runway centerlines (0.50 m), at present only images acquired from aircraft-based sensors can be used as source data. Ground reference by GCPs (ground control points) remains obligatory. A DEM (Digital Elevation Model) can be created automatically in the photogrammetric process and used as a highly accurate elevation model for the airport area. The final verification of airport data is accomplished by independently surveyed runway and taxiway control points. The concept of generating airport data by means of remote sensing and photogrammetry was tested with the Stuttgart/Germany airport. The results proved that the final accuracy was within the accuracy specification defined by ICAO Annex 14.
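Verification against independently surveyed check points typically reduces to a horizontal RMSE that is compared with the applicable requirement; a small sketch (the residual values and pass criterion are illustrative):

```python
import math

def horizontal_rmse(residuals_xy):
    """RMSE of horizontal check-point residuals (dx, dy), in metres."""
    sq = [dx * dx + dy * dy for dx, dy in residuals_xy]
    return math.sqrt(sum(sq) / len(sq))

# Illustrative residuals at surveyed runway/taxiway check points (m)
res = [(0.12, -0.08), (-0.20, 0.15), (0.05, 0.10), (-0.11, -0.14)]
rmse = horizontal_rmse(res)
meets_annex14 = rmse <= 0.50  # 0.50 m runway-centerline requirement cited above
```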
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Hodjatzadeh, M.; Samii, M. V.; Doll, C. E.; Hart, R. C.; Mistretta, G. D.
1991-01-01
The development of the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype for sequential orbit determination on a Disk Operating System (DOS) based personal computer (PC) is addressed, along with the results of a study comparing the orbit determination accuracy of a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E with that of an established batch least squares system, the Goddard Trajectory Determination System (GTDS). Independent assessments were made to examine the consistency of the results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for the Earth Radiation Budget Satellite (ERBS); the maximum solution differences were less than 25 m after the filter had reached steady state.
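The batch-versus-sequential comparison can be illustrated on a toy problem: for a constant state, a Kalman-style sequential filter with no process noise converges to the batch least squares solution (here simply the sample mean). All numbers below are illustrative, not ERBS data:

```python
import random

random.seed(1)
truth = 7000.0  # illustrative "orbit parameter", e.g. semi-major axis in km
meas = [truth + random.gauss(0, 0.025) for _ in range(200)]  # ~25 m noise

# Batch least squares: for a constant state with an identity measurement
# model, the estimate is the sample mean of all observations.
batch = sum(meas) / len(meas)

# Sequential (Kalman-style) filter processing one measurement at a time.
x, p, r = 0.0, 1e6, 0.025 ** 2  # state, state variance, measurement variance
for z in meas:
    gain = p / (p + r)
    x = x + gain * (z - x)   # update state with the innovation
    p = (1 - gain) * p       # shrink the state variance

print(abs(batch - x))  # the two solutions agree closely at steady state
```

The sequential filter delivers an estimate after every measurement rather than after the whole arc, which is the operational appeal of RTOD-style processing.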
Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer
2016-09-01
Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Sharma, Kanishka; Caroli, Anna; Quach, Le Van; Petzold, Katja; Bozzetto, Michela; Serra, Andreas L.; Remuzzi, Giuseppe; Remuzzi, Andrea
2017-01-01
Background In autosomal dominant polycystic kidney disease (ADPKD), total kidney volume (TKV) is regarded as an important biomarker of disease progression, and different methods are available to assess kidney volume. The purpose of this study was to identify the most efficient kidney volume computation method to be used in clinical studies evaluating the effectiveness of treatments on ADPKD progression. Methods and findings We measured single kidney volume (SKV) on two series of MR and CT images from clinical studies on ADPKD (experimental dataset) by two independent operators (expert and beginner), twice, using all of the available methods: polyline manual tracing (reference method), free-hand manual tracing, semi-automatic tracing, Stereology, and the Mid-slice and Ellipsoid methods. Additionally, the expert operator also measured kidney length. We compared the different methods for reproducibility, accuracy, precision, and time required. In addition, we performed a validation study to evaluate the sensitivity of these methods to detect the between-treatment group difference in TKV change over one year, using MR images from a previous clinical study. Reproducibility was higher on CT than MR for all methods, being highest for the manual and semi-automatic contouring methods (planimetry). On MR, planimetry showed the highest accuracy and precision, while on CT the accuracy and precision of the planimetry and Stereology methods were comparable. The Mid-slice and Ellipsoid methods, as well as kidney length, were fast but provided only a rough estimate of kidney volume. The results of the validation study indicated that planimetry and Stereology require a substantially smaller number of patients to detect treatment-induced changes in kidney volume than the other methods. Conclusions Planimetry should be preferred over fast, simplified methods for accurately monitoring ADPKD progression and assessing drug treatment effects.
Expert operators, especially on MR images, are required for reliable estimation of kidney volume. The use of efficient TKV quantification methods considerably reduces the number of patients to enrol in clinical investigations, making them more feasible and meaningful. PMID:28558028
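Of the fast methods compared above, the Ellipsoid method approximates single kidney volume from three orthogonal diameters as V = π/6 · L · W · D; a minimal sketch with invented dimensions:

```python
import math

def ellipsoid_volume(length_cm, width_cm, depth_cm):
    """Ellipsoid approximation of single kidney volume:
    V = pi/6 * length * width * depth (result in mL, i.e. cm^3)."""
    return math.pi / 6 * length_cm * width_cm * depth_cm

# Illustrative enlarged ADPKD kidney measuring 18 x 10 x 9 cm
v = ellipsoid_volume(18, 10, 9)
```

The speed of this formula explains its clinical appeal; the study's finding is that the rough estimate it yields is too imprecise for trial endpoints compared with planimetry.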
Basaki, Kinga; Alkumru, Hasan; De Souza, Grace; Finer, Yoav
To assess the three-dimensional (3D) accuracy and clinical acceptability of implant definitive casts fabricated using a digital impression approach and to compare the results with those of a conventional impression method in a partially edentulous condition. A mandibular reference model was fabricated with implants in the first premolar and molar positions to simulate a patient with bilateral posterior edentulism. Ten implant-level impressions per method were made using either an intraoral scanner with scanning abutments for the digital approach or an open-tray technique and polyvinylsiloxane material for the conventional approach. 3D analysis and comparison of implant location on resultant definitive casts were performed using laser scanner and quality control software. The inter-implant distances and interimplant angulations for each implant pair were measured for the reference model and for each definitive cast (n = 20 per group); these measurements were compared to calculate the magnitude of error in 3D for each definitive cast. The influence of implant angulation on definitive cast accuracy was evaluated for both digital and conventional approaches. Statistical analysis was performed using t test (α = .05) for implant position and angulation. Clinical qualitative assessment of accuracy was done via the assessment of the passivity of a master verification stent for each implant pair, and significance was analyzed using chi-square test (α = .05). A 3D error of implant positioning was observed for the two impression techniques vs the reference model, with mean ± standard deviation (SD) error of 116 ± 94 μm and 56 ± 29 μm for the digital and conventional approaches, respectively (P = .01). In contrast, the inter-implant angulation errors were not significantly different between the two techniques (P = .83). Implant angulation did not have a significant influence on definitive cast accuracy within either technique (P = .64). 
The verification stent demonstrated acceptable passive fit for 11 out of 20 casts and 18 out of 20 casts for the digital and conventional methods, respectively (P = .01). Definitive casts fabricated using the digital impression approach were less accurate than those fabricated from the conventional impression approach for this simulated clinical scenario. A significant number of definitive casts generated by the digital technique did not meet clinically acceptable accuracy for the fabrication of a multiple implant-supported restoration.
Dong, Dao-Ran; Hao, Mei-Na; Li, Cheng; Peng, Ze; Liu, Xia; Wang, Gui-Ping; Ma, An-Lin
2015-01-01
The aim of the present study was to investigate the combination of certain serological markers (Forns’ index; FI), FibroScan® and acoustic radiation force impulse elastography (ARFI) in the assessment of liver fibrosis in patients with hepatitis B, and to explore the impact of inflammatory activity and steatosis on the accuracy of these diagnostic methods. Eighty-one patients who had been diagnosed with hepatitis B were recruited and the stage of fibrosis was determined by biopsy. The diagnostic accuracy of FI, FibroScan and ARFI, as well as that of the combination of these methods, was evaluated based on the conformity of the results from these tests with those of biopsies. The effect of concomitant inflammation on diagnostic accuracy was also investigated by dividing the patients into two groups based on the grade of inflammation (G<2 and G≥2). The overall univariate correlation between steatosis and the diagnostic value of the three methods was also evaluated. There was a significant association between the stage of fibrosis and the results obtained using ARFI and FibroScan (Kruskal-Wallis; P<0.001 for all patients), and FI (t-test, P<0.001 for all patients). The combination of FI with ARFI/FibroScan increased the predictive accuracy with a fibrosis stage of S≥2 or cirrhosis. There was a significant correlation between the grade of inflammation and the results obtained using ARFI and FibroScan (Kruskal-Wallis, P<0.001 for all patients), and FI (t-test; P<0.001 for all patients). No significant correlation was detected between the measurements obtained using ARFI, FibroScan and FI, and steatosis (r=−0.100, P=0.407; r=0.170, P=0.163; and r=0.154, P=0.216, respectively). ARFI was shown to be as effective in the diagnosis of liver fibrosis as FibroScan or FI, and the combination of ARFI or FibroScan with FI may improve the accuracy of diagnosis. The presence of inflammatory activity, but not that of steatosis, may affect the diagnostic accuracy of these methods. 
PMID:25651500
Bois, Aaron J; Fening, Stephen D; Polster, Josh; Jones, Morgan H; Miniaci, Anthony
2012-11-01
Glenoid support is critical for stability of the glenohumeral joint. An accepted noninvasive method of quantifying glenoid bone loss does not exist. To perform independent evaluations of the reliability and accuracy of standard 2-dimensional (2-D) and 3-dimensional (3-D) computed tomography (CT) measurements of glenoid bone deficiency. Descriptive laboratory study. Two sawbone models were used; one served as a model for 2 anterior glenoid defects and the other for 2 anteroinferior defects. For each scapular model, predefect and defect data were collected for a total of 6 data sets. Each sample underwent 3-D laser scanning followed by CT scanning. Six physicians measured linear indicators of bone loss (defect length and width-to-length ratio) on both 2-D and 3-D CT and quantified bone loss using the glenoid index method on 2-D CT and using the glenoid index, ratio, and Pico methods on 3-D CT. The intraclass correlation coefficient (ICC) was used to assess agreement, and percentage error was used to compare radiographic and true measurements. With use of 2-D CT, the glenoid index and defect length measurements had the least percentage error (-4.13% and 7.68%, respectively); agreement was very good (ICC, .81) for defect length only. With use of 3-D CT, defect length (0.29%) and the Pico(1) method (4.93%) had the least percentage error. Agreement was very good for all linear indicators of bone loss (range, .85-.90) and for the ratio linear and Pico surface area methods used to quantify bone loss (range, .84-.98). Overall, 3-D CT results demonstrated better agreement and accuracy compared to 2-D CT. None of the methods assessed in this study using 2-D CT was found to be valid, and therefore, 2-D CT is not recommended for these methods. However, the length of glenoid defects can be reliably and accurately measured on 3-D CT. 
The Pico and ratio techniques are most reliable; however, the Pico(1) method accurately quantifies glenoid bone loss in both the anterior and anteroinferior locations. Future work is required to implement valid imaging techniques of glenoid bone loss into clinical practice. This is one of the only studies to date that has investigated both the reliability and accuracy of multiple indicators and quantification methods that evaluate glenoid bone loss in anterior glenohumeral instability. These data are critical to ensure valid methods are used for preoperative assessment and to determine when a glenoid bone augmentation procedure is indicated.
Comparison of fecal egg counting methods in four livestock species.
Paras, Kelsey L; George, Melissa M; Vidyashankar, Anand N; Kaplan, Ray M
2018-06-15
Gastrointestinal nematode parasites are important pathogens of all domesticated livestock species. Fecal egg counts (FEC) are routinely used for evaluating anthelmintic efficacy and for making targeted anthelmintic treatment decisions. Numerous FEC techniques exist and vary in precision and accuracy. These performance characteristics are especially important when performing fecal egg count reduction tests (FECRT). The objective of this study was to compare the accuracy and precision of three commonly used FEC methods and determine if differences existed among livestock species. In this study, we evaluated the modified-Wisconsin, 3-chamber (high-sensitivity) McMaster, and Mini-FLOTAC methods in cattle, sheep, horses, and llamas in three phases. In the first phase, we performed an egg-spiking study to assess the egg recovery rate and accuracy of the different FEC methods. In the second phase, we examined clinical samples from four different livestock species and completed multiple replicate FEC using each method. In the last phase, we assessed the cheesecloth straining step as a potential source of egg loss. In the egg-spiking study, the Mini-FLOTAC recovered 70.9% of the eggs, which was significantly higher than either the McMaster (P = 0.002) or Wisconsin (P = 0.002). In the clinical samples from ruminants, Mini-FLOTAC consistently yielded the highest EPG, revealing a significantly higher level of egg recovery (P < 0.0001). For horses and llamas, both McMaster and Mini-FLOTAC yielded significantly higher EPG than Wisconsin (P < 0.0001, P < 0.0001, P < 0.001, and P = 0.024). Mini-FLOTAC was the most accurate method and was the most precise test for both species of ruminants. The Wisconsin method was the most precise for horses and McMaster was more precise for llama samples. We compared the Wisconsin and Mini-FLOTAC methods using a modified technique where both methods were performed using either the Mini-FLOTAC sieve or cheesecloth. 
The differences in the estimated mean EPG on the log scale between the Wisconsin and Mini-FLOTAC methods when cheesecloth was used (P < 0.0001) and when cheesecloth was excluded (P < 0.0001) were significant, providing strong evidence that the straining step is an important source of error. The high accuracy and precision demonstrated in this study for the Mini-FLOTAC suggest that this method can be recommended for routine use in all host species. The benefits of Mini-FLOTAC will be especially relevant when high accuracy is important, such as when performing FECRT. Copyright © 2018 Elsevier B.V. All rights reserved.
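All of the FEC techniques compared above convert a chamber count into eggs per gram by scaling for the fraction of flotation suspension actually examined and for the faecal mass used; a hedged sketch (the volumes and count are illustrative, not the study's protocol):

```python
def eggs_per_gram(eggs_counted, sample_grams, flotation_ml, counted_ml):
    """EPG: counted eggs scaled by the fraction of suspension examined,
    divided by the faecal mass in the suspension."""
    return eggs_counted * (flotation_ml / counted_ml) / sample_grams

# Hypothetical McMaster-style run: 4 g of faeces in 26 mL of flotation
# fluid, 12 eggs seen across two 0.15-mL counting grids (0.3 mL read).
result = eggs_per_gram(12, sample_grams=4, flotation_ml=26, counted_ml=0.3)
```

The multiplication factor implied by the chosen volumes (here 26 / (0.3 × 4) ≈ 21.7 per egg) largely determines a method's detection limit, which is why low-multiplier methods such as Mini-FLOTAC tend to be more precise.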
The U.S. Department of Agriculture Automated Multiple-Pass Method accurately assesses sodium intakes
USDA-ARS?s Scientific Manuscript database
Accurate and practical methods to monitor sodium intake of the U.S. population are critical given current sodium reduction strategies. While the gold standard for estimating sodium intake is the 24 hour urine collection, few studies have used this biomarker to evaluate the accuracy of a dietary ins...
Pearson, Lauren; Factor, Rachel E; White, Sandra K; Walker, Brandon S; Layfield, Lester J; Schmidt, Robert L
2018-06-06
Rapid on-site evaluation (ROSE) has been shown to improve adequacy rates and reduce needle passes. ROSE is often performed by cytopathologists, who have limited availability and may be costlier than alternatives. Several recent studies examined the use of alternative evaluators (AEs) for ROSE. A summary of this information could help inform guidelines regarding the use of AEs. The objective was to assess the accuracy of AEs compared to cytopathologists in assessing the adequacy of specimens during ROSE. This was a systematic review and meta-analysis. Reporting and study quality were assessed using the STARD guidelines and QUADAS-2. All steps were performed independently by two evaluators. Summary estimates were obtained using the hierarchical method in Stata v14. Heterogeneity was evaluated using Higgins' I2 statistic. The systematic review identified 13 studies that were included in the meta-analysis. Summary estimates of sensitivity and specificity for AEs were 97% (95% CI: 92-99%) and 83% (95% CI: 68-92%). There was wide variation in accuracy statistics between studies (I2 = 0.99). AEs sometimes have accuracy that is close to that of cytopathologists. However, there is wide variability between studies, so it is not possible to provide a broad guideline regarding the use of AEs. © 2018 S. Karger AG, Basel.
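Higgins' I2 heterogeneity statistic used above follows from study-level effects and their variances via Cochran's Q; a minimal sketch (the effect sizes and variances are invented, not the 13 studies' data):

```python
def i_squared(effects, variances):
    """Higgins' I^2 (%) from Cochran's Q with inverse-variance weights."""
    w = [1 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Illustrative study-level effects (e.g. logit sensitivities) and variances
effects = [2.1, 3.0, 1.2, 2.8]
variances = [0.05, 0.04, 0.06, 0.05]
i2 = i_squared(effects, variances)
```

I2 expresses the share of total variation attributable to between-study heterogeneity rather than sampling error; values near 100% (or 0.99 as a proportion, as reported above) signal that a single pooled estimate should be interpreted cautiously.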
Validity of Newborn Clinical Assessment to Determine Gestational Age in Bangladesh.
Lee, Anne Cc; Mullany, Luke C; Ladhani, Karima; Uddin, Jamal; Mitra, Dipak; Ahmed, Parvez; Christian, Parul; Labrique, Alain; DasGupta, Sushil K; Lokken, R Peter; Quaiyum, Mohammed; Baqui, Abdullah H
2016-07-01
Gestational age (GA) is frequently unknown or inaccurate in pregnancies in low-income countries. Early identification of preterm infants may help link them to potentially life-saving interventions. We conducted a validation study in a community-based birth cohort in rural Bangladesh. GA was determined by pregnancy ultrasound (<20 weeks). Community health workers conducted home visits (<72 hours) to assess physical/neuromuscular signs and measure anthropometrics. The distribution, agreement, and diagnostic accuracy of different clinical methods of GA assessment were determined compared with early ultrasound dating. In the live-born cohort (n = 1066), the mean ultrasound GA was 39.1 weeks (SD 2.0) and prevalence of preterm birth (<37 weeks) was 11.4%. Among assessed newborns (n = 710), the mean ultrasound GA was 39.3 weeks (SD 1.6) (8.3% preterm) and by Ballard scoring the mean GA was 38.9 weeks (SD 1.7) (12.9% preterm). The average bias of the Ballard was -0.4 weeks; however, 95% limits of agreement were wide (-4.7 to 4.0 weeks) and the accuracy for identifying preterm infants was low (sensitivity 16%, specificity 87%). Simplified methods for GA assessment had poor diagnostic accuracy for identifying preterm births (community health worker prematurity scorecard [sensitivity/specificity: 70%/27%]; Capurro [5%/96%]; Eregie [75%/58%]; Bhagwat [18%/87%], foot length <75 mm [64%/35%]; birth weight <2500 g [54%/82%]). Neonatal anthropometrics had poor to fair performance for classifying preterm infants (areas under the receiver operating curve 0.52-0.80). Newborn clinical assessment of GA is challenging at the community level in low-resource settings. Anthropometrics are also inaccurate surrogate markers for GA in settings with high rates of fetal growth restriction. Copyright © 2016 by the American Academy of Pediatrics.
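The average bias and 95% limits of agreement reported for the Ballard score follow the Bland-Altman approach; a small sketch with invented GA pairs (not the cohort's data):

```python
import statistics

def limits_of_agreement(method_a, method_b):
    """Bland-Altman: mean difference (bias) and 95% limits bias +/- 1.96 SD."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative GA estimates (weeks): clinical score vs. ultrasound
score = [38.0, 40.0, 36.5, 39.0, 41.0, 37.0]
ultra = [39.0, 39.5, 38.0, 39.0, 40.0, 38.5]
bias, lo, hi = limits_of_agreement(score, ultra)
```

A small bias with wide limits, as in the study (-0.4 weeks, but -4.7 to 4.0 weeks), means a method can look unbiased on average while still misclassifying many individual infants.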
Pine, P S; Boedigheimer, M; Rosenzweig, B A; Turpaz, Y; He, Y D; Delenstarr, G; Ganter, B; Jarnagin, K; Jones, W D; Reid, L H; Thompson, K L
2008-11-01
Effective use of microarray technology in clinical and regulatory settings is contingent on the adoption of standard methods for assessing performance. The MicroArray Quality Control project evaluated the repeatability and comparability of microarray data on the major commercial platforms and laid the groundwork for the application of microarray technology to regulatory assessments. However, methods for assessing performance that are commonly applied to diagnostic assays used in laboratory medicine remain to be developed for microarray assays. A reference system for microarray performance evaluation and process improvement was developed that includes reference samples, metrics and reference datasets. The reference material is composed of two mixes of four different rat tissue RNAs that allow defined target ratios to be assayed using a set of tissue-selective analytes that are distributed along the dynamic range of measurement. The diagnostic accuracy of detected changes in expression ratios, measured as the area under the curve from receiver operating characteristic plots, provides a single commutable value for comparing assay specificity and sensitivity. The utility of this system for assessing overall performance was evaluated for relevant applications like multi-laboratory proficiency testing programs and single-laboratory process drift monitoring. The diagnostic accuracy of detection of a 1.5-fold change in signal level was found to be a sensitive metric for comparing overall performance. This test approaches the technical limit for reliable discrimination of differences between two samples using this technology. We describe a reference system that provides a mechanism for internal and external assessment of laboratory proficiency with microarray technology and is translatable to performance assessments on other whole-genome expression arrays used for basic and clinical research.
Sánchez-Rodríguez, Dolores; Annweiler, Cédric; Ronquillo-Moreno, Natalia; Tortosa-Rodríguez, Andrea; Guillén-Solà, Anna; Vázquez-Ibar, Olga; Escalada, Ferran; Muniesa, Josep M; Marco, Ester
Malnutrition is a prevalent condition related to adverse outcomes in older people. Our aim was to compare the diagnostic capacity of the malnutrition criteria of the European Society for Parenteral and Enteral Nutrition (ESPEN) with that of other classical diagnostic tools. Cohort study of 102 consecutive in-patients aged ≥70 years admitted for post-acute rehabilitation. Patients were considered malnourished if their Mini-Nutritional Assessment-Short Form (MNA-SF) score was ≤11 and serum albumin was <3 g/dL, or if MNA-SF was ≤11, serum albumin was <3 g/dL, and the usual clinical signs and symptoms of malnutrition were present. Sensitivity, specificity, positive and negative predictive values, accuracy, likelihood ratios, and kappa values were calculated for both methods and compared with the ESPEN consensus. Of 102 eligible in-patients, 88 fulfilled the inclusion criteria and were identified as "at risk" by MNA-SF. The malnutrition diagnosis was confirmed in 11.6% and 10.5% of the patients using the classical methods, whereas 19.3% were malnourished according to the ESPEN criteria. Combined with low albumin levels, the diagnosis showed 57.9% sensitivity, 64.5% specificity, 85.9% negative predictive value, 0.63 accuracy (fair validity, low range), and a kappa index of 0.163 (poor ESPEN agreement). The combination of MNA-SF, low albumin, and clinical malnutrition showed 52.6% sensitivity, 88.3% specificity, 88.3% negative predictive value, 0.82 accuracy (fair validity, low range), and a kappa index of 0.43 (fair ESPEN agreement). Malnutrition was almost twice as prevalent when diagnosed by the ESPEN consensus as when diagnosed by the classical assessment methods. The classical methods showed fair validity and poor agreement with the ESPEN consensus in assessing malnutrition in geriatric post-acute care. Copyright © 2018 Elsevier B.V. All rights reserved.
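The sensitivity, specificity, predictive values, accuracy, and likelihood ratios above all derive from a 2×2 table against the reference standard; a minimal sketch (the counts are illustrative approximations chosen to sum to 88, not the study's actual table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test metrics."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sens": sens,
        "spec": spec,
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "acc": (tp + tn) / (tp + fp + fn + tn),   # overall accuracy
        "lr_pos": sens / (1 - spec),              # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,              # negative likelihood ratio
    }

# Illustrative counts only (n = 88), test vs. reference standard
m = diagnostic_metrics(tp=10, fp=8, fn=9, tn=61)
```

Unlike predictive values, the likelihood ratios do not depend on prevalence, which makes them the more portable summary when comparing diagnostic criteria across cohorts.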
Otitis Media Diagnosis for Developing Countries Using Tympanic Membrane Image-Analysis
Myburgh, Hermanus C.; van Zijl, Willemien H.; Swanepoel, DeWet; Hellström, Sten; Laurent, Claude
2016-01-01
Background Otitis media is one of the most common childhood diseases worldwide, but because of a lack of doctors and health personnel in developing countries it is often misdiagnosed or not diagnosed at all. This may lead to serious and life-threatening complications. There is thus a need for an automated, computer-based image-analysis system that could assist in making accurate otitis media diagnoses anywhere. Methods A method for automated diagnosis of otitis media is proposed. The method uses image-processing techniques to classify otitis media. The system is trained using high-quality pre-assessed images of tympanic membranes, captured by digital video-otoscopes, and classifies undiagnosed images into five otitis media categories based on predefined signs. Several verification tests analyzed the classification capability of the method. Findings An accuracy of 80.6% was achieved for images taken with commercial video-otoscopes, while an accuracy of 78.7% was achieved for images captured on-site with a low-cost custom-made video-otoscope. Interpretation The high accuracy of the proposed otitis media classification system compares well with the classification accuracy of general practitioners and pediatricians (~64% to 80%) using traditional otoscopes, and it therefore holds promise for automated diagnosis of otitis media in medically underserved populations. PMID:27077122
NASA Astrophysics Data System (ADS)
Robleda Prieto, G.; Pérez Ramos, A.
2015-02-01
It can be difficult to represent an architectural idea, a solution, a detail, or a newly created element "on paper," depending on the complexity of what is to be conveyed through its graphical representation, and it may be even harder to represent an existing reality (a building, a detail, etc.), at least with an acceptable degree of definition and accuracy. As a solution to this problem, this paper presents a methodology for collecting measurement data by combining different methods and techniques in order to obtain the characteristic geometry of architectural elements, especially those that are highly decorated and/or geometrically complex, and for assessing the accuracy of the results obtained, at a sufficient level of accuracy and at moderate cost. In addition, a 3D recovered model can be obtained that provides strong support for generating orthoimages, beyond the point clouds produced by more expensive methods such as laser scanning. This methodology was applied to the case study of the 3D virtual reconstruction of the main façade of a medieval church, chosen because of the geometric complexity of many of its elements, such as the main doorway with its archivolts and many details, as well as the rose window located above it, which is inaccessible due to its height.
NASA Astrophysics Data System (ADS)
Liu, Tao; Im, Jungho; Quackenbush, Lindi J.
2015-12-01
This study provides a novel approach to individual tree crown delineation (ITCD) using airborne Light Detection and Ranging (LiDAR) data in dense natural forests using two main steps: crown boundary refinement based on a proposed Fishing Net Dragging (FiND) method, and segment merging based on boundary classification. FiND starts with approximate tree crown boundaries derived using a traditional watershed method with Gaussian filtering and refines these boundaries using an algorithm that mimics how a fisherman drags a fishing net. Random forest machine learning is then used to classify boundary segments into two classes: boundaries between trees and boundaries between branches that belong to a single tree. Three groups of LiDAR-derived features were used in the classification: two derived from the pseudo waveform generated along the crown boundaries and one from a canopy height model (CHM). The proposed ITCD approach was tested using LiDAR data collected over a mountainous region in the Adirondack Park, NY, USA. Overall accuracy of boundary classification was 82.4%. Features derived from the CHM were generally more important in the classification than the features extracted from the pseudo waveform. A comprehensive accuracy assessment scheme for ITCD was also introduced by considering both area of crown overlap and crown centroids. Accuracy assessment using this new scheme shows that the proposed ITCD approach achieved overall accuracies of 74% and 78% for deciduous and mixed forests, respectively.
ERIC Educational Resources Information Center
Petersen, Douglas B.; Chanthongthip, Helen; Ukrainetz, Teresa A.; Spencer, Trina D.; Steeve, Roger W.
2017-01-01
Purpose: This study investigated the classification accuracy of a concentrated English narrative dynamic assessment (DA) for identifying language impairment (LI). Method: Forty-two Spanish-English bilingual kindergarten to third-grade children (10 LI and 32 with no LI) were administered two 25-min DA test-teach-test sessions. Pre- and posttest…
ERIC Educational Resources Information Center
Henry, Beverly W.; Smith, Thomas J.
2010-01-01
Objective: To develop an instrument to assess client-centered counseling behaviors (skills) of student-counselors in a standardized patient (SP) exercise. Methods: Descriptive study of the accuracy and utility of a newly developed counseling evaluation instrument. Study participants included 11 female student-counselors at a Midwestern…
Lee, Jack; Zee, Benny Chung Ying; Li, Qing
2013-01-01
Diabetic retinopathy is a major cause of blindness. Proliferative diabetic retinopathy is a result of severe vascular complication and is visible as neovascularization of the retina. Automatic detection of such new vessels would be useful for the severity grading of diabetic retinopathy, and it is an important part of the screening process to identify those who may require immediate treatment for their diabetic retinopathy. We propose a novel new-vessel detection method including statistical texture analysis (STA), high-order spectrum analysis (HOS), and fractal analysis (FA); most importantly, we show that by incorporating their associated interactions the accuracy of new-vessel detection can be greatly improved. To assess its performance, the sensitivity, specificity, and accuracy (AUC) were obtained: 96.3%, 99.1%, and 98.5% (99.3%), respectively. The proposed method improves the accuracy of new-vessel detection significantly over previous methods. The algorithm can be automated and is valuable for detecting relatively severe cases of diabetic retinopathy among diabetes patients.
Measuring true localization accuracy in super resolution microscopy with DNA-origami nanostructures
NASA Astrophysics Data System (ADS)
Reuss, Matthias; Fördős, Ferenc; Blom, Hans; Öktem, Ozan; Högberg, Björn; Brismar, Hjalmar
2017-02-01
A common method to assess the performance of (super resolution) microscopes is to use the localization precision of emitters as an estimate for the achieved resolution. Naturally, this is widely used in super resolution methods based on single molecule stochastic switching. This concept suffers from the fact that it is hard to calibrate measurements against a real sample (a phantom), because the true absolute positions of emitters are almost always unknown. For this reason, resolution estimates are potentially biased in an image, since one is blind to the true position accuracy, i.e., the deviation of the measured positions from the true positions. We have solved this issue by imaging nanorods fabricated with DNA origami. The nanorods used are designed to have emitters attached at each end at a well-defined and highly conserved distance. These structures are widely used to gauge localization precision. Here, we additionally determined the true achievable localization accuracy and compared this figure of merit to localization precision values for two common super-resolution microscopy methods, STED and STORM.
Evaluation of 4D-CT lung registration.
Kabus, Sven; Klinder, Tobias; Murphy, Keelin; van Ginneken, Bram; van Lorenz, Cristian; Pluim, Josien P W
2009-01-01
Non-rigid registration accuracy assessment is typically performed by evaluating the target registration error at manually placed landmarks. For 4D-CT lung data, we compare two sets of landmark distributions: a smaller set primarily defined on vessel bifurcations, as commonly described in the literature, and a larger set well distributed throughout the lung volume. For six different registration schemes (three in-house schemes and three schemes frequently used by the community), the landmark error is evaluated and found to depend significantly on the distribution of the landmarks. In particular, lung regions near the pleura show a target registration error three times larger than that in near-mediastinal regions. While the inter-method variability at the landmark positions is rather small, the methods show discriminating differences with respect to consistency and local volume change. In conclusion, both a well-distributed set of landmarks and a deformation vector field analysis are necessary for reliable non-rigid registration accuracy assessment.
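The landmark-based error described above reduces to per-landmark Euclidean distances between corresponding points after registration. A minimal sketch with hypothetical coordinates in millimeters:

```python
import math

def target_registration_error(landmarks_fixed, landmarks_mapped):
    """Per-landmark Euclidean distance between corresponding points after
    registration; the mean over the set is the usual summary statistic."""
    errors = [math.dist(a, b)
              for a, b in zip(landmarks_fixed, landmarks_mapped)]
    return errors, sum(errors) / len(errors)

# Hypothetical landmark pairs (mm): fixed-image positions vs. positions
# mapped from the moving image by the registration under evaluation.
fixed = [(10.0, 20.0, 30.0), (40.0, 50.0, 60.0)]
mapped = [(10.0, 20.0, 31.0), (40.0, 53.0, 64.0)]
errs, mean_tre = target_registration_error(fixed, mapped)
print(errs, mean_tre)  # [1.0, 5.0] 3.0
```

A well-distributed landmark set simply means many such pairs spread over the whole lung volume, so regional error differences (e.g. near-pleural vs. near-mediastinal) become visible.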
Caird, Jeff K; Edwards, Christopher J; Creaser, Janet I; Horrey, William J
2005-01-01
A modified version of the flicker technique to induce change blindness was used to examine the effects of time constraints on decision-making accuracy at intersections on a total of 62 young (18-25 years), middle-aged (26-64 years), young-old (65-73 years), and old-old (74+ years) drivers. Thirty-six intersection photographs were manipulated so that one object (i.e., pedestrian, vehicle, sign, or traffic control device) in the scene would change when the images were alternated for either 5 or 8 s using the modified flicker method. Young and middle-aged drivers made significantly more correct decisions than did young-old and old-old drivers. Logistic regression analysis of the data indicated that age and/or time were significant predictors of decision performance in 14 of the 36 intersections. Actual or potential applications of this research include driving assessment and crash investigation.
Land use change detection based on multi-date imagery from different satellite sensor systems
NASA Technical Reports Server (NTRS)
Stow, Douglas A.; Collins, Doretta; Mckinsey, David
1990-01-01
An empirical study is conducted to assess the accuracy of land use change detection using satellite image data acquired ten years apart by sensors with differing spatial resolutions. The primary goals of the investigation were to (1) compare standard change detection methods applied to image data of varying spatial resolution, (2) assess whether to transform the raster grid of the higher resolution image data to that of the lower resolution raster grid or vice versa in the registration process, (3) determine if Landsat/Thematic Mapper or SPOT/High Resolution Visible multispectral data provide more accurate detection of land use changes when registered to historical Landsat/MSS data. It is concluded that image ratioing of multisensor, multidate satellite data produced higher change detection accuracies than did principal components analysis, and that it is useful as a land use change enhancement method.
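Image ratioing, the change-enhancement method found most accurate above, divides each pixel of one date by the co-registered pixel of the other and flags ratios far from unity. A sketch with toy 2×2 "images" and illustrative thresholds, not the study's data or parameters:

```python
# Small constant to avoid division by zero in dark pixels.
EPS = 1e-6

def ratio_change(img_t1, img_t2, low=0.5, high=2.0):
    """Pixelwise band ratio between two co-registered single-band images;
    pixels whose ratio falls outside [low, high] are flagged as changed.
    Thresholds are illustrative and would be tuned per scene."""
    change = []
    for row1, row2 in zip(img_t1, img_t2):
        change.append([not (low <= b / (a + EPS) <= high)
                       for a, b in zip(row1, row2)])
    return change

before = [[100, 102], [98, 100]]
after = [[101, 210], [99, 40]]
print(ratio_change(before, after))  # [[False, True], [False, True]]
```

Real use would apply this band-by-band to multispectral rasters after radiometric normalization; the stability of the ratio under uniform illumination differences is why ratioing works well across sensors and dates.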
Diagnostic accuracy of MRI in the measurement of glenoid bone loss.
Gyftopoulos, Soterios; Hasan, Saqib; Bencardino, Jenny; Mayo, Jason; Nayyar, Samir; Babb, James; Jazrawi, Laith
2012-10-01
The purpose of this study is to assess the accuracy of MRI quantification of glenoid bone loss and to compare the diagnostic accuracy of MRI to CT in the measurement of glenoid bone loss. MRI, CT, and 3D CT examinations of 18 cadaveric glenoids were obtained after the creation of defects along the anterior and anteroinferior glenoid. The defects were measured by three readers separately and blindly using the circle method. These measurements were compared with measurements made on digital photographic images of the cadaveric glenoids. Paired sample Student t tests were used to compare the imaging modalities. Concordance correlation coefficients were also calculated to measure interobserver agreement. Our data show that MRI could be used to accurately measure glenoid bone loss with a small margin of error (mean, 3.44%; range, 2.06-5.94%) in estimated percentage loss. MRI accuracy was similar to that of both CT and 3D CT for glenoid loss measurements in our study for the readers familiar with the circle method, with 1.3% as the maximum expected difference in accuracy of the percentage bone loss between the different modalities (95% confidence). Glenoid bone loss can be accurately measured on MRI using the circle method. The MRI quantification of glenoid bone loss compares favorably to measurements obtained using 3D CT and CT. The accuracy of the measurements correlates with the level of training, and a learning curve is expected before mastering this technique.
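One common variant of the circle method models the defect as a circular segment of the best-fit inferior-glenoid circle, cut off by a straight chord. The sketch below is an assumption about the exact protocol (the study's measurement details are not given here), with hypothetical dimensions in millimeters:

```python
import math

def percent_bone_loss(radius, chord_distance):
    """Percentage of the best-fit circle's area removed by a straight
    chord at `chord_distance` from the circle center (circular-segment
    area formula). A sketch of one circle-method variant, not the exact
    protocol of this study."""
    r, d = radius, chord_distance
    # Area of the circular segment beyond the chord.
    segment = r * r * math.acos(d / r) - d * math.sqrt(r * r - d * d)
    return 100.0 * segment / (math.pi * r * r)

# Chord through the center removes exactly half the circle.
print(round(percent_bone_loss(12.0, 0.0), 1))  # 50.0
# A chord 9 mm from the center of a 12 mm circle: a small defect.
print(round(percent_bone_loss(12.0, 9.0), 1))
```

The formula is exact geometry; the clinical step it stands in for is fitting the circle to the intact inferior glenoid and locating the chord along the defect edge on the MRI or CT slice.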
Is STAPLE algorithm confident to assess segmentation methods in PET imaging?
NASA Astrophysics Data System (ADS)
Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien
2015-12-01
Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observers variability. In this paper, we evaluated how this algorithm could accurately estimate the ground truth in PET imaging. Complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. Consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentations results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentations results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.
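STAPLE itself estimates the consensus and per-rater sensitivity/specificity jointly by expectation-maximization; as a far simpler stand-in that illustrates the multi-observer setting, the sketch below builds a majority-vote consensus from binary delineations and scores each rater against it with the Dice overlap (toy flattened masks, not PET data):

```python
def majority_vote(masks):
    """Voxelwise majority vote over binary masks (1 = inside the tumor)."""
    n = len(masks)
    return [1 if sum(col) * 2 > n else 0 for col in zip(*masks)]

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

# Three hypothetical manual delineations of the same (flattened) volume.
raters = [
    [0, 1, 1, 1, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
]
consensus = majority_vote(raters)
print(consensus)  # [0, 1, 1, 1, 0, 0]
print([round(dice(r, consensus), 2) for r in raters])
```

STAPLE improves on this by weighting each rater by its estimated reliability instead of counting all votes equally, which is why its consensus can be more accurate than any single manual delineation.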
Good Practices for Learning to Recognize Actions Using FV and VLAD.
Wu, Jianxin; Zhang, Yu; Lin, Weiyao
2016-12-01
High dimensional representations such as Fisher vectors (FV) and vectors of locally aggregated descriptors (VLAD) have shown state-of-the-art accuracy for action recognition in videos. The high dimensionality, on the other hand, also causes computational difficulties when scaling up to large-scale video data. This paper makes three lines of contributions to learning to recognize actions using high dimensional representations. First, we reviewed several existing techniques that improve upon FV or VLAD in image classification, and performed extensive empirical evaluations to assess their applicability for action recognition. Our analyses of these empirical results show that normality and bimodality are essential to achieve high accuracy. Second, we proposed a new pooling strategy for VLAD and three simple, efficient, and effective transformations for both FV and VLAD. Both proposed methods have shown higher accuracy than the original FV/VLAD method in extensive evaluations. Third, we proposed and evaluated new feature selection and compression methods for the FV and VLAD representations. This strategy uses only 4% of the storage of the original representation, but achieves comparable or even higher accuracy. Based on these contributions, we recommend a set of good practices for action recognition in videos for practitioners in this field.
Dissolved oxygen content prediction in crab culture using a hybrid intelligent method
Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang
2016-01-01
A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206
Zhang, Junhua; Wang, Yuanyuan; Shi, Xinling
2009-12-01
A modified graph cut was proposed under an elliptical shape constraint to segment cervical lymph nodes on sonograms, and its effect on the measurement of the short-axis to long-axis ratio (S/L) was investigated using the relative ultimate measurement accuracy (RUMA). Under the same user inputs, the proposed algorithm successfully segmented all 60 sonograms tested, while the traditional graph cut failed. The mean RUMA resulting from the developed method was comparable to that resulting from manual segmentation. Results indicated that utilizing the elliptical shape prior could appreciably improve the graph cut for node segmentation, and the proposed method satisfied the accuracy requirement of S/L measurement.
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
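The paper's accuracy metric, the voxel-wise absolute relative change (RC) between the MR-based and the reference CT-based attenuation-corrected PET volumes, can be sketched as follows (flattened toy values in arbitrary units, not the study's data):

```python
def mean_abs_relative_change(pet_mr, pet_ct):
    """Mean and SD of the voxel-wise absolute relative change, in percent:
    RC = |PET_MR - PET_CT| / PET_CT * 100. Zero-valued reference voxels
    are excluded to avoid division by zero."""
    rc = [abs(m - c) / c * 100.0 for m, c in zip(pet_mr, pet_ct) if c > 0]
    mean = sum(rc) / len(rc)
    var = sum((x - mean) ** 2 for x in rc) / len(rc)
    return mean, var ** 0.5

# Toy flattened volumes: MR-based reconstruction vs. CT-based reference.
mr = [10.2, 19.6, 30.3, 39.0]
ct = [10.0, 20.0, 30.0, 40.0]
print(mean_abs_relative_change(mr, ct))
```

In practice the metric would be restricted to a brain mask; reproducibility is then assessed by computing the same statistic between PET volumes reconstructed with μ-maps from different visits.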
[Dental arch form reverting by four-point method].
Pan, Xiao-Gang; Qian, Yu-Fen; Weng, Si-En; Feng, Qi-Ping; Yu, Quan
2008-04-01
To explore a simple method of reverting an individual dental arch form template for wire bending, the individual dental arch form was reverted by a four-point method. By defining the central points of the brackets on the bilateral lower second premolars and first molars, a certain individual dental arch form could be generated. The arch-form-generating procedure was then developed into computer software for printing the arch form. The four-point-method arch form was evaluated by comparison with direct model measurement on linear and angular parameters. The accuracy and reproducibility were assessed by paired t test and concordance correlation coefficient with the Medcalc 9.3 software package. The arch form obtained by the four-point method showed good accuracy and reproducibility (the linear concordance correlation coefficient was 0.9909 and the angular concordance correlation coefficient was 0.8419). The dental arch form reverted by the four-point method could reproduce the individual dental arch form.
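The agreement statistic used above, Lin's concordance correlation coefficient, measures how closely paired measurements fall on the identity line. A sketch with hypothetical arch-width measurements in millimeters, not the study's data:

```python
from statistics import mean

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient:
    CCC = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2).
    Equals 1 only for perfect agreement on the identity line."""
    mx, my = mean(x), mean(y)
    n = len(x)
    sx = sum((a - mx) ** 2 for a in x) / n
    sy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

template = [34.1, 35.0, 33.8, 36.2, 34.9]  # widths from printed template
direct = [34.0, 35.2, 33.9, 36.0, 35.0]    # direct model measurement
print(round(concordance_cc(template, direct), 3))
```

Unlike Pearson's r, the CCC penalizes both location and scale shifts, which is why it is preferred for method-agreement studies like this one.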
New methods for analyzing semantic graph based assessments in science education
NASA Astrophysics Data System (ADS)
Vikaros, Lance Steven
This research investigated how the scoring of semantic graphs (known by many as concept maps) could be improved and automated in order to address issues of inter-rater reliability and scalability. As part of the NSF funded SENSE-IT project to introduce secondary school science students to sensor networks (NSF Grant No. 0833440), semantic graphs illustrating how temperature change affects water ecology were collected from 221 students across 16 schools. The graphing task did not constrain students' use of terms, as is often done with semantic graph based assessment due to coding and scoring concerns. The graphing software used provided real-time feedback to help students learn how to construct graphs, stay on topic and effectively communicate ideas. The collected graphs were scored by human raters using assessment methods expected to boost reliability, which included adaptations of traditional holistic and propositional scoring methods, use of expert raters, topical rubrics, and criterion graphs. High levels of inter-rater reliability were achieved, demonstrating that vocabulary constraints may not be necessary after all. To investigate a new approach to automating the scoring of graphs, thirty-two different graph features characterizing graphs' structure, semantics, configuration and process of construction were then used to predict human raters' scoring of graphs in order to identify feature patterns correlated to raters' evaluations of graphs' topical accuracy and complexity. Results led to the development of a regression model able to predict raters' scoring with 77% accuracy, with 46% accuracy expected when used to score new sets of graphs, as estimated via cross-validation tests. Although such performance is comparable to other graph and essay based scoring systems, cross-context testing of the model and methods used to develop it would be needed before it could be recommended for widespread use. 
Still, the findings suggest techniques for improving the reliability and scalability of semantic graph based assessments without requiring constraint of how ideas are expressed.
Shulman, Stanley A; Smith, Jerome P
2002-01-01
A method is presented for the evaluation of the bias, variability, and accuracy of gas monitors. This method is based on using the parameters for the fitted response curves of the monitors. Thereby, variability between calibrations, between dates within each calibration period, and between different units can be evaluated at several different standard concentrations. By combining variability information with bias information, accuracy can be assessed. An example using carbon monoxide monitor data is provided. Although the most general statistical software required for these tasks is not available on a spreadsheet, when the same number of dates in a calibration period are evaluated for each monitor unit, the calculations can be done on a spreadsheet. An example of such calculations, together with the formulas needed for their implementation, is provided. In addition, the methods can be extended by use of appropriate statistical models and software to evaluate monitor trends within calibration periods, as well as consider the effects of other variables, such as humidity and temperature, on monitor variability and bias.
Brain collection, standardized neuropathologic assessment, and comorbidity in ADNI participants
Franklin, Erin E.; Perrin, Richard J.; Vincent, Benjamin; Baxter, Michael; Morris, John C.; Cairns, Nigel J.
2015-01-01
Introduction The Alzheimer’s Disease Neuroimaging Initiative Neuropathology Core (ADNI-NPC) facilitates brain donation, ensures standardized neuropathologic assessments, and maintains a tissue resource for research. Methods The ADNI-NPC coordinates with performance sites to promote autopsy consent, facilitate tissue collection and autopsy administration, and arrange sample delivery to the NPC, for assessment using NIA-AA neuropathologic diagnostic criteria. Results The ADNI-NPC has obtained 45 participant specimens and neuropathologic assessments have been completed in 36 to date. Challenges in obtaining consent at some sites have limited the voluntary autopsy rate to 58%. Among assessed cases, clinical diagnostic accuracy for Alzheimer disease (AD) is 97%; however, 58% show neuropathologic comorbidities. Discussion Challenges facing autopsy consent and coordination are largely resource-related. The neuropathologic assessments indicate that ADNI’s clinical diagnostic accuracy for AD is high; however, many AD cases have comorbidities that may impact the clinical presentation, course, and imaging and biomarker results. These neuropathologic data permit multimodal and genetic studies of these comorbidities to improve diagnosis and provide etiologic insights. PMID:26194314
MRI EVALUATION OF KNEE CARTILAGE
Rodrigues, Marcelo Bordalo; Camanho, Gilberto Luís
2015-01-01
Owing to the ability of magnetic resonance imaging (MRI) to characterize soft tissue noninvasively, it has become an excellent method for evaluating cartilage. The development of new and faster methods has allowed increased resolution and contrast in evaluating chondral structure, with greater diagnostic accuracy. In addition, physiological techniques for cartilage assessment have been developed that can detect early changes before the appearance of cracks and erosion. In this update article, the various techniques for chondral assessment using knee MRI are discussed and demonstrated. PMID:27022562
NASA Astrophysics Data System (ADS)
Zhao, Dekang; Wu, Qiang; Cui, Fangpeng; Xu, Hua; Zeng, Yifan; Cao, Yufei; Du, Yuanze
2018-04-01
Coal-floor water-inrush incidents account for a large proportion of coal mine disasters in northern China, and accurate risk assessment is crucial for safe coal production. A novel and promising assessment model for water inrush is proposed based on random forest (RF), a powerful intelligent machine-learning algorithm. RF has considerable advantages, including high classification accuracy and the capability to evaluate the importance of variables; in particular, it is robust in dealing with the complicated and non-linear problems inherent in risk assessment. In this study, the proposed model is applied to Panjiayao Coal Mine, northern China. Eight factors were selected as evaluation indices according to a systematic analysis of the geological conditions and a field survey of the study area. Risk assessment maps were generated based on RF, and the probabilistic neural network (PNN) model was also used for risk assessment as a comparison. The results demonstrate that the two methods are consistent in the risk assessment of water inrush at the mine, and that RF performs better than PNN, with an overall accuracy higher by 6.67%. It is concluded that RF is more practicable than PNN for assessing water-inrush risk. The presented method will be helpful in avoiding water inrush and can also be extended to various engineering applications.
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-03-04
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. 
The WLC operation yielded poor results.
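The weight-uncertainty propagation described above can be sketched as a Monte Carlo loop around a weighted linear combination (WLC); the criteria layers, weight values, and noise level below are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized criteria layers (rows = map cells, cols = criteria),
# each scaled to [0, 1]; names and values are illustrative only.
criteria = rng.random((100, 4))
base_weights = np.array([0.4, 0.3, 0.2, 0.1])  # e.g. from AHP pairwise comparisons

def wlc_mcs(criteria, base_weights, n_runs=1000, noise_sd=0.05, rng=rng):
    """Monte Carlo sensitivity analysis of a weighted linear combination.

    Each run perturbs the criterion weights with Gaussian noise, renormalizes
    them to sum to 1, and recomputes the susceptibility score for every cell.
    Returns the per-cell mean score and its standard deviation (uncertainty).
    """
    scores = np.empty((n_runs, criteria.shape[0]))
    for i in range(n_runs):
        w = base_weights + rng.normal(0.0, noise_sd, size=base_weights.size)
        w = np.clip(w, 0.0, None)
        w /= w.sum()                 # weights stay non-negative and sum to 1
        scores[i] = criteria @ w     # WLC: weighted sum of criterion values
    return scores.mean(axis=0), scores.std(axis=0)

mean_score, score_sd = wlc_mcs(criteria, base_weights)
```

Cells whose standard deviation is large are those whose susceptibility class is sensitive to the weighting, which is the kind of per-cell uncertainty surface the MCS stage produces.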
Moore, Tyler M.; Reise, Steven P.; Roalf, David R.; Satterthwaite, Theodore D.; Davatzikos, Christos; Bilker, Warren B.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Baron, Robert B.; Gur, Raquel E.; Gur, Ruben C.
2016-01-01
Traditional “paper-and-pencil” testing is imprecise in measuring speed and hence limited in assessing performance efficiency, but computerized testing permits precision in measuring itemwise response time. We present a method of scoring performance efficiency (combining information from accuracy and speed) at the item level. Using a community sample of 9,498 youths age 8-21, we calculated item-level efficiency scores on four neurocognitive tests, and compared the concurrent, convergent, discriminant, and predictive validity of these scores to simple averaging of standardized speed and accuracy-summed scores. Concurrent validity was measured by the scores' abilities to distinguish men from women and their correlations with age; convergent and discriminant validity were measured by correlations with other scores inside and outside of their neurocognitive domains; predictive validity was measured by correlations with brain volume in regions associated with the specific neurocognitive abilities. Results provide support for the ability of itemwise efficiency scoring to detect signals as strong as those detected by standard efficiency scoring methods. We find no evidence of superior validity of the itemwise scores over traditional scores, but point out several advantages of the former. The itemwise efficiency scoring method shows promise as an alternative to standard efficiency scoring methods, with overall moderate support from tests of four different types of validity. This method allows the use of existing item analysis methods and provides the convenient ability to adjust the overall emphasis of accuracy versus speed in the efficiency score, thus adjusting the scoring to the real-world demands the test is aiming to fulfill. PMID:26866796
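A minimal sketch of item-level efficiency scoring with an adjustable speed-accuracy emphasis follows; the accuracy and response-time matrices are simulated, and the weighting scheme is one plausible reading of the approach rather than the authors' exact formula.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 0/1 item accuracy and response times (s) for 50 examinees
# on 20 items; all values are simulated for illustration.
accuracy = rng.integers(0, 2, size=(50, 20)).astype(float)
rt = rng.lognormal(mean=0.5, sigma=0.3, size=(50, 20))

def itemwise_efficiency(accuracy, rt, speed_emphasis=0.5):
    """Item-level efficiency: combine standardized accuracy with (negated)
    standardized log response time, with an adjustable emphasis parameter
    (0 = accuracy only, 1 = speed only)."""
    def z(x):  # per-item z-scores across examinees
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)
    acc_z = z(accuracy)
    speed_z = -z(np.log(rt))           # faster responses -> higher speed score
    item_eff = (1 - speed_emphasis) * acc_z + speed_emphasis * speed_z
    return item_eff.mean(axis=1)       # one efficiency score per examinee

scores = itemwise_efficiency(accuracy, rt, speed_emphasis=0.5)
```

Because the combination happens at the item level, `speed_emphasis` can be tuned after the fact to match the real-world demands the test is meant to serve, which is the flexibility the abstract highlights.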
Middleton, Michael S; Haufe, William; Hooker, Jonathan; Borga, Magnus; Dahlqvist Leinhard, Olof; Romu, Thobias; Tunón, Patrik; Hamilton, Gavin; Wolfson, Tanya; Gamst, Anthony; Loomba, Rohit; Sirlin, Claude B
2017-05-01
Purpose To determine the repeatability and accuracy of a commercially available magnetic resonance (MR) imaging-based, semiautomated method to quantify abdominal adipose tissue and thigh muscle volume and hepatic proton density fat fraction (PDFF). Materials and Methods This prospective study was institutional review board-approved and HIPAA compliant. All subjects provided written informed consent. Inclusion criteria were age of 18 years or older and willingness to participate. The exclusion criterion was contraindication to MR imaging. Three-dimensional T1-weighted dual-echo body-coil images were acquired three times. Source images were reconstructed to generate water and calibrated fat images. Abdominal adipose tissue and thigh muscle were segmented, and their volumes were estimated by using a semiautomated method and, as a reference standard, a manual method. Hepatic PDFF was estimated by using a confounder-corrected chemical shift-encoded MR imaging method with hybrid complex-magnitude reconstruction and, as a reference standard, MR spectroscopy. Tissue volume and hepatic PDFF intra- and interexamination repeatability were assessed by using intraclass correlation and coefficient of variation analysis. Tissue volume and hepatic PDFF accuracy were assessed by means of linear regression with the respective reference standards. Results Adipose and thigh muscle tissue volumes of 20 subjects (18 women; age range, 25-76 years; body mass index range, 19.3-43.9 kg/m²) were estimated by using the semiautomated method. Intra- and interexamination intraclass correlation coefficients were 0.996-0.998 and coefficients of variation were 1.5%-3.6%. For hepatic MR imaging PDFF, intra- and interexamination intraclass correlation coefficients were greater than or equal to 0.994 and coefficients of variation were less than or equal to 7.3%.
In the regression analyses of manual versus semiautomated volume estimates and of spectroscopy versus MR imaging PDFF estimates, slopes and intercepts were close to the identity line, and coefficients of determination (R²) ranged from 0.744 to 0.994. Conclusion This MR imaging-based, semiautomated method provides high repeatability and accuracy for estimating abdominal adipose tissue and thigh muscle volumes and hepatic PDFF. © RSNA, 2017.
Portable Electronic Nose Based on Electrochemical Sensors for Food Quality Assessment
Dymerski, Tomasz; Gębicki, Jacek; Namieśnik, Jacek
2017-01-01
The steady increase in global consumption puts a strain on agriculture and might lead to a decrease in food quality. Currently used techniques of food analysis are often labour-intensive and time-consuming and require extensive sample preparation. For that reason, there is a demand for novel methods that could be used for rapid food quality assessment. A technique based on the use of an array of chemical sensors for holistic analysis of the sample’s headspace is called electronic olfaction. In this article, a prototype of a portable, modular electronic nose intended for food analysis is described. Using the SVM method, it was possible to classify samples of poultry meat based on shelf-life with 100% accuracy, and also samples of rapeseed oil based on the degree of thermal degradation with 100% accuracy. The prototype was also used to detect adulterations of extra virgin olive oil with rapeseed oil with 82% overall accuracy. Due to the modular design, the prototype offers the advantages of solutions targeted for analysis of specific food products, at the same time retaining the flexibility of application. Furthermore, its portability allows the device to be used at different stages of the production and distribution process. PMID:29186754
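The SVM classification step can be sketched with scikit-learn (assumed available); the sensor-array readings and class structure below are simulated stand-ins for the prototype's measurements, not real e-nose data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Hypothetical e-nose readings: 8 electrochemical sensor channels per sample,
# three shelf-life classes with 30 samples each; data are simulated so the
# classes are well separated, mimicking the reported 100% accuracy regime.
centers = rng.normal(0.0, 1.0, size=(3, 8))
X = np.vstack([c + 0.05 * rng.normal(size=(30, 8)) for c in centers])
y = np.repeat([0, 1, 2], 30)

clf = SVC(kernel="linear")    # SVM classifier on the raw sensor-array features
clf.fit(X, y)
train_acc = clf.score(X, y)   # fraction of correctly classified samples
```

In practice the headspace signal would first be reduced to steady-state or transient features per channel, and accuracy would be reported on held-out samples rather than the training set.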
David M. Bell; Matthew J. Gregory; Heather M. Roberts; Raymond J. Davis; Janet L. Ohmann
2015-01-01
Accuracy assessments of remote sensing products are necessary for identifying map strengths and weaknesses in scientific and management applications. However, not all accuracy assessments are created equal. Motivated by a recent study published in Forest Ecology and Management (Volume 342, pages 8–20), we explored the potential limitations of accuracy assessments...
Men, Hong; Fu, Songlin; Yang, Jialin; Cheng, Meiqi; Shi, Yan; Liu, Jingjing
2018-01-18
Paraffin odor intensity is an important quality indicator when a paraffin inspection is performed. Currently, paraffin odor level assessment is mainly dependent on an artificial sensory evaluation. In this paper, we developed a paraffin odor analysis system to classify and grade four kinds of paraffin samples. The original feature set was optimized using Principal Component Analysis (PCA) and Partial Least Squares (PLS). Support Vector Machine (SVM), Random Forest (RF), and Extreme Learning Machine (ELM) were applied to three different feature data sets for classification and level assessment of paraffin. For classification, the model based on SVM, with an accuracy rate of 100%, was superior to that based on RF, with an accuracy rate of 98.33-100%, and ELM, with an accuracy rate of 98.01-100%. For level assessment, the R² related to the training set was above 0.97 and the R² related to the test set was above 0.87. Through comprehensive comparison, the generalization of the model based on ELM was superior to those based on SVM and RF. The scoring errors for the three models were 0.0016-0.3494, lower than the error of 0.5-1.0 measured by industry standard experts, meaning these methods have a higher prediction accuracy for scoring paraffin level.
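The ELM used for level scoring admits a compact closed-form implementation: a fixed random hidden layer followed by a regularized least-squares solve for the output weights. The feature matrix and target below are synthetic stand-ins for the paraffin sensor data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression stand-in for paraffin odor scoring: 6 sensor features -> score.
X = rng.normal(size=(200, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # smooth synthetic target

def elm_fit(X, y, n_hidden=100, ridge=1e-3, rng=rng):
    """Extreme Learning Machine: random fixed hidden layer, output weights
    solved in closed form by ridge-regularized least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                   # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because only the output weights are trained, fitting is a single linear solve, which is why ELMs often generalize competitively at a fraction of the training cost of iteratively trained networks.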
Edge method for on-orbit defocus assessment.
Viallefont-Robinet, Françoise
2010-09-27
In the earth observation domain, two classes of sensors may be distinguished: a class for which sensor performance is driven by the radiometric accuracy of the images and a class for which sensor performance is driven by spatial resolution. In the latter case, as spatial resolution depends on the triplet constituted by the Ground Sampling Distance (GSD), Modulation Transfer Function (MTF), and Signal to Noise Ratio (SNR), refocusing, acting as an MTF improvement, is very important. Refocusing is not difficult in itself as long as the on-board mechanism is reliable; the difficulty lies on the defocus assessment side. Some methods, such as those used for the SPOT family, rely on the ability of the satellite to image the same landscape with two focusing positions. This can be done with a bi-sensor configuration, with an adequate focal plane, or with the satellite's agility. A new generation of refocusing mechanism will be carried aboard Pleiades. Because this mechanism is much slower than the older generation, it will not be possible, despite the agility of the satellite, to image the same landscape with two focusing positions on the same orbit. For that reason, methods relying on MTF measurement with the edge method have been studied. This paper describes these methods and the work done to assess the defocus measurement accuracy in the Pleiades context.
NASA Astrophysics Data System (ADS)
Shoji, J.; Sugimoto, R.; Honda, H.; Tominaga, O.; Taniguchi, M.
2014-12-01
In the past decade, machine-learning methods for empirical rainfall-runoff modeling have seen extensive development. However, the majority of research has focused on a small number of methods, such as artificial neural networks (ANNs), while not considering other approaches for non-parametric regression that have been developed in recent years. These methods may be able to achieve comparable predictive accuracy to ANNs and more easily provide physical insights into the system of interest through evaluation of covariate influence. Additionally, these methods could provide a straightforward, computationally efficient way of evaluating climate change impacts in basins where data to support physical hydrologic models are limited. In this paper, we use multiple regression and machine-learning approaches to predict monthly streamflow in five highly seasonal rivers in the highlands of Ethiopia. We find that generalized additive models, random forests, and cubist models achieve better predictive accuracy than ANNs in many of the basins assessed and are also able to outperform physical models developed for the same region. We discuss some challenges that could hinder the use of such models for climate impact assessment, such as biases resulting from model formulation and prediction under extreme climate conditions, and suggest methods for preventing and addressing these challenges. Finally, we demonstrate how predictor variable influence can be assessed to provide insights into the physical functioning of data-sparse watersheds.
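Hydrologic model comparisons of this kind are usually scored with the Nash-Sutcliffe efficiency (NSE); a small sketch with invented monthly flow series (not the paper's basins) shows the metric and a typical comparison.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, the standard skill score for rainfall-runoff
    models: 1 is a perfect fit, 0 means no better than predicting mean flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical monthly flows (m^3/s): observed vs. two competing models;
# all values are invented for illustration.
obs = np.array([12.0, 45.0, 160.0, 310.0, 95.0, 20.0])
ml_model = np.array([15.0, 40.0, 150.0, 300.0, 100.0, 25.0])
phys_model = np.array([30.0, 60.0, 120.0, 250.0, 140.0, 50.0])

nse_ml = nse(obs, ml_model)
nse_phys = nse(obs, phys_model)
```

Because NSE normalizes by observed variance, it rewards models that track the strong seasonal cycle, which is exactly where empirical models of highly seasonal rivers tend to shine.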
Fan, Chunlin; Deng, Jiewei; Yang, Yunyun; Liu, Junshan; Wang, Ying; Zhang, Xiaoqi; Fai, Kuokchiu; Zhang, Qingwen; Ye, Wencai
2013-10-01
An ultra-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UPLC-QTOF-MS) method integrating multi-ingredient determination and fingerprint analysis has been established for quality assessment and control of leaves from Ilex latifolia. The method offers the advantages of speed, efficiency and accuracy, and allows multi-ingredient determination and fingerprint analysis in one chromatographic run within 13 min. Multi-ingredient determination was performed based on the extracted ion chromatograms of the exact pseudo-molecular ions (with a 0.01 Da window), and fingerprint analysis was performed based on the base peak chromatograms, obtained by negative-ion electrospray ionization QTOF-MS. The method validation results demonstrated that the developed method possesses desirable specificity, linearity, precision and accuracy. The method was utilized to analyze 22 I. latifolia samples from different origins. The quality assessment was achieved by using both similarity analysis (SA) and principal component analysis (PCA), and the results from SA were consistent with those from PCA. Our experimental results demonstrate that the strategy of integrating multi-ingredient determination and fingerprint analysis using the UPLC-QTOF-MS technique is a useful approach for rapid pharmaceutical analysis, with promising prospects for the differentiation of origin, the determination of authenticity, and the overall quality assessment of herbal medicines. Copyright © 2013 Elsevier B.V. All rights reserved.
Accuracy of the Broselow Tape in South Sudan, "The Hungriest Place on Earth".
Clark, Melissa C; Lewis, Roger J; Fleischman, Ross J; Ogunniyi, Adedamola A; Patel, Dipesh S; Donaldson, Ross I
2016-01-01
The Broselow tape is a length-based tool used for the rapid estimation of pediatric weight and was developed to reduce dosage-related errors during emergencies. This study seeks to assess the accuracy of the Broselow tape and age-based formulas in predicting weights of South Sudanese children of varying nutritional status. This was a retrospective, cross-sectional study using data from existing acute malnutrition screening programs for children less than 5 years of age in South Sudan. Using anthropometric measurements, actual weights were compared with estimated weights from the Broselow tape and three age-based formulas. Mid-upper arm circumference was used to determine if each child was malnourished. Broselow accuracy was assessed by the percentage of measured weights falling into the same color zone as the predicted weight. For each method, accuracy was assessed by mean percentage error and percentage of predicted weights falling within 10% of actual weight. All data were analyzed by nutritional status subgroup. Only 10.7% of malnourished and 26.6% of nonmalnourished children had their actual weight fall within the Broselow color zone corresponding to their length. The Broselow method overestimated weight by a mean of 26.6% in malnourished children and 16.6% in nonmalnourished children (p < 0.001). Age-based formulas also overestimated weight, with mean errors ranging from 16.2% over actual weight (Advanced Pediatric Life Support in nonmalnourished children) to 70.9% over actual (Best Guess in severely malnourished children). The Broselow tape and age-based formulas selected for comparison were all markedly inaccurate in both the nonmalnourished and the malnourished populations studied, worsening with increasing malnourishment. Additional studies should explore appropriate methods of weight and dosage estimation for populations of low- and low-to-middle-income countries and regions with a high prevalence of malnutrition. 
© 2015 by the Society for Academic Emergency Medicine.
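The two accuracy metrics used in the study, mean percentage error and the share of estimates within 10% of actual weight, can be computed in a few lines; the weights below are invented for illustration, not South Sudanese cohort data.

```python
import numpy as np

def weight_estimation_metrics(actual, predicted):
    """Accuracy metrics commonly used for pediatric weight-estimation tools:
    mean percentage error (positive = overestimation bias) and the share of
    estimates falling within 10% of the actual weight."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    pct_err = 100.0 * (predicted - actual) / actual
    within_10 = 100.0 * np.mean(np.abs(pct_err) <= 10.0)
    return pct_err.mean(), within_10

# Hypothetical weights (kg): length-based estimates vs. measured weights
# for a small malnourished cohort (all values invented).
measured = np.array([8.0, 10.5, 12.0, 14.0, 9.0])
estimated = np.array([10.0, 12.5, 13.0, 17.5, 11.5])

mpe, pct_within_10 = weight_estimation_metrics(measured, estimated)
```

A large positive mean percentage error combined with a small within-10% share is the signature of systematic overestimation the study reports for malnourished children.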
ERIC Educational Resources Information Center
Moses, Tim; Miao, Jing; Dorans, Neil
2010-01-01
This study compared the accuracies of four differential item functioning (DIF) estimation methods, where each method makes use of only one of the following: raw data, logistic regression, loglinear models, or kernel smoothing. The major focus was on the estimation strategies' potential for estimating score-level, conditional DIF. A secondary focus…
The reliability of the pass/fail decision for assessments comprised of multiple components.
Möltner, Andreas; Tımbıl, Sevgi; Jünger, Jana
2015-01-01
The decision having the most serious consequences for a student taking an assessment is the one to pass or fail that student. For this reason, the reliability of the pass/fail decision must be determined for high quality assessments, just as the measurement reliability of the point values. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of each other. When "conjunctively" combining separate pass/fail decisions, as with other complex decision rules for passing, adequate methods of analysis are necessary for estimating the accuracy and consistency of these classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability underlying the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements. The accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg's Faculty of Medicine was investigated. This cluster requires students to separately pass three components (two written exams and an OSCE), whereby students may reattempt to pass each component twice. Our analysis was carried out using the method described by Douglas and Mislevy. Frequently, when complex logical connections exist between the individual pass/fail decisions in the case of low failure rates, only a very low reliability for the overall decision to grant graded course credit can be achieved, even if high reliabilities exist for the various components. 
For the example analyzed here, the classification accuracy and consistency when conjunctively combining the three individual parts are relatively low, with κ=0.49 and κ=0.47 respectively, despite the good reliability of over 0.75 for each of the three components. The option to repeat each component twice leads to a situation in which only about half of the candidates who do not satisfy the minimum requirements would fail the overall assessment, while the other half are able to continue their studies despite having deficient knowledge and skills. The method put forth by Douglas and Mislevy allows the analysis of decision accuracy and consistency for complex combinations of scores from different components. Even with highly reliable components, a reliable pass/fail decision is not guaranteed, for instance when failure rates are low. Assessments must be administered with the explicit goal of identifying examinees who do not fulfill the minimum requirements.
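The effect the abstract describes, retake options eroding the reliability of a conjunctive pass/fail decision, can be illustrated with a small Monte Carlo sketch. The ability scale, cut score, and reliability value below are invented, and this is a deliberate simplification of the Douglas-Mislevy framework, not a reimplementation of it.

```python
import numpy as np

rng = np.random.default_rng(4)

def conjunctive_pass_rate(true_ability, reliability, cut, n_components=3,
                          attempts=3, n_sim=20000, rng=rng):
    """Monte Carlo estimate of the probability that one examinee passes a
    conjunctive assessment (every component must be passed), where each
    component may be attempted up to `attempts` times and observed scores are
    true ability plus measurement error implied by the component reliability."""
    err_sd = np.sqrt(1.0 / reliability - 1.0)   # unit-variance true-score scale
    obs = true_ability + err_sd * rng.normal(size=(n_sim, n_components, attempts))
    passed_each = (obs >= cut).any(axis=2)      # best attempt counts per component
    return passed_each.all(axis=1).mean()       # all components must be passed

# An examinee half a standard deviation below the cut score: allowing two
# retakes per component sharply raises the chance of an undeserved pass.
p_single = conjunctive_pass_rate(-0.5, 0.75, cut=0.0, attempts=1)
p_retakes = conjunctive_pass_rate(-0.5, 0.75, cut=0.0, attempts=3)
```

Taking the maximum over attempts biases each component decision toward passing, so even with component reliabilities of 0.75 a substantial share of below-cut examinees clears the conjunctive hurdle, mirroring the "only about half fail" finding.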
Real-Time Tropospheric Product Establishment and Accuracy Assessment in China
NASA Astrophysics Data System (ADS)
Chen, M.; Guo, J.; Wu, J.; Song, W.; Zhang, D.
2018-04-01
Tropospheric delay has always been an important issue in Global Navigation Satellite System (GNSS) processing. Empirical tropospheric delay models struggle to represent complex and volatile atmospheric conditions, so their accuracy is often too poor to meet precise positioning demands. In recent years, some researchers have proposed establishing real-time tropospheric products from real-time or near-real-time GNSS observations over a small region, with good results. This paper uses real-time observations from 210 Chinese national GNSS reference stations to estimate tropospheric delay, and establishes a nationwide zenith wet delay (ZWD) grid model. To analyze the influence of the tropospheric grid product on wide-area real-time precise point positioning (PPP), this paper compares constraining PPP with the ZWD grid product against the empirical model correction method. The results show that the ZWD grid product estimated from the national reference stations improves PPP accuracy and convergence speed. The accuracy in the north (N), east (E) and up (U) directions increases by 31.8 %, 15.6 % and 38.3 %, respectively; as with convergence speed, the U direction shows the greatest improvement.
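On the user side, applying a regional ZWD grid product typically comes down to bilinear interpolation at the rover position; the grid spacing, node values, and coordinates below are invented for illustration.

```python
import numpy as np

# Hypothetical 2x2 ZWD grid cell (metres) around a user position.
lats = np.array([30.0, 31.0])
lons = np.array([114.0, 115.0])
zwd_grid = np.array([[0.180, 0.176],
                     [0.172, 0.169]])  # rows follow lats, cols follow lons

def bilinear_zwd(lat, lon, lats, lons, grid):
    """Bilinear interpolation of a zenith wet delay (ZWD) grid product at a
    user position, the usual way a PPP client applies a regional grid."""
    tx = (lon - lons[0]) / (lons[1] - lons[0])
    ty = (lat - lats[0]) / (lats[1] - lats[0])
    top = (1 - tx) * grid[0, 0] + tx * grid[0, 1]
    bot = (1 - tx) * grid[1, 0] + tx * grid[1, 1]
    return (1 - ty) * top + ty * bot

zwd = bilinear_zwd(30.25, 114.5, lats, lons, zwd_grid)
```

The interpolated ZWD then enters the PPP filter as a tightly weighted pseudo-observation (a constraint) rather than as a fixed correction, which is what allows faster convergence.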
Płachcińska, Anna; Mikołajczak, Renata; Kozak, Józef; Rzeszutek, Katarzyna; Kuśmierek, Jacek
2006-09-01
The aim of the study was to determine an optimal method for the evaluation of scintigrams obtained with (99m)Tc-EDDA/HYNIC-TOC for the purpose of differential diagnosis of solitary pulmonary nodules (SPNs) and to assess the diagnostic value of the method. Eighty-five patients (48 males and 37 females, mean age 57 years, range 34-78 years) were enrolled in the study. Patients underwent (99m)Tc-EDDA/HYNIC-TOC scintigraphy for the purpose of differential diagnosis of SPNs (size between 1 and 4 cm). Images of all patients were evaluated visually in a prospective manner. Positive scintigraphic results were found in 37 out of 40 (93%) patients with malignant SPNs including 34 out of 35 (97%) patients with primary lung carcinoma. Two remaining false negative cases turned out to be metastatic lesions of malignant melanoma and leiomyosarcoma. Among 45 benign tumours, negative results were obtained in 31 cases (69%) and positive results in 14. The accuracy of the method was 80%. Analysis of the results of the visual assessment of scintigrams revealed a significantly higher frequency of false positive results among larger nodules (diameter at least 1.4 cm). Uptake of the tracer in those nodules was therefore assessed semi-quantitatively (using the tumour-to-background ratio), in expectation of an improvement in the low specificity of the visual method. The semi-quantitative assessment reduced the total number of false positive results in a subgroup of larger nodules from 13 to six, while preserving the high sensitivity of the method. The combination of visual analysis (for lesions smaller than 1.4 cm in diameter) and semi-quantitative assessment (for larger lesions) provided a high sensitivity of the method and significantly improved its specificity (84%) and accuracy (88%) in comparison with visual analysis (p<0.05).
Machine learning methods for credibility assessment of interviewees based on posturographic data.
Saripalle, Sashi K; Vemulapalli, Spandana; King, Gregory W; Burgoon, Judee K; Derakhshani, Reza
2015-01-01
This paper discusses the advantages of using posturographic signals from force plates for non-invasive credibility assessment. The contributions of our work are twofold: first, the proposed method is highly efficient and non-invasive; second, the feasibility of creating an autonomous credibility assessment system using machine-learning algorithms is studied. This study employs an interview paradigm in which subjects respond with truthful and deceptive intent while their center of pressure (COP) signal is being recorded. Classification models utilizing sets of COP features for deceptive responses are derived, and a best accuracy of 93.5% on the test interval is reported.
A low-cost tracked C-arm (TC-arm) upgrade system for versatile quantitative intraoperative imaging.
Amiri, Shahram; Wilson, David R; Masri, Bassam A; Anglin, Carolyn
2014-07-01
C-arm fluoroscopy is frequently used in clinical applications as a low-cost and mobile real-time qualitative assessment tool. C-arms, however, are not widely accepted for applications involving quantitative assessments, mainly due to the lack of reliable and low-cost position tracking methods, as well as adequate calibration and registration techniques. The solution suggested in this work is a tracked C-arm (TC-arm) which employs a low-cost sensor tracking module that can be retrofitted to any conventional C-arm for tracking the individual joints of the device. Registration and offline calibration methods were developed that allow accurate tracking of the gantry and determination of the exact intrinsic and extrinsic parameters of the imaging system for any acquired fluoroscopic image. The performance of the system was evaluated in comparison to an Optotrak[Formula: see text] motion tracking system and by a series of experiments on accurately built ball-bearing phantoms. Accuracies of the system were determined for 2D-3D registration, three-dimensional landmark localization, and for generating panoramic stitched views in simulated intraoperative applications. The system was able to track the center point of the gantry with an accuracy of [Formula: see text] mm or better. Accuracies of 2D-3D registrations were [Formula: see text] mm and [Formula: see text]. Three-dimensional landmark localization had an accuracy of [Formula: see text] of the length (or [Formula: see text] mm) on average, depending on whether the landmarks were located along, above, or across the table. The overall accuracies of the two-dimensional measurements conducted on stitched panoramic images of the femur and lumbar spine were 2.5 [Formula: see text] 2.0 % [Formula: see text] and [Formula: see text], respectively. The TC-arm system has the potential to achieve sophisticated quantitative fluoroscopy assessment capabilities using an existing C-arm imaging system. 
This technology may be useful to improve the quality of orthopedic surgery and interventional radiology.
de-Azevedo-Vaz, Sergio Lins; Oenning, Anne Caroline Costa; Felizardo, Marcela Graciano; Haiter-Neto, Francisco; de Freitas, Deborah Queiroz
2015-04-01
The objective of this study is to assess the accuracy of the vertical tube shift method in identifying the relationship between the mandibular canal (MC) and third molars. Two examiners assessed image sets of 173 lower third molar roots (55 patients) using forced consensus. The image sets comprised two methods: PERI, two periapical radiographs (taken at 0° and -30°), and PAN, a panoramic radiograph (vertical angulation of -8°) and a periapical radiograph taken at a vertical angulation of -30°. Cone beam computed tomography (CBCT) was the reference standard in the study. The responses were recorded for position (buccal, in-line with apex and lingual) and contact (present or absent). The McNemar-Bowker and McNemar tests were used to determine if the PERI and PAN methods would disagree with the reference standard (α = 5 %). The PERI and PAN methods disagreed with the reference standard for both position and contact (p < 0.05). The vertical tube shift method was not accurate in determining the relationship between lower third molars and the MC. The vertical tube shift is not a reliable method for predicting the relationship between lower third molars and the MC.
Hand-eye calibration for rigid laparoscopes using an invariant point.
Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J
2016-06-01
Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
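The geometry behind a single-invariant-point calibration can be sketched as a stacked least-squares problem, familiar from pivot calibration: every tracked pose must map the same point in the marker frame to the same point in the tracker frame. The poses and point locations below are simulated, and this is a generic sketch of the invariant-point idea rather than the authors' exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_rotation(rng):
    """A random 3x3 rotation matrix via QR decomposition."""
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]   # flip one column to force det = +1
    return Q

# Ground truth for the simulation: the invariant point expressed in the
# tracked-marker frame and in the tracker (world) frame.
p_local_true = np.array([0.10, 0.02, 0.30])
p_world_true = np.array([1.00, 0.50, 0.20])

# Simulated tracker poses; each translation is chosen so that every pose
# observes the same fixed point:  R_i p_local + t_i = p_world.
rotations = [random_rotation(rng) for _ in range(12)]
translations = [p_world_true - R @ p_local_true for R in rotations]

# Stack [R_i | -I] [p_local; p_world] = -t_i over all poses and solve by
# linear least squares for both unknown points at once.
A = np.vstack([np.hstack([R, -np.eye(3)]) for R in rotations])
b = np.concatenate([-t for t in translations])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
p_local_est, p_world_est = x[:3], x[3:]
```

With noisy real tracking data the residual of this solve gives exactly the kind of live accuracy feedback the paper emphasizes: the surgical team can watch the reprojection error before trusting the calibration.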
NASA Astrophysics Data System (ADS)
Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming
2018-01-01
This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distributions of wind speed and solar irradiance, respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on production cost simulation, probability discretization and linearized power flow, an optimal power flow problem that minimizes the cost of conventional power generation is solved, so that the reliability of the distribution grid can be assessed quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the proposed method calculates the reliability indices much faster than the Monte Carlo method while preserving accuracy.
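The sampling side of such an assessment, drawing Weibull wind and Beta solar output and counting load-shortfall events, can be sketched directly. System sizes and distribution parameters below are invented; this is the plain Monte Carlo baseline the paper accelerates, not its production-cost method.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated system: available renewable + conventional generation vs. load.
# Capacities and distribution parameters are illustrative only.
n = 100_000
wind = np.minimum(2.0 * rng.weibull(2.0, n), 3.0)   # wind output (MW), capped at rating
solar = 1.5 * rng.beta(2.0, 2.0, n)                 # solar output (MW)
conventional = 8.0                                  # firm conventional capacity (MW)
load = rng.normal(10.0, 1.0, n)                     # local load (MW)

available = wind + solar + conventional
shortfall = np.maximum(load - available, 0.0)       # unserved power per sample

lolp = np.mean(shortfall > 0)     # Loss Of Load Probability
eens = 8760.0 * shortfall.mean()  # Expected Energy Not Supplied (MWh/year)
```

The accuracy of these indices improves only as the square root of the sample count, which is the slow convergence that motivates analytic alternatives such as probability discretization with linearized power flow.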
Tritium internal dose estimation from measurements with liquid scintillators.
Pántya, A; Dálnoki, Á; Imre, A R; Zagyvai, P; Pázmándi, T
2018-07-01
Tritium may exist in several chemical and physical forms in workplaces, common occurrences are in vapor or liquid form (as tritiated water) and in organic form (e.g. thymidine) which can get into the body by inhalation or by ingestion. For internal dose assessment it is usually assumed that urine samples for tritium analysis are obtained after the tritium concentration inside the body has reached equilibrium following intake. Comparison was carried out for two types of vials, two efficiency calculation methods and two available liquid scintillation devices to highlight the errors of the measurements. The results were used for dose estimation with MONDAL-3 software. It has been shown that concerning the accuracy of the final internal dose assessment, the uncertainties of the assumptions used in the dose assessment (for example the date and route of intake, the physical and chemical form) can be more influential than the errors of the measured data. Therefore, the improvement of the experimental accuracy alone is not the proper way to improve the accuracy of the internal dose estimation. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sun, K.; Cheng, D. B.; He, J. J.; Zhao, Y. L.
2018-02-01
Collapse gully erosion is a specific type of soil erosion in the red soil region of southern China, and early warning and prevention of its occurrence are very important. Based on the idea of risk assessment, this research, taking Guangdong province as an example, adopts information acquisition analysis and logistic regression analysis to examine the feasibility of collapse gully erosion risk assessment at the regional scale and to compare the applicability of the two risk assessment methods. The results show that in Guangdong province, the risk of collapse gully erosion is high in the northeastern and western areas and relatively low in the southwestern and central parts. The comparison of the two risk assessment methods also indicated that their risk distribution patterns were basically consistent; however, the accuracy of the risk map from the information acquisition analysis was slightly better than that from the logistic regression analysis.
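A logistic-regression risk model of the kind compared here reduces to a few lines: fit occurrence probability from conditioning factors, then use the fitted probability as the per-cell risk index. The conditioning factors and labels below are synthetic, and the information acquisition method is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic grid cells: two standardized conditioning factors (e.g. slope,
# rainfall erosivity) and a binary "collapse gully present" label; the data
# and true coefficients are invented for illustration.
X = rng.normal(size=(500, 2))
true_w, true_b = np.array([1.5, -1.0]), -0.5
y = (rng.random(500) < 1 / (1 + np.exp(-(X @ true_w + true_b)))).astype(float)

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-descent logistic regression; the fitted probabilities
    serve directly as the per-cell risk index."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)   # gradient of the log-loss
        b -= lr * np.mean(p - y)
    return w, b

w, b = fit_logistic(X, y)
risk = 1 / (1 + np.exp(-(X @ w + b)))   # per-cell occurrence probability
```

Binning `risk` into classes (e.g. low/medium/high) and overlaying the known gully inventory is then the usual way the resulting risk map is validated.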
Alternative method to validate the seasonal land cover regions of the conterminous United States
Zhiliang Zhu; Donald O. Ohlen; Raymond L. Czaplewski; Robert E. Burgan
1996-01-01
An accuracy assessment method involving double sampling and the multivariate composite estimator has been used to validate the prototype seasonal land cover characteristics database of the conterminous United States. The database consists of 159 land cover classes, classified using time series of 1990 1-km satellite data and augmented with ancillary data including...
In this study we have developed a novel method to estimate in vivo rates of metabolism in unanesthetized fish. This method provides a basis for evaluating the accuracy of in vitro-in vivo metabolism extrapolations. As such, this research will lead to improved risk assessments f...
ERIC Educational Resources Information Center
Zhu, Mingjing; Urhahne, Detlef
2014-01-01
The present study examines the accuracy of teachers' judgements about students' motivation and emotions in English learning with two different rating methods. A sample of 480 sixth-grade Chinese students reported their academic self-concept, learning effort, enjoyment, and test anxiety via a questionnaire and were rated on these dimensions by…
Reducing trial length in force platform posturographic sleep deprivation measurements
NASA Astrophysics Data System (ADS)
Forsman, P.; Hæggström, E.; Wallin, A.
2007-09-01
Sleepiness correlates with sleep-related accidents, but convenient tests for sleepiness monitoring are scarce. The posturographic test is a method to assess balance, and this paper describes one phase of the development of a posturographic sleepiness monitoring method. We investigated the relationship between trial length and accuracy of the posturographic time-awake (TA) estimate. Twenty-one healthy adults were kept awake for 32 h and their balance was recorded, 16 times with 30 s trials, as a function of TA. The balance was analysed with regards to fractal dimension, most common sway amplitude and time interval for open-loop stance control. While a 30 s trial allows estimating the TA of individual subjects with better than 5 h accuracy, repeating the analysis using shorter trial lengths showed that 18 s sufficed to achieve the targeted 5 h accuracy. Moreover, it was found that with increasing TA, the posturographic parameters estimated the subjects' TA more accurately.
Joshi, Vinayak; Agurto, Carla; VanNess, Richard; Nemeth, Sheila; Soliz, Peter; Barriga, Simon
2014-01-01
One of the most important signs of systemic disease presenting on the retina is vascular abnormality, as in hypertensive retinopathy. Manual analysis of fundus images by human readers is qualitative and lacks accuracy, consistency, and repeatability. Current semi-automatic methods for vascular evaluation are reported to increase accuracy and reduce reader variability, but they require extensive reader interaction, which limits software-aided efficiency. Automation thus holds a twofold promise: first, to decrease variability while increasing accuracy, and second, to increase efficiency. In this paper we propose fully automated software as a second-reader system for comprehensive assessment of the retinal vasculature, which aids readers in the quantitative characterization of vessel abnormalities in fundus images. The system provides the reader with objective measures of vascular morphology, such as tortuosity and branching angles, and highlights areas with abnormalities, such as artery-venous nicking, copper and silver wiring, and retinal emboli, so that the reader can make a final screening decision. To test the efficacy of the system, we evaluated the change in performance of a newly certified retinal reader grading a set of 40 color fundus images with and without the assistance of the software. The results demonstrated an improvement in the reader's performance with software assistance, in terms of accuracy of detection of vessel abnormalities, determination of retinopathy, and reading time. The system enables the reader to make a computer-assisted vasculature assessment with high accuracy and consistency, at a reduced reading time.
Measures of model performance based on the log accuracy ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven Karl; Brito, Thiago Vasconcelos; Welling, Daniel T.
Quantitative assessment of modeling and forecasting of continuous quantities uses a variety of approaches. We review existing literature describing metrics for forecast accuracy and bias, concentrating on those based on relative errors and percentage errors. Of these accuracy metrics, the mean absolute percentage error (MAPE) is one of the most common across many fields and has been widely applied in recent space science literature, and we highlight the benefits and drawbacks of MAPE and proposed alternatives. We then introduce the log accuracy ratio and derive from it two metrics: the median symmetric accuracy and the symmetric signed percentage bias. Robust methods for estimating the spread of a multiplicative linear model using the log accuracy ratio are also presented. The developed metrics are shown to be easy to interpret, robust, and to mitigate the key drawbacks of their more widely used counterparts based on relative errors and percentage errors. Their use is illustrated with radiation belt electron flux modeling examples.
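The two metrics introduced in this abstract have compact closed forms built on the log accuracy ratio Q = prediction/observation: MSA = 100(exp(median|ln Q|) - 1) and SSPB = 100 sgn(M)(exp|M| - 1) with M = median(ln Q). The following minimal Python sketch follows those definitions as usually stated; the function names are illustrative, not from the paper:

```python
import math
from statistics import median

def mape(observed, predicted):
    """Mean absolute percentage error (undefined if any observation is zero)."""
    return 100.0 * sum(abs((p - o) / o) for o, p in zip(observed, predicted)) / len(observed)

def median_symmetric_accuracy(observed, predicted):
    """MSA = 100 * (exp(median(|ln Q|)) - 1), with Q = predicted/observed."""
    log_q = [math.log(p / o) for o, p in zip(observed, predicted)]
    return 100.0 * (math.exp(median(abs(x) for x in log_q)) - 1.0)

def symmetric_signed_percentage_bias(observed, predicted):
    """SSPB = 100 * sgn(M) * (exp(|M|) - 1), with M = median(ln Q)."""
    m = median(math.log(p / o) for o, p in zip(observed, predicted))
    return 100.0 * math.copysign(math.exp(abs(m)) - 1.0, m)
```

Note the symmetry property the paper highlights: over-predicting by a factor of two and under-predicting by a factor of two give the same MSA (100%), whereas MAPE reports 100% and 50%, respectively.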
Four years of Landsat-7 on-orbit geometric calibration and performance
Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.
2004-01-01
Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation procedures) and calibration (procedures to derive improved estimates of instrument parameters) methods are employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance. These include geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level - by monitoring performance to determine when calibration is necessary, generating new calibration parameters, and verifying that new parameters achieve desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.
Bean, Melanie K; Raynor, Hollie A; Thornton, Laura M; Sova, Alexandra; Dunne Stewart, Mary; Mazzeo, Suzanne E
2018-04-12
Scientifically sound methods for investigating dietary consumption patterns from self-serve salad bars are needed to inform school policies and programs. To examine the reliability and validity of digital imagery for determining starting portions and plate waste of self-serve salad bar vegetables (which have variable starting portions) compared with manual weights. In a laboratory setting, 30 mock salads with 73 vegetables were made, and consumption was simulated. Each component (initial and removed portion) was weighed; photographs of weighed reference portions and pre- and post-consumption mock salads were taken. Seven trained independent raters visually assessed images to estimate starting portions to the nearest ¼ cup and percentage consumed in 20% increments. These values were converted to grams for comparison with weighed values. Intraclass correlations between weighed and digital imagery-assessed portions and plate waste were used to assess interrater reliability and validity. Pearson's correlations between weights and digital imagery assessments were also examined. Paired samples t tests were used to evaluate mean differences (in grams) between digital imagery-assessed portions and measured weights. Interrater reliabilities were excellent for starting portions and plate waste with digital imagery. For accuracy, intraclass correlations were moderate, with lower accuracy for determining starting portions of leafy greens compared with other vegetables. However, accuracy of digital imagery-assessed plate waste was excellent. Digital imagery assessments were not significantly different from measured weights for estimating overall vegetable starting portions or waste; however, digital imagery assessments slightly underestimated starting portions (by 3.5 g) and waste (by 2.1 g) of leafy greens. This investigation provides preliminary support for use of digital imagery in estimating starting portions and plate waste from school salad bars. 
Results might inform methods used in empirical investigations of dietary intake in schools with self-serve salad bars.
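The reliability and validity figures above rest on intraclass correlations. As a rough illustration (the article does not state which ICC form was used, so a one-way random-effects ICC(1,1) is assumed here), the coefficient can be computed from an n-subjects-by-k-raters table:

```python
from statistics import mean

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n-subjects x k-raters table."""
    n, k = len(ratings), len(ratings[0])
    grand = mean(x for row in ratings for x in row)
    row_means = [mean(row) for row in ratings]
    # Between-subject and within-subject mean squares from a one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(ratings, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement across raters yields an ICC of 1; disagreement inflates the within-subject mean square and pulls the coefficient down.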
Building damage assessment from PolSAR data using texture parameters of statistical model
NASA Astrophysics Data System (ADS)
Li, Linlin; Liu, Xiuguo; Chen, Qihao; Yang, Shuai
2018-04-01
Accurate building damage assessment is essential in providing decision support for disaster relief and reconstruction. Polarimetric synthetic aperture radar (PolSAR) has become one of the most effective means of building damage assessment, due to its all-day/all-weather ability and richer backscatter information of targets. However, intact buildings that are not parallel to the SAR flight pass (termed oriented buildings) and collapsed buildings share similar scattering mechanisms, both of which are dominated by volume scattering. This characteristic always leads to misjudgments between assessments of collapsed buildings and oriented buildings from PolSAR data. Because the collapsed buildings and the intact buildings (whether oriented or parallel buildings) have different textures, a novel building damage assessment method is proposed in this study to address this problem by introducing texture parameters of statistical models. First, the logarithms of the estimated texture parameters of different statistical models are taken as a new texture feature to describe the collapse of the buildings. Second, the collapsed buildings and intact buildings are distinguished using an appropriate threshold. Then, the building blocks are classified into three levels based on the building block collapse rate. Moreover, this paper also discusses the capability for performing damage assessment using texture parameters from different statistical models or using different estimators. The RADARSAT-2 and ALOS-1 PolSAR images are used to present and analyze the performance of the proposed method. The results show that using the texture parameters avoids the problem of confusing collapsed and oriented buildings and improves the assessment accuracy. The results assessed by using the K/G0 distribution texture parameters estimated based on the second moment obtain the highest extraction accuracies. 
For the RADARSAT-2 and ALOS-1 data, the overall accuracy (OA) for these three types of buildings is 73.39% and 68.45%, respectively.
Analysis of Sampling Methodologies for Noise Pollution Assessment and the Impact on the Population.
Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel
2016-05-11
Today, noise pollution is an increasing environmental stressor. Noise maps are recognised as the main tool for assessing and managing environmental noise, but their accuracy largely depends on the sampling method used. The sampling methods most commonly used by different researchers (grid, legislative road types and categorisation methods) were analysed and compared using the city of Talca (Chile) as a test case. The results show that the stratification of sound values in road categories has a significantly lower prediction error and a higher capacity for discrimination and prediction than in the legislative road types used by the Ministry of Transport and Telecommunications in Chile. Also, the use of one or another method implies significant differences in the assessment of population exposure to noise pollution. Thus, the selection of a suitable method for performing noise maps through measurements is essential to achieve an accurate assessment of the impact of noise pollution on the population.
Boushey, Carol J; Spoden, Melissa; Delp, Edward J; Zhu, Fengqing; Bosch, Marc; Ahmad, Ziad; Shvetsov, Yurii B; DeLany, James P; Kerr, Deborah A
2017-03-22
The mobile Food Record (mFR) is an image-based dietary assessment method for mobile devices. The study's primary aim was to test the accuracy of the mFR by comparing reported energy intake (rEI) to total energy expenditure (TEE) measured by the doubly labeled water (DLW) method. Usability of the mFR was assessed by questionnaires before and after the study. Participants were 45 community-dwelling men and women, 21-65 years of age. They were provided pack-out meals and snacks and encouraged to supplement with usual foods and beverages not provided. After being dosed with DLW, participants were instructed to record all eating occasions over a 7.5-day period using the mFR. Three trained analysts estimated rEI from the images sent to a secure server. rEI and TEE correlated significantly (Spearman correlation coefficient of 0.58, p < 0.0001). The mean percentage of underreporting below the lower 95% confidence interval of the ratio of rEI to TEE was 12% for men (standard deviation (SD) ± 11%) and 10% for women (SD ± 10%). The results demonstrate that the accuracy of the mFR is comparable to traditional dietary records and other image-based methods. No systematic biases were found. The mFR was received well by the participants and its usability was rated as easy.
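The two headline numbers in this abstract, the Spearman correlation between rEI and TEE and the rEI/TEE ratio used to flag underreporting, are straightforward to compute. A sketch with hypothetical helper names (the simple ranking below assumes no tied values):

```python
from statistics import mean

def _ranks(xs):
    # Simple 1-based ranks; assumes no tied values, for brevity
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        ranks[i] = float(rank)
    return ranks

def spearman(x, y):
    """Spearman rank correlation: the Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def reporting_ratio(rei, tee):
    """rEI/TEE per participant; values well below 1 suggest underreporting."""
    return [r / t for r, t in zip(rei, tee)]
```

In practice a library routine with tie handling (e.g., an average-rank method) would replace `_ranks`.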
Booth, Robert K.; Hotchkiss, Sara C.; Wilcox, Douglas A.
2005-01-01
Summary: 1. Discoloration of polyvinyl chloride (PVC) tape has been used in peatland ecological and hydrological studies as an inexpensive way to monitor changes in water-table depth and reducing conditions. 2. We investigated the relationship between depth of PVC tape discoloration and measured water-table depth at monthly time steps during the growing season within nine kettle peatlands of northern Wisconsin. Our specific objectives were to: (1) determine if PVC discoloration is an accurate method of inferring water-table depth in Sphagnum-dominated kettle peatlands of the region; (2) assess seasonal variability in the accuracy of the method; and (3) determine if systematic differences in accuracy occurred among microhabitats, PVC tape colour and peatlands. 3. Our results indicated that PVC tape discoloration can be used to describe gradients of water-table depth in kettle peatlands. However, accuracy differed among the peatlands studied, and was systematically biased in early spring and late summer/autumn. Regardless of the month when the tape was installed, the highest elevations of PVC tape discoloration showed the strongest correlation with midsummer (around July) water-table depth and average water-table depth during the growing season. 4. The PVC tape discoloration method should be used cautiously when precise estimates are needed of seasonal changes in the water-table.
Lucano, Elena; Liberti, Micaela; Mendoza, Gonzalo G.; Lloyd, Tom; Iacono, Maria Ida; Apollonio, Francesca; Wedan, Steve; Kainz, Wolfgang; Angelone, Leonardo M.
2016-01-01
Goal This study aims at a systematic assessment of five computational models of a birdcage coil for magnetic resonance imaging (MRI) with respect to accuracy and computational cost. Methods The models were implemented using the same geometrical model and numerical algorithm, but different driving methods (i.e., coil “defeaturing”). The defeatured models were labeled as: specific (S2), generic (G32, G16), and hybrid (H16, H16fr-forced). The accuracy of the models was evaluated using the “Symmetric Mean Absolute Percentage Error” (“SMAPE”), by comparison with measurements in terms of frequency response, as well as electric (||E⃗||) and magnetic (||B⃗||) field magnitude. Results All the models computed the ||B⃗|| within 35 % of the measurements, only the S2, G32, and H16 were able to accurately model the ||E⃗|| inside the phantom with a maximum SMAPE of 16 %. Outside the phantom, only the S2 showed a SMAPE lower than 11 %. Conclusions Results showed that assessing the accuracy of ||B⃗|| based only on comparison along the central longitudinal line of the coil can be misleading. Generic or hybrid coils – when properly modeling the currents along the rings/rungs – were sufficient to accurately reproduce the fields inside a phantom while a specific model was needed to accurately model ||E⃗|| in the space between coil and phantom. Significance Computational modeling of birdcage body coils is extensively used in the evaluation of RF-induced heating during MRI. Experimental validation of numerical models is needed to determine if a model is an accurate representation of a physical coil. PMID:26685220
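The SMAPE figure of merit used in the coil-model comparison above has several variants in the literature; the sketch below uses one common 0-200% form and may differ in detail from the paper's exact definition:

```python
def smape(measured, simulated):
    """Symmetric mean absolute percentage error, 0-200% form."""
    n = len(measured)
    return (100.0 / n) * sum(
        abs(s - m) / ((abs(m) + abs(s)) / 2.0)  # normalize by the mean magnitude
        for m, s in zip(measured, simulated)
    )
```

Unlike plain MAPE, this form treats measured and simulated values symmetrically, which suits a measurement-vs-simulation comparison where neither series is a privileged reference.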
a Comparison of Empirical and Inteligent Methods for Dust Detection Using Modis Satellite Data
NASA Astrophysics Data System (ADS)
Shahrisvand, M.; Akhoondzadeh, M.
2013-09-01
Nowadays, dust storms are among the most important natural hazards and are considered a national concern in scientific communities. This paper considers the capabilities of some classical and intelligent methods for dust detection from satellite imagery around the Middle East region. In dust detection studies, MODIS images have been good candidates due to their suitable spectral and temporal resolution. In this study, physical-based and intelligent methods, including decision trees, ANN (Artificial Neural Network), and SVM (Support Vector Machine), have been applied to detect dust storms. Among the mentioned approaches, the SVM method is implemented here for the first time in the domain of dust detection studies. Finally, AOD (Aerosol Optical Depth) images, which are one of the reference standard products of the OMI (Ozone Monitoring Instrument) sensor, have been used to assess the accuracy of all the implemented methods. Since the SVM method can distinguish dust storms over land and ocean simultaneously, it achieves better accuracy than the other applied approaches. In conclusion, this paper shows that SVM can be a powerful tool for the production of dust images with remarkable accuracy in comparison with the AOT (Aerosol Optical Thickness) product of NASA.
Spittle, Alicia J.; Lee, Katherine J.; Spencer-Smith, Megan; Lorefice, Lucy E.; Anderson, Peter J.; Doyle, Lex W.
2015-01-01
Aim The primary aim of this study was to investigate the accuracy of the Alberta Infant Motor Scale (AIMS) and Neuro-Sensory Motor Developmental Assessment (NSMDA) over the first year of life for predicting motor impairment at 4 years in preterm children. The secondary aims were to assess the predictive value of serial assessments over the first year and when using a combination of these two assessment tools in follow-up. Method Children born <30 weeks’ gestation were prospectively recruited and assessed at 4, 8 and 12 months’ corrected age using the AIMS and NSMDA. At 4 years’ corrected age children were assessed for cerebral palsy (CP) and motor impairment using the Movement Assessment Battery for Children 2nd-edition (MABC-2). We calculated accuracy of the AIMS and NSMDA for predicting CP and MABC-2 scores ≤15th (at-risk of motor difficulty) and ≤5th centile (significant motor difficulty) for each test (AIMS and NSMDA) at 4, 8 and 12 months, for delay on one, two or all three of the time points over the first year, and finally for delay on both tests at each time point. Results Accuracy for predicting motor impairment was good for each test at each age, although false positives were common. Motor impairment on the MABC-2 (scores ≤5th and ≤15th) was most accurately predicted by the AIMS at 4 months, whereas CP was most accurately predicted by the NSMDA at 12 months. In regards to serial assessments, the likelihood ratio for motor impairment increased with the number of delayed assessments. When combining both the NSMDA and AIMS the best accuracy was achieved at 4 months, although results were similar at 8 and 12 months. Interpretation Motor development during the first year of life in preterm infants assessed with the AIMS and NSMDA is predictive of later motor impairment at preschool age. 
However, false positives are common and therefore it is beneficial to follow-up children at high risk of motor impairment at more than one time point, or to use a combination of assessment tools. Trial Registration ACTR.org.au ACTRN12606000252516 PMID:25970619
Reproducibility of abdominal fat assessment by ultrasound and computed tomography
Mauad, Fernando Marum; Chagas-Neto, Francisco Abaeté; Benedeti, Augusto César Garcia Saab; Nogueira-Barbosa, Marcello Henrique; Muglia, Valdair Francisco; Carneiro, Antonio Adilton Oliveira; Muller, Enrico Mattana; Elias Junior, Jorge
2017-01-01
Objective: To test the accuracy and reproducibility of ultrasound and computed tomography (CT) for the quantification of abdominal fat in correlation with anthropometric, clinical, and biochemical assessments. Materials and Methods: Using ultrasound and CT, we determined the thickness of subcutaneous and intra-abdominal fat in 101 subjects, of whom 39 (38.6%) were men and 62 (61.4%) were women, with a mean age of 66.3 years (range, 60-80 years). The ultrasound data were correlated with the anthropometric, clinical, and biochemical parameters, as well as with the areas measured by abdominal CT. Results: Intra-abdominal thickness was the variable for which the correlation with the areas of abdominal fat was strongest (i.e., the correlation coefficient was highest). We also tested the reproducibility of ultrasound and CT for the assessment of abdominal fat and found that CT measurements of abdominal fat showed greater reproducibility, having higher intraobserver and interobserver reliability than the ultrasound measurements. There was a significant correlation between ultrasound and CT, with a correlation coefficient of 0.71. Conclusion: In the assessment of abdominal fat, intraobserver and interobserver reliability were greater for CT than for ultrasound, although both methods showed high accuracy and good reproducibility. PMID:28670024
Gujral, Rajinder Singh; Haque, Sk Manirul
2010-01-01
A simple and sensitive UV spectrophotometric method was developed and validated for the simultaneous determination of Potassium Clavulanate (PC) and Amoxicillin Trihydrate (AT) in bulk, in pharmaceutical formulations, and in human urine samples. The method was linear in the range of 0.2-8.5 μg/ml for PC and 6.4-33.6 μg/ml for AT. The absorbance was measured at 205 and 271 nm for PC and AT, respectively. The method was validated with respect to accuracy, precision, specificity, ruggedness, robustness, limit of detection, and limit of quantitation. It was used successfully for the quality assessment of four PC and AT drug products and of human urine samples, with good precision and accuracy. It was found to be a simple, specific, precise, accurate, reproducible, and low-cost UV spectrophotometric method. PMID:23675211
Trans-dimensional MCMC methods for fully automatic motion analysis in tagged MRI.
Smal, Ihor; Carranza-Herrezuelo, Noemí; Klein, Stefan; Niessen, Wiro; Meijering, Erik
2011-01-01
Tagged magnetic resonance imaging (tMRI) is a well-known noninvasive method allowing quantitative analysis of regional heart dynamics. Its clinical use has so far been limited, in part due to the lack of robustness and accuracy of existing tag tracking algorithms in dealing with low (and intrinsically time-varying) image quality. In this paper, we propose a novel probabilistic method for tag tracking, implemented by means of Bayesian particle filtering and a trans-dimensional Markov chain Monte Carlo (MCMC) approach, which efficiently combines information about the imaging process and tag appearance with prior knowledge about the heart dynamics obtained by means of non-rigid image registration. Experiments using synthetic image data (with ground truth) and real data (with expert manual annotation) from preclinical (small animal) and clinical (human) studies confirm that the proposed method yields higher consistency, accuracy, and intrinsic tag reliability assessment in comparison with other frequently used tag tracking methods.
Skin Testing for Allergic Rhinitis: A Health Technology Assessment
Kabali, Conrad; Chan, Brian; Higgins, Caroline; Holubowich, Corinne
2016-01-01
Background Allergic rhinitis is the most common type of allergy worldwide. The accuracy of skin testing for allergic rhinitis is still debated. This health technology assessment had two objectives: to determine the diagnostic accuracy of skin-prick and intradermal testing in patients with suspected allergic rhinitis and to estimate the costs to the Ontario health system of skin testing for allergic rhinitis. Methods We searched All Ovid MEDLINE, Embase, and Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, CRD Health Technology Assessment Database, Cochrane Central Register of Controlled Trials, and NHS Economic Evaluation Database for studies that evaluated the diagnostic accuracy of skin-prick and intradermal testing for allergic rhinitis using nasal provocation as the reference standard. For the clinical evidence review, data extraction and quality assessment were performed using the QUADAS-2 tool. We used the bivariate random-effects model for meta-analysis. For the economic evidence review, we assessed studies using a modified checklist developed by the (United Kingdom) National Institute for Health and Care Excellence. We estimated the annual cost of skin testing for allergic rhinitis in Ontario for 2015 to 2017 using provincial data on testing volumes and costs. Results We meta-analyzed seven studies with a total of 430 patients that assessed the accuracy of skin-prick testing. The pooled pair of sensitivity and specificity for skin-prick testing was 85% and 77%, respectively. We did not perform a meta-analysis for the diagnostic accuracy of intradermal testing due to the small number of studies (n = 4). Of these, two evaluated the accuracy of intradermal testing in confirming negative skin-prick testing results, with sensitivity ranging from 27% to 50% and specificity ranging from 60% to 100%. 
The other two studies evaluated the accuracy of intradermal testing as a stand-alone tool for diagnosing allergic rhinitis, with sensitivity ranging from 60% to 79% and specificity ranging from 68% to 69%. We estimated the budget impact of continuing to publicly fund skin testing for allergic rhinitis in Ontario to be between $2.5 million and $3.0 million per year. Conclusions Skin-prick testing is moderately accurate in identifying subjects with or without allergic rhinitis. The diagnostic accuracy of intradermal testing could not be well established from this review. Our best estimate is that publicly funding skin testing for allergic rhinitis costs the Ontario government approximately $2.5 million to $3.0 million per year. PMID:27279928
Combining accuracy assessment of land-cover maps with environmental monitoring programs
Stehman, S.V.; Czaplewski, R.L.; Nusser, S.M.; Yang, L.; Zhu, Z.
2000-01-01
A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring protocols are described. These strategies range from a fully integrated accuracy assessment and environmental monitoring protocol, to one in which the protocols operate nearly independently. For all three strategies, features critical to using monitoring data for accuracy assessment include compatibility of the land-cover classification schemes, precisely co-registered sample data, and spatial and temporal compatibility of the map and reference data. Two monitoring programs, the National Resources Inventory (NRI) and the Forest Inventory and Monitoring (FIM), are used to illustrate important features for implementing a combined protocol.
Noise pollution mapping approach and accuracy on landscape scales.
Iglesias Merchan, Carlos; Diaz-Balteiro, Luis
2013-04-01
Noise mapping allows the characterization of environmental variables such as noise pollution or soundscape, depending on the task. Strategic noise mapping (as per Directive 2002/49/EC, 2002) is a tool intended for the assessment of noise pollution at the European level every five years. These maps are based on common methods and procedures intended for human exposure assessment in the European Union that could also be adapted for assessing environmental noise pollution in natural parks. However, given the size of such areas, there could be an alternative approach to soundscape characterization rather than using human noise exposure procedures. It is possible to optimize the size of the mapping grid used for such work by taking into account the attributes of the area to be studied and the desired outcome, thereby optimizing the mapping time and cost. This type of optimization is important in noise assessment as well as in the study of other environmental variables. This study compares 15 models, using different grid sizes, to assess the accuracy of noise mapping of road traffic noise at a landscape scale with respect to noise and landscape indicators. In a study area located in the Manzanares High River Basin Regional Park in Spain, different accuracy levels (Kappa index values from 0.725 to 0.987) were obtained depending on the terrain and noise source properties. The time taken for the calculations and the noise mapping accuracy results reveal the potential for setting the map resolution in line with decision-makers' criteria and budget considerations.
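The Kappa index values quoted above (0.725 to 0.987) measure agreement between a candidate noise map and a reference beyond what chance would produce. Assuming the standard Cohen's kappa (the abstract does not specify the exact variant), it can be computed from a confusion matrix:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: map, cols: reference)."""
    total = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of counts on the diagonal
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / total
    # Chance agreement: product of marginal proportions, summed over classes
    p_e = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    return (p_o - p_e) / (1.0 - p_e)
```

A kappa of 1 indicates perfect agreement; values in the 0.7-1.0 range, as reported here, are conventionally read as substantial to almost perfect agreement.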
Diagnostic accuracy of different caries risk assessment methods. A systematic review.
Senneby, Anna; Mejàre, Ingegerd; Sahlin, Nils-Eric; Svensäter, Gunnel; Rohlin, Madeleine
2015-12-01
To evaluate the accuracy of different methods used to identify individuals with increased risk of developing dental coronal caries. Studies on following methods were included: previous caries experience, tests using microbiota, buffering capacity, salivary flow rate, oral hygiene, dietary habits and sociodemographic variables. QUADAS-2 was used to assess risk of bias. Sensitivity, specificity, predictive values, and likelihood ratios (LR) were calculated. Quality of evidence based on ≥3 studies of a method was rated according to GRADE. PubMed, Cochrane Library, Web of Science and reference lists of included publications were searched up to January 2015. From 5776 identified articles, 18 were included. Assessment of study quality identified methodological limitations concerning study design, test technology and reporting. No study presented low risk of bias in all domains. Three or more studies were found only for previous caries experience and salivary mutans streptococci and quality of evidence for these methods was low. Evidence regarding other methods was lacking. For previous caries experience, sensitivity ranged between 0.21 and 0.94 and specificity between 0.20 and 1. Tests using salivary mutans streptococci resulted in low sensitivity and high specificity. For children with primary teeth at baseline, pooled LR for a positive test was 3 for previous caries experience and 4 for salivary mutans streptococci, given a threshold ≥10(5) CFU/ml. Evidence on the validity of analysed methods used for caries risk assessment is limited. As methodological quality was low, there is a need to improve study design. Low validity for the analysed methods may lead to patients with increased risk not being identified, whereas some are falsely identified as being at risk. As caries risk assessment guides individualized decisions on interventions and intervals for patient recall, improved performance based on best evidence is greatly needed. Copyright © 2015 Elsevier Ltd. 
All rights reserved.
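The accuracy measures named in the abstract above (sensitivity, specificity, predictive values, likelihood ratios) all derive from a 2x2 table of test result against disease status. A minimal sketch in Python, using hypothetical counts rather than data from any of the reviewed studies:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 table."""
    sens = tp / (tp + fn)            # proportion of diseased correctly flagged
    spec = tn / (tn + fp)            # proportion of healthy correctly cleared
    ppv = tp / (tp + fp)             # positive predictive value
    npv = tn / (tn + fn)             # negative predictive value
    lr_pos = sens / (1 - spec)       # likelihood ratio of a positive test
    lr_neg = (1 - sens) / spec       # likelihood ratio of a negative test
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "LR+": lr_pos, "LR-": lr_neg}

# Hypothetical counts: 40 true positives, 20 false positives,
# 10 false negatives, 130 true negatives.
m = diagnostic_metrics(tp=40, fp=20, fn=10, tn=130)
```

A pooled LR+ of about 3-4, as reported above, means a positive test raises the odds of caries roughly three- to four-fold.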
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arumugam, Sankar; Xing, Aitang; Jameson, Michael G.
2013-03-15
Purpose: Image guided radiotherapy (IGRT) using cone beam computed tomography (CBCT) images greatly reduces interfractional patient positional uncertainties. An understanding of uncertainties in the IGRT process itself is essential to ensure appropriate use of this technology. The purpose of this study was to develop a phantom capable of assessing the accuracy of IGRT hardware and software including a 6 degrees of freedom patient positioning system and to investigate the accuracy of the Elekta XVI system in combination with the HexaPOD robotic treatment couch top. Methods: The constructed phantom enabled verification of the three automatic rigid body registrations (gray value, bone, seed) available in the Elekta XVI software and includes an adjustable mount that introduces known rotational offsets to the phantom from its reference position. Repeated positioning of the phantom was undertaken to assess phantom rotational accuracy. Using this phantom the accuracy of the XVI registration algorithms was assessed considering CBCT hardware factors and image resolution together with the residual error in the overall image guidance process when positional corrections were performed through the HexaPOD couch system. Results: The phantom positioning was found to be within 0.04° (σ = 0.12°), 0.02° (σ = 0.13°), and -0.03° (σ = 0.06°) in the X, Y, and Z directions, respectively, enabling assessment of IGRT with a 6 degrees of freedom patient positioning system. The gray value registration algorithm showed the least error in calculated offsets, with a maximum mean difference of -0.2 mm (σ = 0.4 mm) in translational and -0.1° (σ = 0.1°) in rotational directions for all image resolutions. Bone and seed registration were found to be sensitive to CBCT image resolution.
Seed registration was found to be most sensitive, demonstrating a maximum mean error of -0.3 mm (σ = 0.9 mm) and -1.4° (σ = 1.7°) in translational and rotational directions over low resolution images; this reduced to -0.1 mm (σ = 0.2 mm) and -0.1° (σ = 0.79°) using high resolution images. Conclusions: The phantom, capable of rotating independently about three orthogonal axes, was successfully used to assess the accuracy of an IGRT system considering 6 degrees of freedom. The overall residual error in the image guidance process of XVI in combination with the HexaPOD couch was demonstrated to be less than 0.3 mm and 0.3° in translational and rotational directions when using the gray value registration with high resolution CBCT images. However, the residual error, especially in rotational directions, may increase when the seed registration is used with low resolution images.
Bauer, Lyndsey; O'Bryant, Sid E; Lynch, Julie K; McCaffrey, Robert J; Fisher, Jerid M
2007-09-01
Assessing effort level during neuropsychological evaluations is critical to support the accuracy of cognitive test scores. Many instruments are designed to measure effort, yet they are not routinely administered in neuropsychological assessments. The Test of Memory Malingering (TOMM) and the Word Memory Test (WMT) are commonly administered symptom validity tests with sound psychometric properties. This study examines the use of the TOMM Trial 1 and the WMT Immediate Recognition (IR) trial scores as brief screening tools for insufficient effort through an archival analysis of a combined sample of mild head-injury litigants (N = 105) who were assessed in forensic private practices. Results show that both demonstrate impressive diagnostic accuracy; calculations of positive and negative predictive power are presented for a range of base rates. These results support the utility of Trial 1 of the TOMM and the WMT IR trial as screening methods for the assessment of insufficient effort in neuropsychological assessments.
Building Energy Simulation Test for Existing Homes (BESTEST-EX) (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, R.; Neymark, J.; Polly, B.
2011-12-01
This presentation discusses the goals of NREL Analysis Accuracy R&D; BESTEST-EX goals; what BESTEST-EX is; how it works; 'Building Physics' cases; 'Building Physics' reference results; 'utility bill calibration' cases; limitations and potential future work. Goals of NREL Analysis Accuracy R&D are: (1) Provide industry with the tools and technical information needed to improve the accuracy and consistency of analysis methods; (2) Reduce the risks associated with purchasing, financing, and selling energy efficiency upgrades; and (3) Enhance software and input collection methods considering impacts on accuracy, cost, and time of energy assessments. BESTEST-EX goals are: (1) Test software predictions of retrofit energy savings in existing homes; (2) Ensure building physics calculations and utility bill calibration procedures perform up to a minimum standard; and (3) Quantify the impact of uncertainties in input audit data and occupant behavior. BESTEST-EX is a repeatable procedure that tests how well audit software predictions compare to the current state of the art in building energy simulation. There is no direct truth standard. However, the reference software has been subjected to validation testing, including comparisons with empirical data.
Classification of urban features using airborne hyperspectral data
NASA Astrophysics Data System (ADS)
Ganesh Babu, Bharath
Accurate mapping and modeling of urban environments are critical for their efficient and successful management. Superior understanding of complex urban environments is made possible by using modern geospatial technologies. This research focuses on thematic classification of urban land use and land cover (LULC) using 248 bands of 2.0 meter resolution hyperspectral data acquired from an airborne imaging spectrometer (AISA+) on 24th July 2006 in and near Terre Haute, Indiana. Three distinct study areas including two commercial classes, two residential classes, and two urban parks/recreational classes were selected for classification and analysis. Four commonly used classification methods -- maximum likelihood (ML), extraction and classification of homogeneous objects (ECHO), spectral angle mapper (SAM), and iterative self-organizing data analysis (ISODATA) -- were applied to each data set. Accuracy assessment was conducted and overall accuracies were compared between the twenty-four resulting thematic maps. With the exception of SAM and ISODATA in a complex commercial area, all methods employed classified the designated urban features with more than 80% accuracy. The thematic classification from ECHO showed the best agreement with ground reference samples. The residential area with relatively homogeneous composition was classified consistently with the highest accuracy by all four of the classification methods used. The average accuracy amongst the classifiers was 93.60% for this area. When individually observed, the complex recreational area (Deming Park) was classified with the highest accuracy by ECHO, with an accuracy of 96.80% and a Kappa of 96.10%. The average accuracy amongst all the classifiers was 92.07%. The commercial area with relatively high complexity was classified with the least accuracy by all classifiers. The lowest accuracy was achieved by SAM at 63.90% with a Kappa of 59.20%. This was also the lowest accuracy in the entire analysis.
This study demonstrates the potential for using the visible and near infrared (VNIR) bands from AISA+ hyperspectral data in urban LULC classification. Based on their performance, the need for further research using ECHO and SAM is underscored. The importance of incorporating imaging spectrometer data in high resolution urban feature mapping is emphasized.
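The overall accuracies and Kappa values reported above come from standard confusion-matrix arithmetic. A minimal sketch with a hypothetical two-class matrix (the study's maps used more classes):

```python
def overall_accuracy(cm):
    """Fraction of reference samples on the confusion-matrix diagonal."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def cohens_kappa(cm):
    """Agreement beyond chance, from the same square confusion matrix."""
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n
    # chance agreement from the row and column marginals
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)
             for i in range(len(cm))) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical 2-class matrix (rows: ground reference, columns: classified)
cm = [[45, 5],
      [10, 40]]
```

Kappa discounts the agreement expected by chance alone, which is why it is routinely reported alongside overall accuracy in thematic map assessment.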
Ball, Sarah C; Benjamin, Sara E; Ward, Dianne S
2007-04-01
To our knowledge, a direct observation protocol for assessing dietary intake among young children in child care has not been published. This article reviews the development and testing of a diet observation system for child care facilities that occurred during a larger intervention trial. Development of this system was divided into five phases, done in conjunction with the larger intervention study: (a) protocol development, (b) training of field staff, (c) certification of field staff in a laboratory setting, (d) implementation in a child-care setting, and (e) certification of field staff in a child-care setting. During the certification phases, methods were used to assess the accuracy and reliability of all observers at estimating types and amounts of food and beverages commonly served in child care. Tests of agreement show strong agreement among five observers, as well as strong accuracy between the observers and 20 measured portions of foods and beverages, with a mean intraclass correlation coefficient value of 0.99. This structured observation system shows promise as a valid and reliable approach for assessing dietary intake of children in child care and makes a valuable contribution to the growing body of literature on the dietary assessment of young children.
2017-01-01
Objective To compare the diagnostic accuracy of transvaginal ultrasound (TVS) and magnetic resonance imaging (MRI) for detecting myometrial infiltration (MI) in endometrial carcinoma. Methods An extensive search of papers comparing TVS and MRI in assessing MI in endometrial cancer was performed in MEDLINE (PubMed), Web of Science, and the Cochrane Database from January 1989 to January 2017. Quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Results Our extended search identified 747 citations, and after exclusions we finally included 8 articles in the meta-analysis. The risk of bias for most studies was low across the 4 domains assessed in QUADAS-2. Overall, pooled estimated sensitivity and specificity for diagnosing deep MI were 75% (95% confidence interval [CI]=67%–82%) and 82% (95% CI=75%–93%) for TVS, and 83% (95% CI=76%–89%) and 82% (95% CI=72%–89%) for MRI, respectively. No statistical differences were found when comparing both methods (p=0.314). Heterogeneity was low for sensitivity and high for specificity for both TVS and MRI. Conclusion MRI showed a better sensitivity than TVS for detecting deep MI in women with endometrial cancer. However, the difference observed was not statistically significant. PMID:29027404
Empirical evaluation of data normalization methods for molecular classification.
Huang, Huei-Chung; Qin, Li-Xuan
2018-01-01
Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated in an independent test data set. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.
NASA Astrophysics Data System (ADS)
Wang, X.; Xu, L.
2018-04-01
One of the most important applications of remote sensing classification is water extraction. The water index (WI) based on Landsat images is one of the most common ways to distinguish water bodies from other land surface features. But conventional WI methods take into account spectral information from only a limited number of bands, and therefore the accuracy of those WI methods may be constrained in some areas covered with snow/ice, clouds, etc. An accurate and robust water extraction method is therefore the key issue at present. The support vector machine (SVM), which uses spectral information from all bands, can reduce these classification errors to some extent. Nevertheless, SVM, which barely considers spatial information, is relatively sensitive to noise in local regions. Conditional random field (CRF), which considers both spatial information and spectral information, has proven able to compensate for these limitations. Hence, in this paper, we develop a systematic water extraction method by taking advantage of the complementarity between the SVM and a water index-guided stochastic fully-connected conditional random field (SVM-WIGSFCRF) to address the above issues. In addition, we comprehensively evaluate the reliability and accuracy of the proposed method using Landsat-8 operational land imager (OLI) images of one test site. We assess the method's performance by calculating the following accuracy metrics: omission errors (OE) and commission errors (CE); Kappa coefficient (KP) and total error (TE). Experimental results show that the new method can improve target detection accuracy under complex and changeable environments.
Zanderigo, Francesca; Sparacino, Giovanni; Kovatchev, Boris; Cobelli, Claudio
2007-09-01
The aim of this article was to use continuous glucose error-grid analysis (CG-EGA) to assess the accuracy of two time-series modeling methodologies recently developed to predict glucose levels ahead of time using continuous glucose monitoring (CGM) data. We considered subcutaneous time series of glucose concentration monitored every 3 minutes for 48 hours by the minimally invasive CGM sensor Glucoday® (Menarini Diagnostics, Florence, Italy) in 28 type 1 diabetic volunteers. Two prediction algorithms, based on first-order polynomial and autoregressive (AR) models, respectively, were considered with prediction horizons of 30 and 45 minutes and forgetting factors (ff) of 0.2, 0.5, and 0.8. CG-EGA was used on the predicted profiles to assess their point and dynamic accuracies using original CGM profiles as reference. Continuous glucose error-grid analysis showed that the accuracy of both prediction algorithms is overall very good and that their performance is similar from a clinical point of view. However, the AR model seems preferable for hypoglycemia prevention. CG-EGA also suggests that, irrespective of the time-series model, the use of ff = 0.8 yields the most accurate readings in all glucose ranges. For the first time, CG-EGA is proposed as a tool to assess clinically relevant performance of a prediction method separately at hypoglycemia, euglycemia, and hyperglycemia. In particular, we have shown that CG-EGA can be helpful in comparing different prediction algorithms, as well as in optimizing their parameters.
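As a rough illustration of the first-order polynomial predictor with a forgetting factor, here is a hedged Python sketch: samples k steps back are down-weighted by ff^k and a weighted least-squares line is extrapolated to the prediction horizon. The function name and the exact weighting scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def predict_glucose(y, ts=3.0, horizon=30.0, ff=0.8):
    """Weighted first-order polynomial extrapolation of a CGM series.
    y: past glucose samples (mg/dl), one every `ts` minutes, newest last.
    Samples k steps back are down-weighted by the forgetting factor ff**k."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y)) * ts                 # sample times in minutes
    k = (len(y) - 1) - np.arange(len(y))       # steps back from the newest
    w = ff ** k                                # exponential forgetting weights
    # weighted least-squares fit of y = a + b*t
    W, Wt, Wtt = w.sum(), (w * t).sum(), (w * t * t).sum()
    Wy, Wty = (w * y).sum(), (w * t * y).sum()
    b = (W * Wty - Wt * Wy) / (W * Wtt - Wt ** 2)
    a = (Wy - b * Wt) / W
    return a + b * (t[-1] + horizon)           # extrapolate to the horizon

# A perfectly linear toy series (2 mg/dl per minute, sampled every 3 min)
pred = predict_glucose([100, 106, 112, 118, 124], ts=3.0, horizon=30.0)
```

A small ff forgets old samples quickly and tracks trend changes; ff = 0.8 smooths more, which matches the abstract's finding that it gave the most accurate readings.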
Duncan C. Lutes
2002-01-01
The line transect method, its underlying assumptions, and the spatial patterning of down and standing pieces of dead wood were examined at the Tenderfoot Creek Experimental Forest in central Montana. The accuracy of the line transect method was not determined due to conflicting results of t-tests and ordinary least squares regression. In most instances down pieces were...
Fuller, Douglas O; Parenti, Michael S; Gad, Adel M; Beier, John C
2012-01-01
Irrigation along the Nile River has resulted in dramatic changes in the biophysical environment of Upper Egypt. In this study we used a combination of MODIS 250 m NDVI data and Landsat imagery to identify areas that changed from 2001-2008 as a result of irrigation and water-level fluctuations in the Nile River and nearby water bodies. We used two different methods of time series analysis, principal components (PCA) and harmonic decomposition (HD), applied to the MODIS 250 m NDVI images to derive simple three-class land cover maps and then assessed their accuracy using a set of reference polygons derived from 30 m Landsat 5 and 7 imagery. We analyzed our MODIS 250 m maps against a new MODIS global land cover product (MOD12Q1 collection 5) to assess whether regionally specific mapping approaches are superior to a standard global product. Results showed that the accuracy of the PCA-based product was greater than the accuracy of either the HD or MOD12Q1 products for the years 2001, 2003, and 2008. However, the accuracy of the PCA product was only slightly better than the MOD12Q1 for 2001 and 2003. Overall, the results suggest that our PCA-based approach produces a high level of user and producer accuracies, although the MOD12Q1 product also showed consistently high accuracy. Overlay of 2001-2008 PCA-based maps showed a net increase of 12 129 ha of irrigated vegetation, with the largest increase found from 2006-2008 around the Districts of Edfu and Kom Ombo. This result was unexpected in light of ambitious government plans to develop 336 000 ha of irrigated agriculture around the Toshka Lakes.
Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment
Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...
Rosa, Marta; Micciarelli, Marco; Laio, Alessandro; Baroni, Stefano
2016-09-13
We introduce a method to evaluate the relative populations of different conformers of molecular species in solution, aiming at quantum mechanical accuracy, while keeping the computational cost at a nearly molecular-mechanics level. This goal is achieved by combining long classical molecular-dynamics simulations to sample the free-energy landscape of the system, advanced clustering techniques to identify the most relevant conformers, and thermodynamic perturbation theory to correct the resulting populations, using quantum-mechanical energies from density functional theory. A quantitative criterion for assessing the accuracy thus achieved is proposed. The resulting methodology is demonstrated in the specific case of cyanin (cyanidin-3-glucoside) in water solution.
Feng, Yong; Chen, Aiqing
2017-01-01
This study aimed to quantify blood pressure (BP) measurement accuracy and variability with different techniques. Thirty video clips of BP recordings from the BHS training database were converted to Korotkoff sound waveforms. Ten observers with no medical training were asked to determine BPs using (a) the traditional manual auscultatory method and (b) a visual auscultation method based on visualizing the Korotkoff sound waveform; the procedure was repeated three times on different days. The measurement error was calculated against the reference answers, and the measurement variability was calculated from the SD of the three repeats. Statistical analysis showed that, in comparison with the auscultatory method, the visual method significantly reduced overall variability from 2.2 to 1.1 mmHg for SBP and from 1.9 to 0.9 mmHg for DBP (both p < 0.001). It also showed that BP measurement errors were significant for both techniques (all p < 0.01, except DBP from the traditional method). Although significant, the overall mean errors were small (−1.5 and −1.2 mmHg for SBP and −0.7 and 2.6 mmHg for DBP, respectively, from the traditional auscultatory and visual auscultation methods). In conclusion, the visual auscultation method achieved an acceptable degree of BP measurement accuracy, with smaller variability than the traditional auscultatory method. PMID:29423405
Bailey, Timothy S.; Klaff, Leslie J.; Wallace, Jane F.; Greene, Carmine; Pardo, Scott; Harrison, Bern; Simmons, David A.
2016-01-01
Background: As blood glucose monitoring system (BGMS) accuracy is based on comparison of BGMS and laboratory reference glucose analyzer results, reference instrument accuracy is important to discriminate small differences between BGMS and reference glucose analyzer results. Here, we demonstrate the important role of reference glucose analyzer accuracy in BGMS accuracy evaluations. Methods: Two clinical studies assessed the performance of a new BGMS, using different reference instrument procedures. BGMS and YSI analyzer results were compared for fingertip blood that was obtained by untrained subjects’ self-testing and study staff testing, respectively. YSI analyzer accuracy was monitored using traceable serum controls. Results: In study 1 (N = 136), 94.1% of BGMS results were within International Organization for Standardization (ISO) 15197:2013 accuracy criteria; YSI analyzer serum control results showed a negative bias (−0.64% to −2.48%) at the first site and a positive bias (3.36% to 6.91%) at the other site. In study 2 (N = 329), 97.8% of BGMS results were within accuracy criteria; serum controls showed minimal bias (<0.92%) at both sites. Conclusions: These findings suggest that the ability to demonstrate that a BGMS meets accuracy guidelines is influenced by reference instrument accuracy. PMID:26902794
Accuracy of piezoelectric pedometer and accelerometer step counts.
Cruz, Joana; Brooks, Dina; Marques, Alda
2017-04-01
This study aimed to assess the step-count accuracy of a piezoelectric pedometer (Yamax PW/EX-510), when worn at different body parts, and a triaxial accelerometer (GT3X+); to compare device accuracy; and to identify the preferred location(s) to wear a pedometer. Sixty-three healthy adults (45.8±20.6 years old) wore 7 pedometers (neck, lateral right and left of the waist, front right and left of the waist, front pockets of the trousers) and 1 accelerometer (over the right hip), while walking 120 m at slow, self-preferred/normal and fast paces. Steps were recorded. Participants identified their preferred location(s) to wear the pedometer. Absolute percent error (APE) and the Bland and Altman (BA) method were used to assess device accuracy (criterion measure: manual counts), and the BA method was used for device comparisons. Pedometer APE was below 3% at normal and fast paces regardless of wearing location, but higher at slow pace (4.5-9.1%). Pedometers were more accurate at the front waist and inside the pockets. Accelerometer APE was higher than pedometer APE (P<0.05); nevertheless, limits of agreement between devices were relatively small. Preferred wearing locations were inside the front right (N.=25) and left (N.=20) pockets of the trousers. Yamax PW/EX-510 pedometers may be preferable to GT3X+ accelerometers for counting steps, as they provide more accurate results. These pedometers should be worn at the front right or left positions of the waist or inside the front pockets of the trousers.
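The absolute percent error (APE) used as the accuracy criterion above is simply the device count's deviation from the manual count, expressed as a percentage. A minimal sketch (the counts are hypothetical, not from the study):

```python
def absolute_percent_error(device_steps, manual_steps):
    """APE of a device step count against the manually counted criterion."""
    return abs(device_steps - manual_steps) / manual_steps * 100.0

# Hypothetical example: a pedometer records 194 steps over a
# manually counted 200-step walk.
ape = absolute_percent_error(194, 200)
```

An APE below 3%, as reported for the pedometers at normal and fast paces, corresponds to at most 6 missed or extra steps per 200 actual steps.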
Chagpar, Anees B.; Middleton, Lavinia P.; Sahin, Aysegul A.; Dempsey, Peter; Buzdar, Aman U.; Mirza, Attiqa N.; Ames, Fredrick C.; Babiera, Gildy V.; Feig, Barry W.; Hunt, Kelly K.; Kuerer, Henry M.; Meric-Bernstam, Funda; Ross, Merrick I.; Singletary, S Eva
2006-01-01
Objective: To assess the accuracy of physical examination, ultrasonography, and mammography in predicting residual size of breast tumors following neoadjuvant chemotherapy. Background: Neoadjuvant chemotherapy is an accepted part of the management of stage II and III breast cancer. Accurate prediction of residual pathologic tumor size after neoadjuvant chemotherapy is critical in guiding surgical therapy. Although physical examination, ultrasonography, and mammography have all been used to predict residual tumor size, there have been conflicting reports about the accuracy of these methods in the neoadjuvant setting. Methods: We reviewed the records of 189 patients who participated in 1 of 2 protocols using doxorubicin-containing neoadjuvant chemotherapy, and who had assessment by physical examination, ultrasonography, and/or mammography no more than 60 days before their surgical resection. Size correlations were performed using Spearman rho analysis. Clinical and pathologic measurements were also compared categorically using the weighted kappa statistic. Results: Size estimates by physical examination, ultrasonography, and mammography were only moderately correlated with residual pathologic tumor size after neoadjuvant chemotherapy (correlation coefficients: 0.42, 0.42, and 0.41, respectively), with an accuracy of ±1 cm in 66% of patients by physical examination, 75% by ultrasonography, and 70% by mammography. Kappa values (0.24–0.35) indicated poor agreement between clinical and pathologic measurements. Conclusion: Physical examination, ultrasonography, and mammography were only moderately useful for predicting residual pathologic tumor size after neoadjuvant chemotherapy. PMID:16432360
Methods for assessing the quality of data in public health information systems: a critical review.
Chen, Hong; Yu, Ping; Hailey, David; Wang, Ning
2014-01-01
The quality of data in public health information systems can be ensured by effective data quality assessment. In order to conduct effective data quality assessment, measurable data attributes have to be precisely defined. Then reliable and valid measurement methods for data attributes have to be used to measure each attribute. We conducted a systematic review of data quality assessment methods for public health using major databases and well-known institutional websites. 35 studies were eligible for inclusion in the study. A total of 49 attributes of data quality were identified from the literature. Completeness, accuracy and timeliness were the three most frequently assessed attributes of data quality. Most studies directly examined data values. This is complemented by exploring either data users' perception or documentation quality. However, there are limitations of current data quality assessment methods: a lack of consensus on attributes measured; inconsistent definition of the data quality attributes; a lack of mixed methods for assessing data quality; and inadequate attention to reliability and validity. Removal of these limitations is an opportunity for further improvement.
Xue, Xiaonan; Kim, Mimi Y; Castle, Philip E; Strickler, Howard D
2014-03-01
Studies to evaluate clinical screening tests often face the problem that the "gold standard" diagnostic approach is costly and/or invasive. It is therefore common to verify only a subset of negative screening tests using the gold standard method. However, undersampling the screen negatives can lead to substantial overestimation of the sensitivity and underestimation of the specificity of the diagnostic test. Our objective was to develop a simple and accurate statistical method to address this "verification bias." We developed a weighted generalized estimating equation approach to estimate, in a single model, the accuracy (eg, sensitivity/specificity) of multiple assays and simultaneously compare results between assays while addressing verification bias. This approach can be implemented using standard statistical software. Simulations were conducted to assess the proposed method. An example is provided using a cervical cancer screening trial that compared the accuracy of human papillomavirus and Pap tests, with histologic data as the gold standard. The proposed approach performed well in estimating and comparing the accuracy of multiple assays in the presence of verification bias. The proposed approach is an easy to apply and accurate method for addressing verification bias in studies of multiple screening methods. Copyright © 2014 Elsevier Inc. All rights reserved.
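The core of the weighting idea in the abstract above can be illustrated without the full estimating-equation machinery: if only a fraction of screen-negatives receive gold-standard verification, each verified screen-negative is up-weighted by the inverse of that fraction before sensitivity and specificity are computed. This is a simplified inverse-probability-weighting sketch, not the authors' GEE model; all counts and names are hypothetical.

```python
def weighted_accuracy(tp, fp, fn_v, tn_v, verified_frac_neg):
    """Verification-bias-corrected sensitivity and specificity.
    tp, fp: diseased / non-diseased among screen-positives (all verified).
    fn_v, tn_v: diseased / non-diseased among the verified screen-negatives.
    verified_frac_neg: fraction of screen-negatives sent to the gold standard."""
    w = 1.0 / verified_frac_neg           # weight for each verified negative
    sens = tp / (tp + w * fn_v)
    spec = (w * tn_v) / (w * tn_v + fp)
    return sens, spec

# Hypothetical trial: 90 tp and 180 fp among screen-positives; 10% of the
# 730 screen-negatives verified, yielding 1 diseased and 72 non-diseased.
sens, spec = weighted_accuracy(90, 180, 1, 72, verified_frac_neg=0.1)
# The naive, unweighted sensitivity 90/91 would be a large overestimate,
# exactly the bias the abstract describes.
```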
Fraysse, François; Thewlis, Dominic
2014-11-07
Numerous methods exist to estimate the pose of the axes of rotation of the forearm. These include anatomical definitions, such as the conventions proposed by the ISB, and functional methods based on instantaneous helical axes, which are commonly accepted as the modelling gold standard for non-invasive, in-vivo studies. We investigated the validity of a third method, based on regression equations, to estimate the rotation axes of the forearm. We also assessed the accuracy of both ISB methods. Axes obtained from a functional method were considered as the reference. Results indicate a large inter-subject variability in the axes positions, in accordance with previous studies. Both ISB methods gave the same level of accuracy in axes position estimations. Regression equations seem to improve estimation of the flexion-extension axis but not the pronation-supination axis. Overall, given the large inter-subject variability, the use of regression equations cannot be recommended. Copyright © 2014 Elsevier Ltd. All rights reserved.
Zand, Kevin A.; Shah, Amol; Heba, Elhamy; Wolfson, Tanya; Hamilton, Gavin; Lam, Jessica; Chen, Joshua; Hooker, Jonathan C.; Gamst, Anthony C.; Middleton, Michael S.; Schwimmer, Jeffrey B.; Sirlin, Claude B.
2015-01-01
Purpose To assess accuracy of magnitude-based magnetic resonance imaging (M-MRI) in children to estimate hepatic proton density fat fraction (PDFF) using two to six echoes, with magnetic resonance spectroscopy (MRS)-measured PDFF as a reference standard. Materials and Methods This was an IRB-approved, HIPAA-compliant, single-center, cross-sectional, retrospective analysis of data collected prospectively between 2008 and 2013 in children with known or suspected non-alcoholic fatty liver disease (NAFLD). Two hundred and eighty-six children (8 – 20 [mean 14.2 ± 2.5] yrs; 182 boys) underwent same-day MRS and M-MRI. Unenhanced two-dimensional axial spoiled gradient-recalled-echo images at six echo times were obtained at 3T after a single low-flip-angle (10°) excitation with ≥ 120-ms recovery time. Hepatic PDFF was estimated using the first two, three, four, five, and all six echoes. For each number of echoes, accuracy of M-MRI to estimate PDFF was assessed by linear regression with MRS-PDFF as reference standard. Accuracy metrics were regression intercept, slope, average bias, and R2. Results MRS-PDFF ranged from 0.2 – 40.4% (mean 13.1 ± 9.8%). Using three to six echoes, regression intercept, slope, and average bias were 0.46 – 0.96%, 0.99 – 1.01, and 0.57 – 0.89%, respectively. Using two echoes, these values were 2.98%, 0.97, and 2.72%, respectively. R2 ranged 0.98 – 0.99 for all methods. Conclusion Using three to six echoes, M-MRI has high accuracy for hepatic PDFF estimation in children. PMID:25847512
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
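The resampling-simulation idea above, taking every k-th point of a complete census and comparing the resulting estimate to the census value, can be sketched for the lineal-extent estimate as follows. The census here is a made-up metre-by-metre 0/1 impact record, not the park data.

```python
def lineal_extent_estimate(census, interval, offset=0):
    """Estimated affected trail length from systematic point sampling.
    census: one 0/1 impact flag per metre of trail.
    interval: sampling interval in metres."""
    points = census[offset::interval]
    # proportion of sampled points with impact, scaled to trail length
    return sum(points) / len(points) * len(census)

# Hypothetical 1000 m trail with the first 300 m affected.
census = [1] * 300 + [0] * 700
estimate = lineal_extent_estimate(census, interval=100)  # 10 sample points
```

Estimating the frequency of occurrence (the count of distinct affected segments) from such sparse points degrades much faster as the interval grows, consistent with the study's finding that point sampling suits lineal extent but not frequency.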
Development of ultrasonic methods for hemodynamic measurements
NASA Technical Reports Server (NTRS)
Histand, M. B.; Miller, C. W.; Wells, M. K.; Mcleod, F. D.; Greene, E. R.; Winter, D.
1975-01-01
A transcutaneous method to measure instantaneous mean blood flow in peripheral arteries of the human body was defined. Transcutaneous and implanted cuff ultrasound velocity measurements were evaluated, and the accuracies of velocity, flow, and diameter measurements were assessed for steady flow. Performance criteria were established for the pulsed Doppler velocity meter (PUDVM), and performance tests were conducted. Several improvements are suggested.
A Comparative Study on Electronic versus Traditional Data Collection in a Special Education Setting
ERIC Educational Resources Information Center
Ruf, Hernan Dennis
2012-01-01
The purpose of the current study was to determine the efficiency of an electronic data collection method compared to a traditional paper-based method in the educational field, in terms of the accuracy of data collected and the time required to do it. In addition, data were collected to assess users' preference and system usability. The study…
USDA-ARS?s Scientific Manuscript database
Structure-from-motion (SfM) photogrammetry from unmanned aircraft system (UAS) imagery is an emerging tool for repeat topographic surveying of dryland erosion. These methods are particularly appealing due to the ability to cover large landscapes compared to field methods and at reduced costs and hig...
NASA Astrophysics Data System (ADS)
Xu, Jun; Kong, Fan
2018-05-01
Extreme value distribution (EVD) evaluation is a critical topic in reliability analysis of nonlinear structural dynamic systems. In this paper, a new method is proposed to obtain the EVD. The maximum entropy method (MEM) with fractional moments as constraints is employed to derive the entire range of the EVD. Then, an adaptive cubature formula is proposed for assessing the fractional moments involved in the MEM, which is closely related to the efficiency and accuracy of the reliability analysis. Three point sets, which include a total of 2d² + 1 integration points in dimension d, are generated in the proposed formula. In this regard, the efficiency of the proposed formula is ensured. Besides, a "free" parameter is introduced, which makes the proposed formula adaptive with the dimension. The "free" parameter is determined by arranging one point set adjacent to the boundary of the hyper-sphere that contains the bulk of the total probability. In this regard, the tail distribution may be better reproduced and the fractional moments can be evaluated accurately. Finally, the proposed method is applied to a ten-storey shear frame structure under seismic excitations, which exhibits strong nonlinearity. The numerical results demonstrate the efficacy of the proposed method.
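For readers unfamiliar with fractional moments: the MEM constraints here are quantities of the form M_alpha = E[|X|^alpha] for non-integer alpha. A toy Monte Carlo estimator, standing in for the paper's adaptive cubature formula (the standard-normal target and sample count are arbitrary choices, not from the paper):

```python
import math
import random

def fractional_moment(samples, alpha):
    """Monte Carlo estimate of the fractional moment M_alpha = E[|X|^alpha]."""
    return sum(abs(x) ** alpha for x in samples) / len(samples)

random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(200_000)]

# For X ~ N(0,1): E[|X|^a] = 2^(a/2) * Gamma((a+1)/2) / sqrt(pi)
for a in (0.5, 1.0, 1.5):
    exact = 2 ** (a / 2) * math.gamma((a + 1) / 2) / math.sqrt(math.pi)
    print(f"alpha={a}: estimate={fractional_moment(samples, a):.4f}  exact={exact:.4f}")
```

The cubature formula in the paper replaces this sampling with 2d² + 1 deterministic integration points, trading sampling noise for a fixed, dimension-scaled cost.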
A framework for improving the cost-effectiveness of DSM program evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonnenblick, R.; Eto, J.
The prudence of utility demand-side management (DSM) investments hinges on their performance, yet evaluating performance is complicated because the energy saved by DSM programs can never be observed directly but only inferred. This study frames and begins to answer the following questions: (1) how well do current evaluation methods perform in improving confidence in the measurement of energy savings produced by DSM programs; (2) in view of this performance, how can limited evaluation resources be best allocated to maximize the value of the information they provide? The authors review three major classes of methods for estimating annual energy savings: tracking database (sometimes called engineering estimates), end-use metering, and billing analysis, and examine them in light of the uncertainties in current estimates of DSM program measure lifetimes. The authors assess the accuracy and precision of each method and construct trade-off curves to examine the costs of increases in accuracy or precision. Several approaches for improving evaluations for the purpose of assessing program cost effectiveness are demonstrated. The methods can be easily generalized to other evaluation objectives, such as shared savings incentive payments.
Heidelberg Retina Tomograph 3 machine learning classifiers for glaucoma detection
Townsend, K A; Wollstein, G; Danks, D; Sung, K R; Ishikawa, H; Kagemann, L; Gabriele, M L; Schuman, J S
2010-01-01
Aims To assess performance of classifiers trained on Heidelberg Retina Tomograph 3 (HRT3) parameters for discriminating between healthy and glaucomatous eyes. Methods Classifiers were trained using HRT3 parameters from 60 healthy subjects and 140 glaucomatous subjects. The classifiers were trained on all 95 variables and smaller sets created with backward elimination. Seven types of classifiers, including Support Vector Machines with radial basis (SVM-radial), and Recursive Partitioning and Regression Trees (RPART), were trained on the parameters. The area under the ROC curve (AUC) was calculated for classifiers, individual parameters and HRT3 glaucoma probability scores (GPS). Classifier AUCs and leave-one-out accuracy were compared with the highest individual parameter and GPS AUCs and accuracies. Results The highest AUC and accuracy for an individual parameter were 0.848 and 0.79, for vertical cup/disc ratio (vC/D). For GPS, global GPS performed best with AUC 0.829 and accuracy 0.78. SVM-radial with all parameters showed significant improvement over global GPS and vC/ D with AUC 0.916 and accuracy 0.85. RPART with all parameters provided significant improvement over global GPS with AUC 0.899 and significant improvement over global GPS and vC/D with accuracy 0.875. Conclusions Machine learning classifiers of HRT3 data provide significant enhancement over current methods for detection of glaucoma. PMID:18523087
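The comparison above rests on the area under the ROC curve (AUC); a self-contained sketch of AUC computed via the Mann-Whitney statistic, with invented score/label data in place of HRT3 parameters:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example: a single continuous parameter used as a classifier score,
# with label 1 = glaucomatous, 0 = healthy
labels = [1, 1, 1, 0, 0, 0]
scores = [0.82, 0.74, 0.41, 0.55, 0.30, 0.22]
print(f"AUC = {auc(scores, labels):.3f}")
```

An AUC of 1.0 means every glaucomatous eye scores above every healthy eye; 0.5 is chance performance, which is why the SVM-radial AUC of 0.916 versus 0.848 for the best single parameter is a meaningful gain.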
NASA Astrophysics Data System (ADS)
Kamal, Muhammad; Johansen, Kasper
2017-10-01
Effective mangrove management requires spatially explicit information, such as a mangrove tree crown map, as a basis for ecosystem diversity studies and health assessment. Accuracy assessment is an integral part of any mapping activity to measure the effectiveness of the classification approach. In geographic object-based image analysis (GEOBIA), assessment of the geometric accuracy (shape, symmetry and location) of the image objects created by image segmentation is required. In this study we used an explicit area-based accuracy assessment to measure the degree of similarity between the classification results and reference data from different aspects, including overall quality (OQ), user's accuracy (UA), producer's accuracy (PA) and overall accuracy (OA). We developed a rule set to delineate the mangrove tree crowns using a WorldView-2 pan-sharpened image. The reference map was obtained by visual delineation of the mangrove tree crown boundaries from a very high spatial resolution aerial photograph (7.5 cm pixel size). Ten random points with a 10 m radius circular buffer were created to calculate the area-based accuracy assessment. The resulting circular polygons were used to clip both the classified image objects and the reference map for area comparisons. In this case, the area-based accuracy assessment resulted in 64% and 68% for the OQ and OA, respectively. The overall quality shows the class-related area accuracy; that is, the area correctly classified as tree crowns was 64% of the total tree crown area. The overall accuracy of 68%, on the other hand, was calculated as the percentage of all correctly classified classes (tree crowns and canopy gaps) relative to the total class area (the entire image). Overall, the area-based accuracy assessment was simple to implement and easy to interpret. It also shows explicitly the omission and commission error variations of object boundary delineation with colour-coded polygons.
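The four area-based metrics can be written directly in terms of the overlap areas between classified and reference polygons; a sketch with hypothetical areas (the variable names are mine, not the paper's):

```python
def area_accuracy(tp, fp, fn, tn):
    """Area-based accuracy metrics from overlap areas (same units, e.g. m^2).
    tp: classified crown & reference crown; fp: crown in the map only;
    fn: crown in the reference only; tn: canopy gap in both."""
    oq = tp / (tp + fp + fn)               # overall quality of the crown class
    ua = tp / (tp + fp)                    # user's accuracy (commission side)
    pa = tp / (tp + fn)                    # producer's accuracy (omission side)
    oa = (tp + tn) / (tp + fp + fn + tn)   # overall accuracy over both classes
    return oq, ua, pa, oa

# Invented overlap areas within the assessment buffers, in m^2
oq, ua, pa, oa = area_accuracy(tp=640.0, fp=180.0, fn=180.0, tn=460.0)
print(f"OQ={oq:.2f} UA={ua:.2f} PA={pa:.2f} OA={oa:.2f}")
```

Note that OQ penalizes both commission and omission in one number, while OA credits correctly classified gap area as well, which is why OA can exceed OQ as in the study (68% vs 64%).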
Fecal Indicator Bacteria and Environmental Observations: Validation of Virtual Beach
Contamination of recreational waters by fecal material is often assessed using indicator bacteria such as enterococci. Enumeration based on culturing methods can take up to 48 hours to complete, limiting the accuracy of water quality evaluations. Molecular microbial techniques em...
LANDSAT-4 Science Characterization Early Results. Volume 3, Part 2: Thematic Mapper (TM)
NASA Technical Reports Server (NTRS)
Barker, J. L. (Editor)
1985-01-01
The calibration of the LANDSAT 4 thematic mapper is discussed as well as the atmospheric, radiometric, and geometric accuracy and correction of data obtained with this sensor. Methods are given for assessing TM band to band registration.
Thematic and positional accuracy assessment of digital remotely sensed data
Russell G. Congalton
2007-01-01
Accuracy assessment or validation has become a standard component of any land cover or vegetation map derived from remotely sensed data. Knowing the accuracy of the map is vital to any decisionmaking performed using that map. The process of assessing the map accuracy is time consuming and expensive. It is very important that the procedure be well thought out and...
Otitis Media Diagnosis for Developing Countries Using Tympanic Membrane Image-Analysis.
Myburgh, Hermanus C; van Zijl, Willemien H; Swanepoel, DeWet; Hellström, Sten; Laurent, Claude
2016-03-01
Otitis media is one of the most common childhood diseases worldwide, but because of a lack of doctors and health personnel in developing countries it is often misdiagnosed or not diagnosed at all. This may lead to serious and life-threatening complications. There is thus a need for an automated, computer-based image-analyzing system that could assist in making accurate otitis media diagnoses anywhere. A method for automated diagnosis of otitis media is proposed. The method uses image-processing techniques to classify otitis media. The system is trained using high-quality pre-assessed images of tympanic membranes, captured by digital video-otoscopes, and classifies undiagnosed images into five otitis media categories based on predefined signs. Several verification tests analyzed the classification capability of the method. An accuracy of 80.6% was achieved for images taken with commercial video-otoscopes, while an accuracy of 78.7% was achieved for images captured on-site with a low-cost custom-made video-otoscope. The high accuracy of the proposed otitis media classification system compares well with the classification accuracy of general practitioners and pediatricians (~64% to 80%) using traditional otoscopes, and therefore holds promise for the future in making automated diagnoses of otitis media in medically underserved populations.
Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics
Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris
2016-01-01
As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission; the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, sero-conversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314
Laboratory and field based evaluation of chromatography ...
The Monitor for AeRosols and GAses in ambient air (MARGA) is an on-line ion-chromatography-based instrument designed for speciation of the inorganic gas and aerosol ammonium-nitrate-sulfate system. Previous work to characterize the performance of the MARGA has been primarily based on field comparison to other measurement methods to evaluate accuracy. While such studies are useful, the underlying reasons for disagreement among methods are not always clear. This study examines aspects of MARGA accuracy and precision specifically related to automated chromatography analysis. Using laboratory standards, analytical accuracy, precision, and method detection limits derived from the MARGA chromatography software are compared to an alternative software package (Chromeleon, Thermo Scientific Dionex). Field measurements are used to further evaluate instrument performance, including the MARGA’s use of an internal LiBr standard to control accuracy. Using gas/aerosol ratios and aerosol neutralization state as a case study, the impact of chromatography on measurement error is assessed. The new generation of on-line chromatography-based gas and particle measurement systems have many advantages, including simultaneous analysis of multiple pollutants. The Monitor for Aerosols and Gases in Ambient Air (MARGA) is such an instrument that is used in North America, Europe, and Asia for atmospheric process studies as well as routine monitoring. While the instrument has been evaluat
Application of 3D-MR image registration to monitor diseases around the knee joint.
Takao, Masaki; Sugano, Nobuhiko; Nishii, Takashi; Miki, Hidenobu; Koyama, Tsuyoshi; Masumoto, Jun; Sato, Yoshinobu; Tamura, Shinichi; Yoshikawa, Hideki
2005-11-01
To estimate the accuracy and consistency of a method using a voxel-based MR image registration algorithm for precise monitoring of knee joint diseases. Rigid body transformation was calculated using a normalized cross-correlation (NCC) algorithm involving simple manual segmentation of the bone region based on its anatomical features. The accuracy of registration was evaluated using four phantoms, followed by a consistency test using MR data from 11 patients with knee joint disease. The registration accuracy in the phantom experiment was 0.49 ± 0.19 mm (SD) for the femur and 0.56 ± 0.21 mm (SD) for the tibia. The consistency value in the experiment using clinical data was 0.69 ± 0.25 mm (SD) for the femur and 0.77 ± 0.37 mm (SD) for the tibia. These values were all smaller than a voxel (1.25 x 1.25 x 1.5 mm). The present method based on an NCC algorithm can be used to register serial MR images of the knee joint with error on the order of a sub-voxel. This method would be useful for precisely assessing therapeutic response and monitoring knee joint diseases. J. Magn. Reson. Imaging 2005. (c) 2005 Wiley-Liss, Inc.
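A 1-D stand-in for the NCC matching criterion used in the registration: compute the normalized cross-correlation at candidate offsets and keep the best-matching shift. This is only an illustration; the study's rigid-body registration searches over 3-D rotations and translations of voxel data.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def best_shift(fixed, moving, max_shift):
    """Integer shift (in samples) that maximizes NCC between two profiles."""
    n = len(fixed)
    return max(range(-max_shift, max_shift + 1),
               key=lambda s: ncc(fixed[max_shift:n - max_shift],
                                 moving[max_shift + s:n - max_shift + s]))
```

NCC is insensitive to global brightness and contrast differences between scans, which is what makes it suitable for registering serial MR acquisitions.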
QuickBird and OrbView-3 Geopositional Accuracy Assessment
NASA Technical Reports Server (NTRS)
Helder, Dennis; Ross, Kenton
2006-01-01
Objective: Compare vendor-provided image coordinates with known references visible in the imagery. Approach: Use multiple, well-characterized sites with >40 ground control points (GCPs); sites that are a) Well distributed; b) Accurately surveyed; and c) Easily found in imagery. Perform independent assessments with independent teams. Each team has slightly different measurement techniques and data processing methods. NASA Stennis Space Center. South Dakota State University.
Large Area Crop Inventory Experiment (LACIE). Phase 2 evaluation report
NASA Technical Reports Server (NTRS)
1977-01-01
Documentation of the activities of the Large Area Crop Inventory Experiment during the 1976 Northern Hemisphere crop year is presented. A brief overview of the experiment is included, as well as phase two area, yield, and production estimates for the United States Great Plains, Canada, and the Union of Soviet Socialist Republics spring and winter wheat regions. The accuracies of these estimates are compared with independent government estimates. Accuracy assessment of the United States Great Plains yardstick region based on a thorough blind site analysis is given, and reasons for variations in estimating performance are discussed. Other phase two technical activities, including operations, exploratory analysis, reporting, methods of assessment, phase three and advanced system design, technical issues, and developmental activities, are also covered.
NASA Technical Reports Server (NTRS)
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
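A minimal sketch of the accept/reject rule underlying acceptance sampling by variables, assuming a normally distributed quality characteristic with an upper specification limit U and an acceptability constant k (both values below are hypothetical, not from the NESC assessment):

```python
import statistics

def accept_by_variables(sample, upper_limit, k):
    """Single-sided variables plan: accept the lot if (U - xbar) / s >= k,
    assuming the quality characteristic is approximately normal."""
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)       # sample standard deviation (n - 1)
    return (upper_limit - xbar) / s >= k

# Invented measurements against an invented upper spec limit and k-factor
measurements = [9.2, 9.8, 10.1, 9.5, 9.9]
print("accept" if accept_by_variables(measurements, 12.0, 1.5) else "reject")
```

Because the rule uses the measured mean and spread rather than a pass/fail count, variables plans typically need far smaller samples than attributes plans for the same protection, which is the motivation stated in the abstract.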
NASA Astrophysics Data System (ADS)
Batic, Matej; Begalli, Marcia; Han, Min Cheol; Hauf, Steffen; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Han Sung; Grazia Pia, Maria; Saracco, Paolo; Weidenspointner, Georg
2014-06-01
A systematic review of methods and data for the Monte Carlo simulation of photon interactions is in progress: it concerns a wide set of theoretical modeling approaches and data libraries available for this purpose. Models and data libraries are assessed quantitatively with respect to an extensive collection of experimental measurements documented in the literature to determine their accuracy; this evaluation exploits rigorous statistical analysis methods. The computational performance of the associated modeling algorithms is evaluated as well. An overview of the assessment of photon interaction models and results of the experimental validation are presented.
Multipole moments in the effective fragment potential method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertoni, Colleen; Slipchenko, Lyudmila V.; Misquitta, Alston J.
In the effective fragment potential (EFP) method the Coulomb potential is represented using a set of multipole moments generated by the distributed multipole analysis (DMA) method. Misquitta, Stone, and Fazeli recently developed a basis space-iterated stockholder atom (BS-ISA) method to generate multipole moments. This study assesses the accuracy of the EFP interaction energies using sets of multipole moments generated from the BS-ISA method, and from several versions of the DMA method (such as analytic and numeric grid-based), with varying basis sets. Both methods lead to reasonable results, although using certain implementations of the DMA method can result in large errors. With respect to the CCSD(T)/CBS interaction energies, the mean unsigned error (MUE) of the EFP method for the S22 data set using BS-ISA-generated multipole moments and DMA-generated multipole moments (using a small basis set and the analytic DMA procedure) is 0.78 and 0.72 kcal/mol, respectively. Here, the MUE accuracy is on the same order as MP2 and SCS-MP2. The MUEs are lower than in a previous study benchmarking the EFP method without the EFP charge transfer term, demonstrating that the charge transfer term increases the accuracy of the EFP method. Regardless of the multipole moment method used, it is likely that much of the error is due to an insufficient short-range electrostatic term (i.e., charge penetration term), as shown by comparisons with symmetry-adapted perturbation theory.
Efficient alignment-free DNA barcode analytics.
Kuksa, Pavel; Pavlovic, Vladimir
2009-11-10
In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (the spectrum) of barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, which are important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that newly developed alignment-free methods for DNA barcoding can efficiently, and with high accuracy, identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
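The spectrum representation described above maps each barcode to a fixed-length k-mer count vector, after which sequence similarity is a simple dot product with no alignment step; a minimal sketch (k = 3 and the toy sequences are arbitrary choices):

```python
from collections import Counter
from itertools import product

def spectrum(seq, k=3):
    """Fixed-length k-mer count vector (the 'spectrum') of a DNA sequence."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts[m] for m in kmers]

def similarity(a, b, k=3):
    """Dot-product (kernel) similarity between two spectra; no alignment needed."""
    return sum(x * y for x, y in zip(spectrum(a, k), spectrum(b, k)))

# Toy barcodes: the first two share composition, the third does not
print(similarity("ACGTACGTAC", "ACGTACGTTT"))
print(similarity("ACGTACGTAC", "GGGGGGGGGG"))
```

Because the vector length is fixed at 4^k regardless of sequence length, classification and clustering can use standard vector-space methods, which is the source of the speed gains the abstract reports.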
Meade, Rhiana D; Murray, Anna L; Mittelman, Anjuliee M; Rayner, Justine; Lantagne, Daniele S
2017-02-01
Locally manufactured ceramic water filters are one effective household drinking water treatment technology. During manufacturing, silver nanoparticles or silver nitrate are applied to prevent microbiological growth within the filter and increase bacterial removal efficacy. Currently, there is no recommendation for manufacturers to test silver concentrations of application solutions or filtered water. We identified six commercially available silver test strips, kits, and meters, and evaluated them by: (1) measuring in quintuplicate six samples from 100 to 1,000 mg/L (application range) and six samples from 0.0 to 1.0 mg/L (effluent range) of silver nanoparticles and silver nitrate to determine accuracy and precision; (2) conducting volunteer testing to assess ease-of-use; and (3) comparing costs. We found no method accurately detected silver nanoparticles, and accuracy ranged from 4 to 91% measurement error for silver nitrate samples. Most methods were precise, but only one method could test both application and effluent concentration ranges of silver nitrate. Volunteers considered test strip methods easiest. The cost for 100 tests ranged from 36 to 1,600 USD. We found no currently available method accurately and precisely measured both silver types at reasonable cost and ease-of-use, thus these methods are not recommended to manufacturers. We recommend development of field-appropriate methods that accurately and precisely measure silver nanoparticle and silver nitrate concentrations.
Effect of genotyped cows in the reference population on the genomic evaluation of Holstein cattle.
Uemoto, Y; Osawa, T; Saburi, J
2017-03-01
This study evaluated the dependence of reliability and prediction bias on the prediction method, the contribution of including animals (bulls or cows), and the genetic relatedness, when including genotyped cows in the progeny-tested bull reference population. We performed genomic evaluation using a Japanese Holstein population, and assessed the accuracy of genomic enhanced breeding value (GEBV) for three production traits and 13 linear conformation traits. A total of 4564 animals for production traits and 4172 animals for conformation traits were genotyped using Illumina BovineSNP50 array. Single- and multi-step methods were compared for predicting GEBV in genotyped bull-only and genotyped bull-cow reference populations. No large differences in realized reliability and regression coefficient were found between the two reference populations; however, a slight difference was found between the two methods for production traits. The accuracy of GEBV determined by single-step method increased slightly when genotyped cows were included in the bull reference population, but decreased slightly by multi-step method. A validation study was used to evaluate the accuracy of GEBV when 800 additional genotyped bulls (POPbull) or cows (POPcow) were included in the base reference population composed of 2000 genotyped bulls. The realized reliabilities of POPbull were higher than those of POPcow for all traits. For the gain of realized reliability over the base reference population, the average ratios of POPbull gain to POPcow gain for production traits and conformation traits were 2.6 and 7.2, respectively, and the ratios depended on heritabilities of the traits. For regression coefficient, no large differences were found between the results for POPbull and POPcow. Another validation study was performed to investigate the effect of genetic relatedness between cows and bulls in the reference and test populations. 
The effect of genetic relationship among bulls in the reference population was also assessed. The results showed that it is important to account for relatedness among bulls in the reference population. Our studies indicate that the prediction method, the contribution ratio of including animals, and genetic relatedness could affect the prediction accuracy in genomic evaluation of Holstein cattle, when including genotyped cows in the reference population.
Empirical evaluation of data normalization methods for molecular classification
Huang, Huei-Chung
2018-01-01
Background Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers—an increasingly important application of microarrays in the era of personalized medicine. Methods In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. Results In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated in an independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Conclusion Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy. PMID:29666754
Improved scheme for Cross-track Infrared Sounder geolocation assessment and optimization
NASA Astrophysics Data System (ADS)
Wang, Likun; Zhang, Bin; Tremblay, Denis; Han, Yong
2017-01-01
An improved scheme for Cross-track Infrared Sounder (CrIS) geolocation assessment for all scan angles (from -48.5° to 48.5°) is developed in this study. The method uses spatially collocated radiance measurements from the Visible Infrared Imaging Radiometer Suite (VIIRS) image band I5 to evaluate the geolocation performance of the CrIS Sensor Data Records (SDR) by taking advantage of its high spatial resolution (375 m at nadir) and accurate geolocation. The basic idea is to perturb CrIS line-of-sight vectors along the in-track and cross-track directions to find a position where the CrIS and VIIRS data match more closely. The perturbation angles at this best-matched position are then used to evaluate the CrIS geolocation accuracy. More importantly, the new method is capable of performing postlaunch on-orbit geometric calibration by optimizing mapping angle parameters based on the assessment results, and thus can be further extended to the following CrIS sensors on new satellites. Finally, the proposed method is employed to evaluate the CrIS geolocation accuracy on the current Suomi National Polar-orbiting Partnership satellite. The error characteristics are revealed along the scan positions in the in-track and cross-track directions. It is found that there are relatively large errors (∼4 km) in the cross-track direction close to the end of scan positions. With newly updated mapping angles, the geolocation accuracy is greatly improved for all scan positions (less than 0.3 km). This brings CrIS and VIIRS into spatial alignment and thus benefits applications that need to combine CrIS and VIIRS measurements and products.
Jensen, Pamela K; Wujcik, Chad E; McGuire, Michelle K; McGuire, Mark A
2016-01-01
Simple high-throughput procedures were developed for the direct analysis of glyphosate [N-(phosphonomethyl)glycine] and aminomethylphosphonic acid (AMPA) in human and bovine milk and human urine matrices. Samples were extracted with an acidified aqueous solution on a high-speed shaker. Stable isotope labeled internal standards were added with the extraction solvent to ensure accurate tracking and quantitation. An additional cleanup procedure using partitioning with methylene chloride was required for milk matrices to minimize the presence of matrix components that can impact the longevity of the analytical column. Both analytes were analyzed directly, without derivatization, by liquid chromatography tandem mass spectrometry using two separate precursor-to-product transitions that ensure and confirm the accuracy of the measured results. Method performance was evaluated during validation through a series of assessments that included linearity, accuracy, precision, selectivity, ionization effects and carryover. Limits of quantitation (LOQ) were determined to be 0.1 and 10 µg/L (ppb) for urine and milk, respectively, for both glyphosate and AMPA. Mean recoveries for all matrices were within 89-107% at three separate fortification levels including the LOQ. Precision for replicates was ≤ 7.4% relative standard deviation (RSD) for milk and ≤ 11.4% RSD for urine across all fortification levels. All human and bovine milk samples used for selectivity and ionization effects assessments were free of any detectable levels of glyphosate and AMPA. Some of the human urine samples contained trace levels of glyphosate and AMPA, which were background subtracted for accuracy assessments. Ionization effects testing showed no significant biases from the matrix. A successful independent external validation was conducted using the more complicated milk matrices to demonstrate method transferability.
Uncertainty of fast biological radiation dose assessment for emergency response scenarios.
Ainsbury, Elizabeth A; Higueras, Manuel; Puig, Pedro; Einbeck, Jochen; Samaga, Daniel; Barquinero, Joan Francesc; Barrios, Lleonard; Brzozowska, Beata; Fattibene, Paola; Gregoire, Eric; Jaworska, Alicja; Lloyd, David; Oestreicher, Ursula; Romm, Horst; Rothkamm, Kai; Roy, Laurence; Sommer, Sylwester; Terzoudi, Georgia; Thierens, Hubert; Trompier, Francois; Vral, Anne; Woda, Clemens
2017-01-01
Reliable dose estimation is an important factor in appropriate dosimetric triage categorization of exposed individuals to support radiation emergency response. Following work done under the EU FP7 MULTIBIODOSE and RENEB projects, formal methods for defining uncertainties on biological dose estimates are compared using simulated and real data from recent exercises. The results demonstrate that a Bayesian method of uncertainty assessment is the most appropriate, even in the absence of detailed prior information. The relative accuracy and relevance of techniques for calculating uncertainty and combining assay results to produce single dose and uncertainty estimates is further discussed. Finally, it is demonstrated that whatever uncertainty estimation method is employed, ignoring the uncertainty on fast dose assessments can have an important impact on rapid biodosimetric categorization.
ERIC Educational Resources Information Center
Wade, Ros; Corbett, Mark; Eastwood, Alison
2013-01-01
Assessing the quality of included studies is a vital step in undertaking a systematic review. The recently revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool (QUADAS-2), which is the only validated quality assessment tool for diagnostic accuracy studies, does not include specific criteria for assessing comparative studies. As…
Mapping the Daily Progression of Large Wildland Fires Using MODIS Active Fire Data
NASA Technical Reports Server (NTRS)
Veraverbeke, Sander; Sedano, Fernando; Hook, Simon J.; Randerson, James T.; Jin, Yufang; Rogers, Brendan
2013-01-01
High temporal resolution information on burned area is a prerequisite for incorporating bottom-up estimates of wildland fire emissions in regional air transport models and for improving models of fire behavior. We used the Moderate Resolution Imaging Spectroradiometer (MODIS) active fire product (MO(Y)D14) as input to a kriging interpolation to derive continuous maps of the evolution of nine large wildland fires. For each fire, local input parameters for the kriging model were defined using variogram analysis. The accuracy of the kriging model was assessed using high resolution daily fire perimeter data available from the U.S. Forest Service. We also assessed the temporal reporting accuracy of the MODIS burned area products (MCD45A1 and MCD64A1). Averaged over the nine fires, the kriging method correctly mapped 73% of the pixels within the accuracy of a single day, compared to 33% for MCD45A1 and 53% for MCD64A1.
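The single-day agreement score used to evaluate the kriged maps can be sketched directly; a minimal sketch, assuming per-pixel day-of-burning values stored as flat lists with None marking unburned pixels (a hypothetical representation, not the MODIS product format):

```python
def fraction_within_one_day(pred_days, ref_days):
    """Fraction of burned reference pixels whose mapped day of burning falls
    within +/- 1 day of the reference perimeter data (one reading of
    'within the accuracy of a single day')."""
    hits, total = 0, 0
    for p, r in zip(pred_days, ref_days):
        if r is None:
            continue  # pixel not burned in the reference data
        total += 1
        if p is not None and abs(p - r) <= 1:
            hits += 1
    return hits / total
```

The same statistic can be applied to the MCD45A1 and MCD64A1 burned-area layers to reproduce the comparison reported above.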
Lotfy, Hayam M; Hegazy, Maha A; Rezk, Mamdouh R; Omran, Yasmin Rostom
2015-09-05
Smart spectrophotometric methods have been applied and validated for the simultaneous determination of a binary mixture of chloramphenicol (CPL) and prednisolone acetate (PA) without preliminary separation. Two novel methods have been developed; the first depends upon advanced absorbance subtraction (AAS), while the other relies on advanced amplitude modulation (AAM); these are in addition to the well-established dual wavelength (DW), ratio difference (RD) and constant center coupled with spectrum subtraction (CC-SS) methods. Accuracy, precision and linearity ranges of these methods were determined. Moreover, selectivity was assessed by analyzing synthetic mixtures of both drugs. The proposed methods were successfully applied to the assay of the drugs in their pharmaceutical formulations. No interference was observed from common additives, and the validity of the methods was tested. The obtained results were statistically compared to those of the official spectrophotometric methods, leading to the conclusion that there is no significant difference between the proposed methods and the official ones with respect to accuracy and precision. Copyright © 2015 Elsevier B.V. All rights reserved.
Men, Hong; Fu, Songlin; Yang, Jialin; Cheng, Meiqi; Shi, Yan
2018-01-01
Paraffin odor intensity is an important quality indicator when a paraffin inspection is performed. Currently, paraffin odor level assessment is mainly dependent on an artificial sensory evaluation. In this paper, we developed a paraffin odor analysis system to classify and grade four kinds of paraffin samples. The original feature set was optimized using Principal Component Analysis (PCA) and Partial Least Squares (PLS). Support Vector Machine (SVM), Random Forest (RF), and Extreme Learning Machine (ELM) were applied to three different feature data sets for classification and level assessment of paraffin. For classification, the model based on SVM, with an accuracy rate of 100%, was superior to that based on RF, with an accuracy rate of 98.33–100%, and ELM, with an accuracy rate of 98.01–100%. For level assessment, the R2 related to the training set was above 0.97 and the R2 related to the test set was above 0.87. Through comprehensive comparison, the generalization of the model based on ELM was superior to those based on SVM and RF. The scoring errors for the three models were 0.0016–0.3494, lower than the error of 0.5–1.0 measured by industry standard experts, meaning these methods have a higher prediction accuracy for scoring paraffin level. PMID:29346328
Objective and subjective olfaction across the schizophrenia spectrum.
Auster, Tracey L; Cohen, Alex S; Callaway, Dallas A; Brown, Laura A
2014-01-01
Much research indicates that patients with schizophrenia have impaired olfaction detection ability. However, studies of individuals with psychometrically defined schizotypy reveal mixed results: some document impairments, while others do not. In addition to deficits in objective accuracy in olfaction for patients with schizophrenia, there has been an interest in the subjective experience of olfaction. Unfortunately, methods of assessing accuracy and subjective hedonic olfactory evaluations in prior studies may not have been sensitive enough to detect group differences in this area. This study employed a measure of olfactory functioning featuring an expanded scoring system to assess both accuracy and subjective evaluations of pleasant and unpleasant experience. Data were collected for patients with schizophrenia, young adults with psychometrically defined schizotypy, psychiatric outpatients, and healthy controls. Results of this study indicate that both the schizophrenia and outpatient psychiatric groups showed similar levels of impaired olfaction ability; however, the schizotypy group was not impaired in olfaction detection. Interestingly, with regard to subjective hedonic evaluation, it was found that patients with schizophrenia did not differ from psychiatric outpatients, whereas individuals with schizotypy tended to rate smells as significantly less pleasant than healthy control participants. This suggests that subjective olfactory assessment is abnormal in some manner in schizotypy. It also suggests that impaired accuracy of olfaction identification may be characteristic of severe mental illness across diagnoses. The results are potentially important for understanding olfaction deficits across the mental illness spectrum.
Accuracy assessment of fluoroscopy-transesophageal echocardiography registration
NASA Astrophysics Data System (ADS)
Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.
2011-03-01
This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5 mm, which is within the clinical accuracy requirement of 5 mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.
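The RMS registration error reported above is the root-mean-square distance between corresponding fiducials after registration; a minimal sketch, assuming paired 3-D point lists (a hypothetical helper, not the authors' code):

```python
def rms_error(points_a, points_b):
    """Root-mean-square Euclidean distance between corresponding 3-D points,
    e.g. implanted fiducials located in US and fluoroscopy after registration."""
    total = 0.0
    for (ax, ay, az), (bx, by, bz) in zip(points_a, points_b):
        total += (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
    return (total / len(points_a)) ** 0.5
```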
de Albuquerque, Priscila Maria Nascimento Martins; de Alencar, Geisa Guimarães; de Oliveira, Daniela Araújo; de Siqueira, Gisela Rocha
2018-01-01
The aim of this study was to examine and interpret the concordance, accuracy, and reliability of photogrammetric protocols available in the literature for evaluating cervical lordosis in an adult population aged 18 to 59 years. A systematic search of 6 electronic databases (MEDLINE via PubMed, LILACS, CINAHL, Scopus, ScienceDirect, and Web of Science) located studies that assessed the reliability and/or concordance and/or accuracy of photogrammetric protocols for evaluating cervical lordosis, compared with radiography. Articles published through April 2016 were selected. Two independent reviewers used critical appraisal tools (QUADAS and QAREL) to assess the quality of the selected studies. Two studies were included in the review and had high levels of reliability (intraclass correlation coefficient: 0.974-0.98). Only 1 study assessed the concordance between the methods, which was calculated using Pearson's correlation coefficient. To date, the accuracy of photogrammetry has not been investigated thoroughly. We encountered no study in the literature that investigated the accuracy of photogrammetry in diagnosing hyperlordosis of the cervical spine. However, both current studies report high levels of intra- and interrater reliability. To increase the level of evidence of photogrammetry in the evaluation of cervical lordosis, it is necessary to conduct further studies using larger samples to increase the external validity of the findings. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Castillo, Carlos; Pérez, Rafael
2017-04-01
The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, the application of methods based on 2D approaches can be the most cost-effective approach in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles in order to (1) contribute to a better understanding of the drivers and magnitude of the uncertainty of 2D gully erosion surveys and (2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. 
For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages: - Generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability) - Simulation of field measurements characterised by a survey intensity and the precision of the measurement method - Quantification of the volume error uncertainty as a function of the key factors In this communication we will present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey densities required to achieve a certain accuracy given the cross-sectional variability of a gully and the measurement method applied. References Casali, J., Loizu, J., Campo, M.A., De Santisteban, L.M., Alvarez-Mozos, J., 2006. Accuracy of methods for field assessment of rill and ephemeral gully erosion. Catena 67, 128-138. doi:10.1016/j.catena.2006.03.005
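The core loop of such a stochastic experiment can be sketched in a few lines; a deliberately simplified Python stand-in for the Matlab® code (all parameters hypothetical), assuming unit-spaced cross-sections, Gaussian area variability, and a survey that measures every step-th section:

```python
import random

def volume_error(areas, step):
    """Relative volume error when only every `step`-th cross-section is measured
    (unit spacing; estimated volume = sampled sum * step)."""
    true_vol = sum(areas)
    est_vol = sum(areas[::step]) * step
    return (est_vol - true_vol) / true_vol

def mean_abs_error(n_trials, n_sections, variability, step, seed=0):
    """Mean absolute relative volume error over synthetic gully reaches whose
    cross-sectional areas vary as a (clipped) Gaussian around 1.0."""
    rng = random.Random(seed)
    errs = []
    for _ in range(n_trials):
        areas = [max(0.1, rng.gauss(1.0, variability)) for _ in range(n_sections)]
        errs.append(abs(volume_error(areas, step)))
    return sum(errs) / len(errs)
```

Running the sketch with increasing variability or a coarser step reproduces the qualitative finding that survey density must grow with cross-sectional variability to keep the volume error below a target accuracy.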
Young, Helen T M; Carr, Norman J; Green, Bryan; Tilley, Charles; Bhargava, Vidhi; Pearce, Neil
2013-08-01
To compare the accuracy of eyeball estimates of the Ki-67 proliferation index (PI) with formal counting of 2000 cells as recommended by the Royal College of Pathologists. Sections from gastroenteropancreatic neuroendocrine tumours were immunostained for Ki-67. PI was calculated using three methods: (1) a manual tally count of 2000 cells from the area of highest nuclear labelling using a microscope eyepiece graticule; (2) eyeball estimates made by four pathologists within the same area of highest nuclear labelling; and (3) image analysis of microscope photographs taken from this area using the ImageJ 'cell counter' tool. ImageJ analysis was considered the gold standard for comparison. Levels of agreement between methods were evaluated using Bland-Altman plots. Agreement between the manual tally and ImageJ assessments was very high at low PIs. Agreement between eyeball assessments and ImageJ analysis varied between pathologists. Where data for low PIs alone were analysed, there was a moderate level of agreement between pathologists' estimates and the gold standard, but when all data were included, agreement was poor. Manual tally counts of 2000 cells exhibited similar levels of accuracy to the gold standard, especially at low PIs. Eyeball estimates were significantly less accurate than the gold standard. This suggests that tumour grades may be misclassified by eyeballing and that formal tally counting of positive cells produces more reliable results. Further studies are needed to identify accurate, clinically appropriate ways of calculating the proliferation index.
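The Bland-Altman comparison above rests on the mean difference between two methods and its 95% limits of agreement; a minimal sketch, assuming paired PI measurements from two methods (a hypothetical helper):

```python
def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement (mean difference +/- 1.96 SD)
    for paired measurements from two methods."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = (sum((d - mean) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean - 1.96 * sd, mean + 1.96 * sd
```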
How reliable is apparent age at death on cadavers?
Amadasi, Alberto; Merusi, Nicolò; Cattaneo, Cristina
2015-07-01
The assessment of age at death for identification purposes is a frequent and tough challenge for forensic pathologists and anthropologists. Too frequently, visual assessment of age is performed on well-preserved corpses, a method considered subjective and full of pitfalls, but whose degree of inadequacy had not previously been tested or proven. This study consisted of the visual estimation of the age of 100 cadavers performed by a total of 37 observers among those usually attending the dissection room. Cadavers were of Caucasian ethnicity, well preserved, and belonged to individuals who died of natural causes. All the evaluations were performed prior to autopsy. Observers assessed the age with ranges of 5 and 10 years, also indicating the body part they mainly observed for each case. Globally, the 5-year range had an accuracy of 35%, increasing to 69% with the 10-year range. The highest accuracy was in the 31-60 age category (74.7% with the 10-year range), and the skin seemed to be the most reliable age parameter (71.5% accuracy when observed), while the face was considered most frequently, in 92.4% of cases. A simple formula using the general "mean of averages" in the ranges given by the observers and the related standard deviations was then developed; the average values, with standard deviations of 4.62, lead to age estimates with ranges of some 20 years that seem fairly reliable and suitable, sometimes in alignment with classic anthropological methods, for the age estimation of well-preserved corpses.
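The combining rule described (a mean of the observers' range midpoints with a band of about two standard deviations) can be reconstructed only approximately from the abstract; a hedged sketch, assuming each observer reports a (low, high) range and k = 2 (both assumptions, not the authors' exact formula):

```python
def age_estimate(ranges, k=2.0):
    """Combine observers' age ranges: mean of range midpoints +/- k * SD.
    Hypothetical reconstruction of the 'mean of averages' rule."""
    mids = [(lo + hi) / 2 for lo, hi in ranges]
    n = len(mids)
    mean = sum(mids) / n
    sd = (sum((m - mean) ** 2 for m in mids) / (n - 1)) ** 0.5
    return mean - k * sd, mean + k * sd
```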
NASA Astrophysics Data System (ADS)
Kalayeh, Mahdi M.; Marin, Thibault; Pretorius, P. Hendrik; Wernick, Miles N.; Yang, Yongyi; Brankov, Jovan G.
2011-03-01
In this paper, we present a numerical observer for image quality assessment, aiming to predict human observer accuracy in a cardiac perfusion defect detection task for single-photon emission computed tomography (SPECT). In medical imaging, image quality should be assessed by evaluating the human observer accuracy for a specific diagnostic task. This approach is known as task-based assessment. Such evaluations are important for optimizing and testing imaging devices and algorithms. Unfortunately, human observer studies with expert readers are costly and time-demanding. To address this problem, numerical observers have been developed as a surrogate for human readers to predict human diagnostic performance. The channelized Hotelling observer (CHO) with internal noise model has been found to predict human performance well in some situations, but does not always generalize well to unseen data. We have argued in the past that finding a model to predict human observers could be viewed as a machine learning problem. Following this approach, in this paper we propose a channelized relevance vector machine (CRVM) to predict human diagnostic scores in a detection task. We have previously used channelized support vector machines (CSVM) to predict human scores and have shown that this approach offers better and more robust predictions than the classical CHO method. The comparison of the proposed CRVM with our previously introduced CSVM method suggests that CRVM can achieve similar generalization accuracy, while dramatically reducing model complexity and computation time.
Optimization of hole generation in Ti/CFRP stacks
NASA Astrophysics Data System (ADS)
Ivanov, Y. N.; Pashkov, A. E.; Chashhin, N. S.
2018-03-01
The article aims to describe methods for improving the surface quality and hole accuracy in Ti/CFRP stacks by optimizing cutting methods and drill geometry. The research is based on the fundamentals of machine building, theory of probability, mathematical statistics, and experiment planning and manufacturing process optimization theories. Statistical processing of experiment data was carried out by means of Statistica 6 and Microsoft Excel 2010. Surface geometry in Ti stacks was analyzed using a Taylor Hobson Form Talysurf i200 Series Profilometer, and in CFRP stacks - using a Bruker ContourGT-Kl Optical Microscope. Hole shapes and sizes were analyzed using a Carl Zeiss CONTURA G2 Measuring machine, temperatures in cutting zones were recorded with a FLIR SC7000 Series Infrared Camera. Models of multivariate analysis of variance were developed. They show effects of drilling modes on surface quality and accuracy of holes in Ti/CFRP stacks. The task of multicriteria drilling process optimization was solved. Optimal cutting technologies which improve performance were developed. Methods for assessing thermal tool and material expansion effects on the accuracy of holes in Ti/CFRP/Ti stacks were developed.
A ground truth based comparative study on clustering of gene expression data.
Zhu, Yitan; Wang, Zuyi; Miller, David J; Clarke, Robert; Xuan, Jianhua; Hoffman, Eric P; Wang, Yue
2008-05-01
Given the variety of available clustering methods for gene expression data analysis, it is important to develop an appropriate and rigorous validation scheme to assess the performance and limitations of the most widely used clustering algorithms. In this paper, we present a ground truth based comparative study on the functionality, accuracy, and stability of five data clustering methods, namely hierarchical clustering, K-means clustering, self-organizing maps, standard finite normal mixture fitting, and a caBIG toolkit (VIsual Statistical Data Analyzer--VISDA), tested on sample clustering of seven published microarray gene expression datasets and one synthetic dataset. We examined the performance of these algorithms in both data-sufficient and data-insufficient cases using quantitative performance measures, including cluster number detection accuracy and mean and standard deviation of partition accuracy. The experimental results showed that VISDA, an interactive coarse-to-fine maximum likelihood fitting algorithm, is a solid performer on most of the datasets, while K-means clustering and self-organizing maps optimized by the mean squared compactness criterion generally produce more stable solutions than the other methods.
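The partition-accuracy measure used in such a ground-truth based comparison requires matching cluster labels to the true classes before counting agreement; a minimal sketch, assuming equal numbers of clusters and classes and using brute-force label permutation (a hypothetical helper, feasible only for small cluster counts):

```python
from itertools import permutations

def partition_accuracy(true_labels, pred_labels):
    """Best agreement between a clustering and ground-truth classes over all
    one-to-one relabelings of the predicted clusters."""
    pred_classes = sorted(set(pred_labels))
    true_classes = sorted(set(true_labels))
    best = 0.0
    for perm in permutations(true_classes):
        mapping = dict(zip(pred_classes, perm))
        acc = sum(mapping[p] == t
                  for p, t in zip(pred_labels, true_labels)) / len(true_labels)
        best = max(best, acc)
    return best
```

Repeating this over multiple runs of a clustering algorithm yields the mean and standard deviation of partition accuracy used as performance measures above.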
Dynamic thresholds and a summary ROC curve: Assessing prognostic accuracy of longitudinal markers.
Saha-Chaudhuri, P; Heagerty, P J
2018-04-19
Cancer patients, chronic kidney disease patients, and subjects infected with HIV are routinely monitored over time using biomarkers that represent key health status indicators. Furthermore, biomarkers are frequently used to guide initiation of new treatments or to inform changes in intervention strategies. Since key medical decisions can be made on the basis of a longitudinal biomarker, it is important to evaluate the potential accuracy associated with longitudinal monitoring. To characterize the overall accuracy of a time-dependent marker, we introduce a summary ROC curve that displays the overall sensitivity associated with a time-dependent threshold that controls time-varying specificity. The proposed statistical methods are similar to concepts considered in disease screening, yet our methods are novel in choosing a potentially time-dependent threshold to define a positive test, and our methods allow time-specific control of the false-positive rate. The proposed summary ROC curve is a natural averaging of time-dependent incident/dynamic ROC curves and therefore provides a single summary of net error rates that can be achieved in the longitudinal setting. Copyright © 2018 John Wiley & Sons, Ltd.
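The proposed summary averages, over visit times, the sensitivity achieved when each time's threshold is chosen to control specificity; a simplified empirical sketch, assuming marker values grouped by visit time and a nearest-rank quantile threshold (hypothetical conventions, not the authors' estimator):

```python
def threshold_at_specificity(controls, fpr):
    """Nearest-rank threshold: roughly a fraction `fpr` of control values
    exceed the returned marker value."""
    xs = sorted(controls)
    k = int((1.0 - fpr) * len(xs)) - 1
    return xs[max(0, min(k, len(xs) - 1))]

def summary_sensitivity(cases_by_time, controls_by_time, fpr):
    """Average over visit times of the sensitivity achieved when each time's
    threshold controls the false-positive rate at `fpr` among controls."""
    sens = []
    for t, controls in controls_by_time.items():
        thr = threshold_at_specificity(controls, fpr)
        cases = cases_by_time[t]
        sens.append(sum(x > thr for x in cases) / len(cases))
    return sum(sens) / len(sens)
```

Sweeping `fpr` from 0 to 1 traces out one summary ROC curve for the longitudinal marker.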
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
NASA Astrophysics Data System (ADS)
Eno, Larry; Rabitz, Herschel
1981-08-01
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method, and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator ĥ0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (ĥ0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of ĥ0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and a rigid rotor. Results are generated for the He+H2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
Suicide Risk Assessment and Prevention: A Systematic Review Focusing on Veterans.
Nelson, Heidi D; Denneson, Lauren M; Low, Allison R; Bauer, Brian W; O'Neil, Maya; Kansagara, Devan; Teo, Alan R
2017-10-01
Suicide rates in veteran and military populations in the United States are high. This article reviews studies of the accuracy of methods to identify individuals at increased risk of suicide and the effectiveness and adverse effects of health care interventions relevant to U.S. veteran and military populations in reducing suicide and suicide attempts. Trials, observational studies, and systematic reviews relevant to U.S. veterans and military personnel were identified in searches of MEDLINE, PsycINFO, SocINDEX, and Cochrane databases (January 1, 2008, to September 11, 2015), on Web sites, and in reference lists. Investigators extracted and confirmed data and dual-rated risk of bias for included studies. Nineteen studies evaluated the accuracy of risk assessment methods, including models using retrospective electronic records data and clinician- or patient-rated instruments. Most methods demonstrated sensitivity ≥80% or area-under-the-curve values ≥0.70 in single studies, including two studies based on electronic records of veterans and military personnel, but specificity varied. Suicide rates were reduced in six of eight observational studies of population-level interventions. Only two of ten trials of individual-level psychotherapy reported statistically significant differences between treatment and usual care. Risk assessment methods have been shown to be sensitive predictors of suicide and suicide attempts, but the frequency of false positives limits their clinical utility. Research to refine these methods and examine clinical applications is needed. Studies of suicide prevention interventions are inconclusive; trials of population-level interventions and promising therapies are required to support their clinical use.
Thurber, Katherine A; Banks, Emily; Banwell, Cathy
2014-11-28
Despite the burgeoning research interest in weight status, in parallel with the increase in obesity worldwide, research describing methods to optimise the validity and accuracy of measured anthropometric data is lacking. Even when 'gold standard' methods are employed, no data are 100% accurate, yet the accuracy of anthropometric data is critical to produce robust and interpretable findings. To date, described methods for identifying data that are likely to be inaccurate seem to be ad hoc or lacking in clear justification. This paper reviews approaches to evaluating the accuracy of cross-sectional and longitudinal data on height and weight in children, focusing on recommendations from the World Health Organization (WHO). This review, together with expert consultation, informed the development of a method for processing and verifying longitudinal anthropometric measurements of children. This approach was then applied to data from the Australian Longitudinal Study of Indigenous Children. The review identified the need to assess the likely plausibility of data by (a) examining deviation from the WHO reference population by calculating age- and sex-adjusted height, weight and body mass index z-scores, and (b) examining changes in height and weight in individuals over time. The method developed identified extreme measurements and implausible intraindividual trajectories. It provides evidence-based criteria for the exclusion of data points that are most likely to be affected by measurement error. This paper presents a probabilistic approach to identifying anthropometric measurements that are likely to be implausible. This systematic, practical method is intended to be reproducible in other settings, including for validating large databases.
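The plausibility screen in step (a) amounts to computing reference-adjusted z-scores and flagging values outside fixed limits; a minimal sketch, with limit values that follow commonly cited WHO fixed-exclusion ranges but should be verified against the WHO growth-standards documentation before use:

```python
# Commonly cited WHO fixed-exclusion ranges for z-scores (an assumption here;
# verify against the WHO growth-standards documentation before use).
WHO_FLAGS = {"haz": (-6.0, 6.0), "waz": (-6.0, 5.0), "bmiz": (-5.0, 5.0)}

def zscore(value, ref_mean, ref_sd):
    """Age- and sex-adjusted z-score against a reference mean and SD."""
    return (value - ref_mean) / ref_sd

def is_plausible(index, z):
    """True if the z-score for the given index falls inside the flag range."""
    lo, hi = WHO_FLAGS[index]
    return lo <= z <= hi
```

Step (b), the longitudinal check, would additionally compare each child's successive z-scores for implausibly large changes between measurement occasions.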
A novel technique for fetal heart rate estimation from Doppler ultrasound signal
2011-01-01
Background The currently used fetal monitoring instrumentation that is based on the Doppler ultrasound technique provides the fetal heart rate (FHR) signal with limited accuracy. This is particularly noticeable as a significant decrease of a clinically important feature - the variability of the FHR signal. The aim of our work was to develop a novel efficient technique for processing of the ultrasound signal, which could estimate the cardiac cycle duration with accuracy comparable to direct electrocardiography. Methods We have proposed a new technique which provides the true beat-to-beat values of the FHR signal through multiple measurement of a given cardiac cycle in the ultrasound signal. The method consists of three steps: the dynamic adjustment of the autocorrelation window, the adaptive autocorrelation peak detection and the determination of beat-to-beat intervals. The estimated fetal heart rate values and calculated indices describing the variability of FHR were compared to the reference data obtained from the direct fetal electrocardiogram, as well as to another method for FHR estimation. Results The results revealed that our method increases the accuracy in comparison to currently used fetal monitoring instrumentation, and thus enables the calculation of reliable parameters describing the variability of FHR. Comparing these results with the other method for FHR estimation, we showed that in our approach a much lower number of measured cardiac cycles was rejected as invalid. Conclusions The proposed method for fetal heart rate determination on a beat-to-beat basis offers high accuracy of the heart interval measurement, enabling reliable quantitative assessment of the FHR variability while reducing the number of invalid cardiac cycle measurements. PMID:21999764
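The autocorrelation step at the heart of such methods can be sketched simply; a minimal illustration, assuming a sampled Doppler envelope and a physiologically constrained lag search window (hypothetical names, not the authors' adaptive implementation):

```python
def beat_lag(signal, min_lag, max_lag):
    """Return the lag (in samples) within [min_lag, max_lag] that maximizes
    the raw autocorrelation of the windowed Doppler envelope."""
    best_lag, best_r = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        r = sum(signal[i] * signal[i + lag] for i in range(len(signal) - lag))
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

def fhr_bpm(lag, fs):
    """Convert a beat-to-beat lag in samples to beats per minute
    at sampling rate fs (Hz)."""
    return 60.0 * fs / lag
```

The paper's method additionally adapts the window and measures each cardiac cycle several times; this sketch shows only the basic lag-picking idea.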
NASA Astrophysics Data System (ADS)
Arola, Antti; Kalliskota, S.; den Outer, P. N.; Edvardsen, K.; Hansen, G.; Koskela, T.; Martin, T. J.; Matthijsen, J.; Meerkoetter, R.; Peeters, P.; Seckmeyer, G.; Simon, P. C.; Slaper, H.; Taalas, P.; Verdebout, J.
2002-08-01
Four different satellite-UV mapping methods are assessed by comparing them against ground-based measurements. The study includes most of the variability found in geographical, meteorological and atmospheric conditions. Three of the methods did not show any significant systematic bias, except during snow cover. The mean difference (bias) in daily doses for the Rijksinstituut voor Volksgezondheid en Milieu (RIVM) and Joint Research Centre (JRC) methods was found to be less than 10% with a RMS difference of the order of 30%. The Deutsches Zentrum für Luft- und Raumfahrt (DLR) method was assessed for a few selected months, and the accuracy was similar to the RIVM and JRC methods. It was additionally used to demonstrate how spatial averaging of high-resolution cloud data improves the estimation of UV daily doses. For the Institut d'Aéronomie Spatiale de Belgique (IASB) method the differences were somewhat higher, because of their original cloud algorithm. The mean difference in daily doses for IASB was about 30% or more, depending on the station, while the RMS difference was about 60%. The cloud algorithm of IASB has been replaced recently, and as a result the accuracy of the IASB method has improved. Evidence is found that further research and development should focus on the improvement of the cloud parameterization. Estimation of daily exposures is likely to be improved if additional time-resolved cloudiness information is available for the satellite-based methods. It is also demonstrated that further development work should be carried out on the treatment of albedo of snow-covered surfaces.
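The bias and RMS-difference statistics quoted above can be computed from paired satellite and ground-based daily doses; a minimal sketch (a hypothetical helper, assuming relative differences are taken against the ground measurement):

```python
def relative_bias_and_rms(satellite, ground):
    """Mean relative difference (bias, %) and RMS relative difference (%)
    of paired satellite-derived and ground-measured daily UV doses."""
    rel = [(s - g) / g for s, g in zip(satellite, ground)]
    n = len(rel)
    bias = 100.0 * sum(rel) / n
    rms = 100.0 * (sum(r * r for r in rel) / n) ** 0.5
    return bias, rms
```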
Buczinski, Sébastien; L Ollivett, Terri; Dendukuri, Nandini
2015-05-01
There is currently no gold standard method for the diagnosis of bovine respiratory disease (BRD) complex in Holstein pre-weaned dairy calves. Systematic thoracic ultrasonography (TUS) has been used as a proxy for BRD, but cannot be directly used by producers. The Wisconsin calf respiratory scoring chart (CRSC) is a simpler alternative, but with unknown accuracy. Our objective was to estimate the accuracy of CRSC, while adjusting for the lack of a gold standard. Two cross sectional study populations with a high BRD prevalence (n=106 pre-weaned Holstein calves) and an average BRD prevalence (n=85 pre-weaned Holstein calves) from North America were studied. All calves were simultaneously assessed using CRSC (cutoff used ≥ 5) and TUS (cutoff used ≥ 1cm of lung consolidation). Bayesian latent class models allowing for conditional dependence were used with informative priors for BRD prevalence and TUS accuracy (sensitivity (Se) and specificity (Sp)) and non-informative priors for CRSC accuracies. Robustness of the model was tested by relaxing priors for prevalence or TUS accuracy. The Se of the CRSC was 62.4% (95% credible interval (CI): 47.9-75.8) and the Sp was 74.1% (64.9-82.8). The Se of TUS was 79.4% (66.4-90.9) and the Sp was 93.9% (88.0-97.6). The imperfect accuracy of CRSC and TUS should be taken into account when using those tools to assess BRD status. Copyright © 2015 Elsevier B.V. All rights reserved.
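The two-test latent class machinery rests on the probability of each joint test outcome given prevalence and the tests' Se/Sp; a minimal sketch of the conditional-independence form (the paper's models additionally allow conditional dependence, which this sketch omits):

```python
def joint_probs(prev, se1, sp1, se2, sp2):
    """Probability of each joint test outcome (t1, t2), with 1 = positive,
    under the standard two-test latent class model assuming the tests are
    conditionally independent given the latent disease status."""
    probs = {}
    for t1 in (0, 1):
        for t2 in (0, 1):
            p_dis = prev * (se1 if t1 else 1 - se1) * (se2 if t2 else 1 - se2)
            p_free = (1 - prev) * (1 - sp1 if t1 else sp1) * (1 - sp2 if t2 else sp2)
            probs[(t1, t2)] = p_dis + p_free
    return probs
```

A Bayesian fit places priors on prev, Se and Sp and evaluates the multinomial likelihood of the observed cross-classified counts under these cell probabilities.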
Huang, Chen-Yu; Keall, Paul; Rice, Adam; Colvill, Emma; Ng, Jin Aun; Booth, Jeremy T
2017-09-01
Inter-fraction and intra-fraction motion management methods are increasingly applied clinically and require the development of advanced motion platforms to facilitate testing and quality assurance program development. The aim of this study was to assess the performance of a 5 degrees-of-freedom (DoF) programmable motion platform, HexaMotion (ScandiDos, Uppsala, Sweden), against clinically observed tumor motion ranges, velocities and accelerations, and against the accuracy requirements for SABR prescribed in AAPM Task Group 142. Performance specifications for the motion platform were derived from literature on the motion characteristics of prostate and lung tumor targets required for real-time motion management. The performance of the programmable motion platform was evaluated against (1) maximum range, velocity and acceleration (5 DoF), (2) static position accuracy (5 DoF) and (3) dynamic position accuracy using patient-derived prostate and lung tumor motion traces (3 DoF). Translational motion accuracy was compared against electromagnetic transponder measurements. Rotation was benchmarked with a digital inclinometer. The static accuracy and reproducibility for translation and rotation were <0.1 mm and <0.1°, respectively. The accuracy of reproducing dynamic patient motion was <0.3 mm. The motion platform's range met the need to reproduce clinically relevant translation and rotation ranges, and its accuracy met the TG 142 requirements for SABR. The range, velocity and acceleration of the motion platform are sufficient to reproduce lung and prostate tumor motion for motion management. Programmable motion platforms are valuable tools in the investigation, quality assurance and commissioning of motion management systems in radiation oncology.
Wilson, Gary L.; Richards, Joseph M.
2006-01-01
Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracies of the collected data, bathymetric surface model, and contour map product were 0.67 foot, 0.91 foot, and 1.51 feet, respectively, at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced nearly as good results as a 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.
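The 95 percent confidence figures quoted above are consistent with the common NSSDA-style computation, in which vertical accuracy is 1.96 times the RMSE of the differences at independent quality-assurance check points. A minimal sketch, assuming that convention (the report does not spell out the exact formula) and using made-up depths:

```python
import math

def vertical_accuracy_95(surveyed, qa_check):
    """NSSDA-style vertical accuracy at the 95 percent confidence level:
    1.96 x RMSE of differences between surveyed values and independent
    quality-assurance check-point values."""
    diffs = [s - q for s, q in zip(surveyed, qa_check)]
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return 1.96 * rmse

# Hypothetical check-point depths in feet (not the Sugar Creek Lake data).
model = [12.1, 8.4, 15.0, 9.9]
check = [12.4, 8.1, 15.3, 9.6]
acc = vertical_accuracy_95(model, check)
```

With every difference at 0.3 foot, the reported accuracy would be 1.96 x 0.3 ≈ 0.59 foot.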
NASA Astrophysics Data System (ADS)
Tanaka, Shinobu; Hayakawa, Yuuto; Ogawa, Mitsuhiro; Yamakoshi, Ken-ichi
2010-08-01
We have been developing a new technique for measuring urine glucose concentration using near-infrared spectroscopy (NIRS) in conjunction with the partial least squares (PLS) method. In a previous study, we reported results of preliminary experiments assessing the feasibility of this method using an FT-IR spectrometer. In this study, to improve the practicability of the system, a flow-through cell with an optical path length of 10 mm was newly introduced. The accuracy of the system was verified in preliminary experiments using urine samples. The results clearly demonstrated that the present method is capable of predicting individual urine glucose levels with reasonable accuracy (minimum standard error of prediction: SEP = 22.3 mg/dl) and appears to be a useful means for long-term home health care. However, the mean SEP obtained for urine samples from ten subjects was not satisfactorily low (53.7 mg/dl). To improve the accuracy, (1) the mechanical stability of the optical system should be improved, (2) the method for normalizing the spectrum should be reconsidered, and (3) the number of subjects should be increased.
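The SEP values quoted (22.3 and 53.7 mg/dl) are standard errors of prediction for the calibration model. A common chemometrics definition is the root-mean-square deviation of predicted from reference concentrations, optionally bias-corrected; a minimal sketch under that assumption, with hypothetical readings:

```python
import math

def sep(predicted, reference, bias_corrected=False):
    """Standard error of prediction for a calibration model such as
    NIRS + PLS: root-mean-square deviation of predictions from reference
    values, optionally after removing the mean bias (n - 1 denominator)."""
    residuals = [p - r for p, r in zip(predicted, reference)]
    if bias_corrected:
        bias = sum(residuals) / len(residuals)
        residuals = [e - bias for e in residuals]
        return math.sqrt(sum(e * e for e in residuals) / (len(residuals) - 1))
    return math.sqrt(sum(e * e for e in residuals) / len(residuals))

# Hypothetical glucose readings in mg/dl (illustrative only).
pred = [98.0, 152.0, 47.0, 210.0]
ref = [100.0, 150.0, 50.0, 205.0]
```

Definitions of SEP differ between groups (some always bias-correct), so the exact variant used in the study is an assumption here.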
Orczyk, C; Rusinek, H; Rosenkrantz, A B; Mikheev, A; Deng, F-M; Melamed, J; Taneja, S S
2013-12-01
To assess a novel method of three-dimensional (3D) co-registration of prostate cancer digital histology and in-vivo multiparametric magnetic resonance imaging (mpMRI) image sets for clinical usefulness. A software platform was developed to achieve 3D co-registration. This software was prospectively applied to three patients who underwent radical prostatectomy. Data comprised in-vivo mpMRI [T2-weighted, dynamic contrast-enhanced weighted images (DCE); apparent diffusion coefficient (ADC)], ex-vivo T2-weighted imaging, the 3D-rebuilt pathological specimen, and digital histology. Internal landmarks from zonal anatomy served as reference points for assessing co-registration accuracy and precision. Applying a method of deformable transformation based on 22 internal landmarks, a 1.6 mm accuracy was reached in aligning T2-weighted images and the 3D-rebuilt pathological specimen, a 32% improvement over rigid transformation (p = 0.003). The 22 zonal anatomy landmarks were more accurately mapped using deformable transformation than rigid transformation (p = 0.0008). An automatic method based on mutual information enabled automation of the process and inclusion of the perfusion and diffusion MRI images. Evaluation of co-registration accuracy using the volume overlap index (Dice index) met clinically relevant requirements, ranging from 0.81 to 0.96 for the sequences tested. Ex-vivo images of the specimen did not significantly improve co-registration accuracy. This preliminary analysis suggests that deformable transformation based on zonal anatomy landmarks is accurate for the co-registration of mpMRI and histology. Including diffusion and perfusion sequences in the same 3D space as histology provides essential further clinical information. The ability to localize cancer in 3D space may improve targeting for image-guided biopsy, focal therapy, and disease quantification in surveillance protocols. Copyright © 2013 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
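The Dice volume-overlap index used to evaluate co-registration is straightforward to compute from two binary masks. A minimal sketch with toy voxel sets standing in for a co-registered MRI/histology pair (not the study's data):

```python
def dice_index(a, b):
    """Dice volume-overlap index between two binary masks given as sets
    of voxel coordinates: 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two 10x10x2 slabs offset by one voxel in z, so exactly half overlaps.
mri = {(x, y, z) for x in range(10) for y in range(10) for z in range(2)}
hist = {(x, y, z) for x in range(10) for y in range(10) for z in range(1, 3)}
```

A Dice index of 1.0 means perfect overlap; the 0.81-0.96 range reported above indicates strong but imperfect agreement between the aligned volumes.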
Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A
2018-02-01
To evaluate a novel methodology using industrial scanners as a reference, and to assess the in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions. Further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference scans with ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken with 3M Impregum Penta Soft, and the poured models were digitized with the laboratory scanner 3shape D1000 (D1000). Best-fit alignment of reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR was analyzed using ATOS as the reference. Precision of IOS was evaluated through intra-system comparison. Precision of the ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference in accuracy between two scanner groups, 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference from IOS. However, deviations of IOS and IMPR were of similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Michaleff, Zoe A.; Maher, Chris G.; Verhagen, Arianne P.; Rebbeck, Trudy; Lin, Chung-Wei Christine
2012-01-01
Background: There is uncertainty about the optimal approach to screen for clinically important cervical spine (C-spine) injury following blunt trauma. We conducted a systematic review to investigate the diagnostic accuracy of the Canadian C-spine rule and the National Emergency X-Radiography Utilization Study (NEXUS) criteria, 2 rules that are available to assist emergency physicians to assess the need for cervical spine imaging. Methods: We identified studies by an electronic search of CINAHL, Embase and MEDLINE. We included articles that reported on a cohort of patients who experienced blunt trauma and for whom clinically important cervical spine injury detectable by diagnostic imaging was the differential diagnosis; evaluated the diagnostic accuracy of the Canadian C-spine rule or NEXUS or both; and used an adequate reference standard. We assessed the methodologic quality using the Quality Assessment of Diagnostic Accuracy Studies criteria. We used the extracted data to calculate sensitivity, specificity, likelihood ratios and post-test probabilities. Results: We included 15 studies of modest methodologic quality. For the Canadian C-spine rule, sensitivity ranged from 0.90 to 1.00 and specificity ranged from 0.01 to 0.77. For NEXUS, sensitivity ranged from 0.83 to 1.00 and specificity ranged from 0.02 to 0.46. One study directly compared the accuracy of these 2 rules using the same cohort and found that the Canadian C-spine rule had better accuracy. For both rules, a negative test was more informative for reducing the probability of a clinically important cervical spine injury. Interpretation: Based on studies with modest methodologic quality and only one direct comparison, we found that the Canadian C-spine rule appears to have better diagnostic accuracy than the NEXUS criteria. Future studies need to follow rigorous methodologic procedures to ensure that the findings are as free of bias as possible. PMID:23048086
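The likelihood ratios and post-test probabilities computed in the review follow directly from sensitivity and specificity via the odds form of Bayes' theorem. A minimal sketch (the Se/Sp and pre-test probability below are illustrative values within the reported ranges, not pooled estimates from the review):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from Se and Sp."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pre_test_prob, lr):
    """Update a pre-test probability with a likelihood ratio via odds:
    post-odds = pre-odds x LR, then convert back to a probability."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Illustrative figures: Se 0.99, Sp 0.45, and a 2% pre-test probability
# of clinically important C-spine injury.
lr_pos, lr_neg = likelihood_ratios(0.99, 0.45)
p_neg = post_test_probability(0.02, lr_neg)
```

This makes the review's interpretive point concrete: with near-perfect sensitivity and low specificity, a negative rule result drives the injury probability very close to zero, while a positive result raises it only modestly.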
Shen, Yiru; Salley, James; Muth, Eric; Hoover, Adam
2017-01-01
This paper describes a study to test the accuracy of a method that tracks wrist motion during eating to detect and count bites. The purpose was to assess its accuracy across demographic (age, gender, ethnicity) and bite (utensil, container, hand used, food type) variables. Data were collected in a cafeteria under normal eating conditions. A total of 271 participants ate a single meal while wearing a watch-like device to track their wrist motion. Video of each participant was recorded simultaneously and subsequently reviewed to determine the ground-truth times of bites. Bite times were operationally defined as the moment when food or beverage was placed into the mouth. Food and beverage choices were not scripted or restricted. Participants were seated in groups of 2–4 and were encouraged to eat naturally. A total of 24,088 bites of 374 different food and beverage items were consumed. Overall, the method for automatically detecting bites had a sensitivity of 75% with a positive predictive value of 89%. A range of 62–86% sensitivity was found across demographic variables, with slower eating rates trending towards higher sensitivity. Variations in sensitivity due to food type showed a modest correlation with the total wrist motion during the bite, possibly due to an increase in head-towards-plate motion and a decrease in hand-towards-mouth motion for some food types. Overall, the findings provide the largest body of evidence to date that the method produces a reliable automated measure of intake during unrestricted eating. PMID:28113994
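Scoring a bite detector against video ground truth requires pairing detected events with true events before sensitivity and positive predictive value can be computed. A sketch of one simple greedy pairing scheme (the tolerance window and matching rule are assumptions for illustration, not the study's evaluation protocol):

```python
def match_events(detected, truth, tol=2.0):
    """Greedily pair detected bite times with ground-truth bite times that
    fall within `tol` seconds; each truth bite may be matched at most once.
    Returns (sensitivity, positive predictive value)."""
    unmatched = sorted(truth)
    tp = 0
    for t in sorted(detected):
        for g in unmatched:
            if abs(t - g) <= tol:
                unmatched.remove(g)
                tp += 1
                break
    fp = len(detected) - tp   # detections with no true bite nearby
    fn = len(truth) - tp      # true bites the detector missed
    sensitivity = tp / (tp + fn) if truth else 0.0
    ppv = tp / (tp + fp) if detected else 0.0
    return sensitivity, ppv

# Hypothetical timestamps in seconds (not from the study's data set).
truth = [10.0, 25.0, 40.0, 55.0]
detected = [10.5, 26.5, 70.0]
se, ppv = match_events(detected, truth)
```

Here two of four true bites are matched and one detection is spurious, giving sensitivity 0.50 and PPV 0.67.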
Repeatability and Accuracy of Exoplanet Eclipse Depths Measured with Post-cryogenic Spitzer
NASA Astrophysics Data System (ADS)
Ingalls, James G.; Krick, J. E.; Carey, S. J.; Stauffer, John R.; Lowrance, Patrick J.; Grillmair, Carl J.; Buzasi, Derek; Deming, Drake; Diamond-Lowe, Hannah; Evans, Thomas M.; Morello, G.; Stevenson, Kevin B.; Wong, Ian; Capak, Peter; Glaccum, William; Laine, Seppo; Surace, Jason; Storrie-Lombardi, Lisa
2016-08-01
We examine the repeatability, reliability, and accuracy of differential exoplanet eclipse depth measurements made using the InfraRed Array Camera (IRAC) on the Spitzer Space Telescope during the post-cryogenic mission. We have re-analyzed an existing 4.5 μm data set, consisting of 10 observations of the XO-3b system during secondary eclipse, using seven different techniques for removing correlated noise. We find that, on average, for a given technique, the eclipse depth estimate is repeatable from epoch to epoch to within 156 parts per million (ppm). Most techniques derive eclipse depths that do not vary by more than 3 times the photon noise limit. All methods but one accurately assess their own errors: for these methods, the individual measurement uncertainties are comparable to the scatter in eclipse depths over the 10-epoch sample. To assess the accuracy of the techniques, as well as to clarify the difference between instrumental and other sources of measurement error, we have also analyzed a simulated data set of 10 visits to XO-3b, for which the eclipse depth is known. We find that three of the methods (BLISS mapping, Pixel Level Decorrelation, and Independent Component Analysis) obtain results that are within three times the photon limit of the true eclipse depth. When averaged over the 10-epoch ensemble, 5 out of 7 techniques come within 60 ppm of the true value. Spitzer exoplanet data, if obtained following current best practices and reduced using methods such as those described here, can yield repeatable and accurate single eclipse depths, with close to photon-limited results.
A Low-Cost iPhone-Assisted Augmented Reality Solution for the Localization of Intracranial Lesions.
Hou, YuanZheng; Ma, LiChao; Zhu, RuYuan; Chen, XiaoLei; Zhang, Jun
2016-01-01
Precise location of intracranial lesions before surgery is important, but occasionally difficult. Modern navigation systems are very helpful, but expensive. A low-cost solution that could locate brain lesions and their surface projections in augmented reality would be beneficial. We used an iPhone to partially achieve this goal, and evaluated its accuracy and feasibility in a clinical neurosurgery setting. We located brain lesions in 35 patients, and using an iPhone, we depicted each lesion's surface projection onto the skin of the head. To assess the accuracy of this method, in 15 patients we pasted computed tomography (CT) markers on the skin surrounding the depicted lesion boundaries. CT scans were then performed with or without contrast enhancement. The deviations (D) between the CT markers and the actual lesion boundaries were measured. We found that 97.7% of the markers displayed a high accuracy level (D ≤ 5 mm). In the remaining 20 patients, we compared our iPhone-based method with a frameless neuronavigation system. Four check points were chosen on the skin surrounding the depicted lesion boundaries to assess the deviations between the two methods. The integrated offset was calculated from the deviations at the four check points. We found that for the supratentorial lesions, the median offset between the two methods was 2.90 mm and the maximum offset was 4.2 mm. This low-cost, image-based, iPhone-assisted, augmented reality solution is technically feasible, and helpful for the localization of some intracranial lesions, especially shallow supratentorial lesions of moderate size.
Incorporating structure from motion uncertainty into image-based pose estimation
NASA Astrophysics Data System (ADS)
Ludington, Ben T.; Brown, Andrew P.; Sheffler, Michael J.; Taylor, Clark N.; Berardi, Stephen
2015-05-01
A method for generating and utilizing structure from motion (SfM) uncertainty estimates within image-based pose estimation is presented. The method is applied to a class of problems in which SfM algorithms are utilized to form a geo-registered reference model of a particular ground area using imagery gathered during flight by a small unmanned aircraft. The model is then used to form camera pose estimates in near real-time from imagery gathered later. The resulting pose estimates can be utilized by any of the other onboard systems (e.g. as a replacement for GPS data) or downstream exploitation systems, e.g., image-based object trackers. However, many of the consumers of pose estimates require an assessment of the pose accuracy. The method for generating the accuracy assessment is presented. First, the uncertainty in the reference model is estimated. Bundle Adjustment (BA) is utilized for model generation. While the high-level approach for generating a covariance matrix of the BA parameters is straightforward, typical computing hardware is not able to support the required operations due to the scale of the optimization problem within BA. Therefore, a series of sparse matrix operations is utilized to form an exact covariance matrix for only the parameters that are needed at a particular moment. Once the uncertainty in the model has been determined, it is used to augment Perspective-n-Point pose estimation algorithms to improve the pose accuracy and to estimate the resulting pose uncertainty. The implementation of the described method is presented along with results including results gathered from flight test data.
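Once a covariance of the bundle-adjusted model parameters is available, a first-order propagation through the pose solver's Jacobian gives the pose covariance, Sigma_pose ≈ J Sigma_model J^T. A minimal pure-Python sketch of that propagation with invented numbers (the Jacobian and covariance values are toy data, not the paper's):

```python
def mat_mul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def transpose(a):
    return [list(row) for row in zip(*a)]

def propagate_covariance(jacobian, model_cov):
    """First-order propagation of reference-model uncertainty into pose
    uncertainty: Sigma_pose ≈ J Sigma_model J^T."""
    return mat_mul(mat_mul(jacobian, model_cov), transpose(jacobian))

# Toy 2x3 Jacobian of pose parameters w.r.t. model coordinates, and a
# diagonal model covariance (illustrative numbers only).
J = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 1.0]]
S = [[0.04, 0.0, 0.0],
     [0.0, 0.04, 0.0],
     [0.0, 0.0, 0.09]]
pose_cov = propagate_covariance(J, S)
```

The resulting matrix is symmetric by construction, and its diagonal gives the variances a downstream consumer (e.g. a tracker) would use to weight the pose estimate.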
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardcastle, Nicholas, E-mail: nick.hardcastle@gmail.com; Centre for Medical Radiation Physics, University of Wollongong, Wollongong; Hofman, Michael S.
2015-09-01
Purpose: Measuring changes in lung perfusion resulting from radiation therapy dose requires registration of the functional imaging to the radiation therapy treatment planning scan. This study investigates registration accuracy and utility for positron emission tomography (PET)/computed tomography (CT) perfusion imaging in radiation therapy for non–small cell lung cancer. Methods: 68Ga 4-dimensional PET/CT ventilation-perfusion imaging was performed before, during, and after radiation therapy for 5 patients. Rigid registration and deformable image registration (DIR) using B-splines and Demons algorithms were performed with the CT data to obtain a deformation map between the functional images and planning CT. Contour propagation accuracy and correspondence of anatomic features were used to assess registration accuracy. The Wilcoxon signed-rank test was used to determine statistical significance. Changes in lung perfusion resulting from radiation therapy dose were calculated for each registration method for each patient and averaged over all patients. Results: With B-splines/Demons DIR, median distance to agreement between lung contours reduced modestly by 0.9/1.1 mm, 1.3/1.6 mm, and 1.3/1.6 mm for pretreatment, midtreatment, and posttreatment (P<.01 for all), and median Dice score between lung contours improved by 0.04/0.04, 0.05/0.05, and 0.05/0.05 for pretreatment, midtreatment, and posttreatment (P<.001 for all). Distance between anatomic features reduced with DIR by a median of 2.5 mm and 2.8 mm for the pretreatment and midtreatment time points, respectively (P=.001), and by 1.4 mm for posttreatment (P>.2). Poorer posttreatment results were likely caused by posttreatment pneumonitis and tumor regression. Up to 80% standardized uptake value loss in perfusion scans was observed. There was limited change in the loss of lung perfusion between registration methods; however, Demons resulted in larger interpatient variation compared with rigid and B-splines registration.
Conclusions: DIR accuracy in the data sets studied was variable depending on anatomic changes resulting from radiation therapy; caution must be exercised when using DIR in regions of low contrast or radiation pneumonitis. Lung perfusion reduces with increasing radiation therapy dose; however, DIR did not translate into significant changes in dose–response assessment.
Pascoal, Lívia Maia; Lopes, Marcos Venícios de Oliveira; Chaves, Daniel Bruno Resende; Beltrão, Beatriz Amorim; da Silva, Viviane Martins; Monteiro, Flávia Paula Magalhães
2015-01-01
OBJECTIVE: to analyze the accuracy of the defining characteristics of the Impaired gas exchange nursing diagnosis in children with acute respiratory infection. METHOD: open prospective cohort study conducted with 136 children monitored for a consecutive period of at least six days and not more than ten days. An instrument based on the defining characteristics of the Impaired gas exchange diagnosis and on literature addressing pulmonary assessment was used to collect data. Mean accuracy measures were computed for all the defining characteristics under study. RESULTS: the Impaired gas exchange diagnosis was present in 42.6% of the children in the first assessment. Hypoxemia was the characteristic that presented the best measures of accuracy. Abnormal breathing presented high sensitivity, while restlessness, cyanosis, and abnormal skin color showed high specificity. All the characteristics presented negative predictive values of 70%, and cyanosis stood out for its high positive predictive value. CONCLUSION: hypoxemia was the defining characteristic with the best predictive ability to determine Impaired gas exchange. Studies of this nature enable nurses to minimize variability in the clinical situations presented by the patient and to identify more precisely the nursing diagnosis that represents the patient's true clinical condition. PMID:26155010
Remans, Tony; Keunen, Els; Bex, Geert Jan; Smeets, Karen; Vangronsveld, Jaco; Cuypers, Ann
2014-10-01
Reverse transcription-quantitative PCR (RT-qPCR) has been widely adopted to measure differences in mRNA levels; however, biological and technical variation strongly affects the accuracy of the reported differences. RT-qPCR specialists have warned that, unless researchers minimize this variability, they may report inaccurate differences and draw incorrect biological conclusions. The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines describe procedures for conducting and reporting RT-qPCR experiments. The MIQE guidelines enable others to judge the reliability of reported results; however, a recent literature survey found low adherence to these guidelines. Additionally, even experiments that use appropriate procedures remain subject to individual variation that statistical methods cannot correct. For example, since ideal reference genes do not exist, the widely used method of normalizing RT-qPCR data to reference genes generates background noise that affects the accuracy of measured changes in mRNA levels. However, current RT-qPCR data reporting styles ignore this source of variation. In this commentary, we direct researchers to appropriate procedures, outline a method to present the remaining uncertainty in data accuracy, and propose an intuitive way to select reference genes to minimize uncertainty. Reporting the uncertainty in data accuracy also serves for quality assessment, enabling researchers and peer reviewers to confidently evaluate the reliability of gene expression data. © 2014 American Society of Plant Biologists. All rights reserved.
Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.
2014-01-01
Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., proportion of phenotypic variability, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289
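The failure of additive (parametric) models on purely epistatic architectures can be seen in a toy two-locus example: with genotypes coded -1/+1 and phenotype y = g1 * g2, each locus is uncorrelated with y, so a least-squares additive model predicts the mean (zero) everywhere, while a kernel method such as the Nadaraya-Watson estimator recovers the pattern. A sketch with invented data, not the paper's simulation design:

```python
import math

def nadaraya_watson(x_train, y_train, x_new, bandwidth=0.5):
    """Gaussian-kernel Nadaraya-Watson estimator: a locally weighted
    average of the training phenotypes."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, x_new))
                        / (2 * bandwidth ** 2)) for x in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

# Two-locus genotypes coded -1/+1 and a purely epistatic phenotype
# y = g1 * g2 (an XOR-like toy architecture).
X = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
y = [g1 * g2 for g1, g2 in X]

# An additive (linear) model fitted by least squares has all-zero
# coefficients here, because each locus is uncorrelated with y:
#   cov(g1, y) = cov(g2, y) = 0  ->  additive prediction = mean(y) = 0.
additive_pred = [0.0 for _ in X]

nw_pred = [nadaraya_watson(X, y, x) for x in X]
```

The kernel predictions land within about 0.2% of the true phenotypes, while the additive model is maximally wrong at every genotype, mirroring the paper's finding that parametric methods cannot predict purely epistatic architectures.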
Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.
2002-01-01
Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the classification accuracy lost to the residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating the derived proportion images to twice their original pixel size. © 2002 Elsevier Science Inc. All rights reserved.
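The brightening of dark objects and darkening of bright objects is the expected effect of convolving a scene with a normalized PSF. A one-dimensional sketch (the 3-tap PSF weights are invented for illustration; the MODIS PSF is 2-D and band-specific):

```python
def apply_psf(signal, psf):
    """Convolve a 1-D scene with a normalized PSF (same-size output,
    edges handled by clamping indices to the scene boundary)."""
    half = len(psf) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(psf):
            j = min(max(i + k - half, 0), len(signal) - 1)
            acc += w * signal[j]
        out.append(acc)
    return out

# A dark field (reflectance 0.05) containing one bright pixel (0.60),
# blurred by a PSF that keeps only 60% of the energy in the center pixel.
scene = [0.05] * 5 + [0.60] + [0.05] * 5
psf = [0.2, 0.6, 0.2]
observed = apply_psf(scene, psf)
```

The bright pixel loses signal to its neighbors and the adjacent dark pixels gain it, while total energy is conserved, which is exactly the adjacency contamination the deconvolution method aims to invert.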
Registration of 3D spectral OCT volumes using 3D SIFT feature point matching
NASA Astrophysics Data System (ADS)
Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan
2009-02-01
The recent introduction of next-generation spectral OCT scanners has enabled routine acquisition of high-resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework to 3D. The SIFT feature extractor locates minima and maxima in the difference-of-Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096-element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head (ONH)- and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0 ± 3.3 voxels was observed. The accuracy was assessed as the average voxel distance error over N=1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
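The reported registration accuracy of 2.0 ± 3.3 voxels is an average voxel distance error over matched feature locations, i.e. a mean Euclidean distance between corresponding points after registration. A minimal sketch with hypothetical matched keypoints:

```python
import math

def mean_voxel_distance_error(matched_pairs):
    """Average Euclidean distance, in voxels, between corresponding
    point locations after registration."""
    dists = [math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
             for p, q in matched_pairs]
    return sum(dists) / len(dists)

# Hypothetical matched SIFT keypoint locations (voxel coordinates),
# not the study's 1572 matched locations.
pairs = [((10, 10, 5), (12, 10, 5)),
         ((40, 55, 8), (40, 52, 12))]
err = mean_voxel_distance_error(pairs)
```

In the study this average is taken over N=1572 matched locations, with the quoted ± value being the standard deviation of the per-point distances.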
NASA Astrophysics Data System (ADS)
Tang, Xiaojing
Fast and accurate monitoring of tropical forest disturbance is essential for understanding current patterns of deforestation as well as helping eliminate illegal logging. This dissertation explores the use of data from different satellites for near real-time monitoring of forest disturbance in tropical forests, including: development of new monitoring methods; development of new assessment methods; and assessment of the performance and operational readiness of existing methods. Current methods for accuracy assessment of remote sensing products do not address the priority of near real-time monitoring of detecting disturbance events as early as possible. I introduce a new assessment framework for near real-time products that focuses on the timing and the minimum detectable size of disturbance events. The new framework reveals the relationship between change detection accuracy and the time needed to identify events. In regions that are frequently cloudy, near real-time monitoring using data from a single sensor is difficult. This study extends the work by Xin et al. (2013) and develops a new time series method (Fusion2) based on fusion of Landsat and MODIS (Moderate Resolution Imaging Spectroradiometer) data. Results of three test sites in the Amazon Basin show that Fusion2 can detect 44.4% of the forest disturbance within 13 clear observations (82 days) after the initial disturbance. The smallest event detected by Fusion2 is 6.5 ha. Also, Fusion2 detects disturbance faster and has less commission error than more conventional methods. In a comparison of coarse resolution sensors, MODIS Terra and Aqua combined provides faster and more accurate detection of disturbance events than VIIRS (Visible Infrared Imaging Radiometer Suite) and MODIS single sensor data. The performance of near real-time monitoring using VIIRS is slightly worse than MODIS Terra but significantly better than MODIS Aqua. 
New monitoring methods developed in this dissertation provide forest protection organizations the capacity to monitor illegal logging events promptly. In the future, combining two Landsat and two Sentinel-2 satellites will provide global coverage at 30 m resolution every 4 days, and routine monitoring may be possible at high resolution. The methods and assessment framework developed in this dissertation are adaptable to newly available datasets.
NASA Astrophysics Data System (ADS)
Castillo-López, Elena; Dominguez, Jose Antonio; Pereda, Raúl; de Luis, Julio Manuel; Pérez, Ruben; Piña, Felipe
2017-10-01
Accurate determination of water depth is indispensable in multiple aspects of civil engineering (dock construction, dikes, submarine outfalls, trench control, etc.). Accuracy in bathymetric information is highly dependent on the atmospheric correction applied to the imagery, so the type of atmospheric correction most appropriate for depth estimation must be determined. The reduction of effects such as glint and cross-track illumination in homogeneous shallow-water areas improves the results of the depth estimations. The aim of this work is to assess the best atmospheric correction method for the estimation of depth in shallow waters, considering that reflectance values cannot be greater than 1.5% because otherwise the bottom would not be seen. This paper addresses the use of hyperspectral imagery for quantitative bathymetric mapping and explores one of the most common problems when attempting to extract depth information under conditions of variable water types and bottom reflectances. The current work assesses the accuracy of several classical bathymetric algorithms (Polcyn-Lyzenga, Philpot, Benny-Dawson, Hamilton, principal component analysis) when four different atmospheric correction methods are applied and water depth is derived. No single atmospheric correction is valid for all types of coastal water, but in heterogeneous shallow water the 6S atmospheric correction model offers good results.
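The Polcyn-Lyzenga algorithm named above reduces to a log-linear regression of known calibration depths on band reflectances. A minimal sketch, assuming the deep-water reflectance has already been subtracted from each band and using synthetic two-band data (the coefficients and reflectance ranges are made up for illustration):

```python
import numpy as np

def lyzenga_fit(bands, depths):
    """Ordinary least-squares fit of the Polcyn-Lyzenga log-linear model
    z = a0 + sum_i a_i * ln(R_i), with deep-water reflectance assumed
    already removed. bands: (n_pixels, n_bands); depths: known depths."""
    X = np.column_stack([np.ones(len(depths)), np.log(bands)])
    coef, *_ = np.linalg.lstsq(X, depths, rcond=None)
    return coef

def lyzenga_predict(bands, coef):
    """Apply fitted coefficients to new reflectance pixels."""
    X = np.column_stack([np.ones(bands.shape[0]), np.log(bands)])
    return X @ coef

# Synthetic check: data generated exactly from the model is recovered
rng = np.random.default_rng(0)
bands = rng.uniform(0.01, 0.10, size=(50, 2))
true_coef = np.array([1.0, -2.0, -1.5])
depths = lyzenga_predict(bands, true_coef)
fit = lyzenga_fit(bands, depths)
```

On real imagery the residual between predicted and surveyed depths under each of the four atmospheric corrections is exactly the accuracy figure the paper compares.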
Long, Stephen E; Catron, Brittany L; Boggs, Ashley Sp; Tai, Susan Sc; Wise, Stephen A
2016-09-01
The use of urinary iodine as an indicator of iodine status relies in part on the accuracy of the analytical measurement of iodine in urine. Likewise, the use of dietary iodine intake as an indicator of iodine status relies in part on the accuracy of the analytical measurement of iodine in dietary sources, including foods and dietary supplements. Similarly, the use of specific serum biomarkers of thyroid function to screen for both iodine deficiency and iodine excess relies in part on the accuracy of the analytical measurement of those biomarkers. The National Institute of Standards and Technology has been working with the NIH Office of Dietary Supplements for several years to develop higher-order reference measurement procedures and Standard Reference Materials to support the validation of new routine analytical methods for iodine in foods and dietary supplements, for urinary iodine, and for several serum biomarkers of thyroid function including thyroid-stimulating hormone, thyroglobulin, total and free thyroxine, and total and free triiodothyronine. These materials and methods have the potential to improve the assessment of iodine status and thyroid function in observational studies and clinical trials, thereby promoting public health efforts related to iodine nutrition. © 2016 American Society for Nutrition.
NASA Astrophysics Data System (ADS)
Gorash, Yevgen; Comlekci, Tugrul; MacKenzie, Donald
2017-05-01
This study investigates the effects of fatigue material data and finite element types on the accuracy of residual life assessments under high-cycle fatigue. The bending of cross-beam connections is simulated in ANSYS Workbench for different combinations of structural member shapes made of a typical structural steel. The stress analysis of weldments with specific dimensions and loading is implemented using solid and shell elements. The stress results are transferred to the fatigue code nCode DesignLife for residual life prediction. Accounting for mean stress effects using the FKM approach, and for bending and thickness according to BS 7608:2014, fatigue life is predicted using the Volvo method and the stress integration rules from the ASME Boiler & Pressure Vessel Code. Three different pairs of S-N curves are considered in this work, including generic seam weld curves and curves for the equivalent Japanese steel JIS G3106-SM490B. The S-N curve parameters for the steel are identified from the experimental data available in NIMS fatigue data sheets using the least-squares method with thickness and mean stress corrections. The numerical predictions are compared to the available experimental results, indicating the most suitable fatigue data input, range of applicability, and FE-model formulation to achieve the best accuracy.
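The S-N parameter identification described above is a least-squares fit in log-log space. A minimal sketch of that step for a Basquin-type relation N = C * S^(-m); the stress values and constants below are invented for illustration, and a real fit to the NIMS data sheets would additionally apply the thickness and mean-stress corrections:

```python
import numpy as np

def fit_sn_curve(stress_ranges, cycles):
    """Least-squares fit of N = C * S**(-m) in log-log space:
    log10(N) = log10(C) - m * log10(S). Returns (C, m)."""
    logS = np.log10(stress_ranges)
    logN = np.log10(cycles)
    slope, intercept = np.polyfit(logS, logN, 1)
    return 10.0 ** intercept, -slope

S = np.array([100.0, 200.0, 400.0])   # stress range, MPa (illustrative)
N = 1.0e12 / S**3                     # synthetic data with m = 3, C = 1e12
C, m = fit_sn_curve(S, N)
print(round(m, 3))   # -> 3.0
```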
DESIGN AND ANALYSIS FOR THEMATIC MAP ACCURACY ASSESSMENT: FUNDAMENTAL PRINCIPLES
Before being used in scientific investigations and policy decisions, thematic maps constructed from remotely sensed data should be subjected to a statistically rigorous accuracy assessment. The three basic components of an accuracy assessment are: 1) the sampling design used to s...
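The analysis side of such an assessment reduces to arithmetic on the error (confusion) matrix. A minimal sketch of the standard accuracy measures, omitting the sampling-design weighting the abstract alludes to:

```python
import numpy as np

def accuracy_metrics(error_matrix):
    """Overall, user's, and producer's accuracy from a thematic-map
    error matrix (rows = map classes, columns = reference classes).
    User's accuracy is the complement of commission error; producer's
    accuracy is the complement of omission error."""
    m = np.asarray(error_matrix, dtype=float)
    overall = np.trace(m) / m.sum()
    users = np.diag(m) / m.sum(axis=1)
    producers = np.diag(m) / m.sum(axis=0)
    return overall, users, producers

# Two-class example: 100 reference samples, 85 classified correctly
em = [[45, 5], [10, 40]]
overall, users, producers = accuracy_metrics(em)
print(round(overall, 2))   # -> 0.85
```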
ROLE CONFUSION AND SELF ASSESSMENT IN INTERPROFESSIONAL TRAUMA TEAMS
Steinemann, Susan; Kurosawa, Gene; Wei, Alexander; Ho, Nina; Lim, Eunjung; Suares, Gregory; Bhatt, Ajay; Berg, Benjamin
2015-01-01
Background Trauma care requires coordinating an interprofessional team, with formative feedback on teamwork skills. We hypothesized that nurses and surgeons have different perceptions regarding roles during resuscitation; that nurses’ teamwork self-assessment differs from experts’; and that video debriefing might improve the accuracy of self-assessment. Methods Trauma nurses and surgeons were surveyed regarding resuscitation responsibilities. Subsequently, nurses joined interprofessional teams in simulated trauma resuscitations. Following each resuscitation, nurses and teamwork experts independently scored teamwork (T-NOTECHS). After video debriefing, nurses repeated the T-NOTECHS self-assessment. Results For 71% of resuscitation tasks, nurses and surgeons each assigned significantly more responsibility to their own profession. Nurses’ overall T-NOTECHS ratings were slightly higher than experts’. This was evident in all T-NOTECHS subdomains except “leadership,” but despite statistical significance the difference was small and clinically irrelevant. Video debriefing did not improve the accuracy of self-assessment. Conclusions Nurses and physicians demonstrated discordant perceptions of responsibilities. Nurses’ self-assessment of teamwork was statistically, but not clinically, significantly higher than experts’ in all domains except physician leadership. PMID:26801092
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Yidong; Liu, Xiaodong; Luo, Hong
2015-06-01
Here, a space and time third-order discontinuous Galerkin method based on a Hermite weighted essentially non-oscillatory reconstruction is presented for the unsteady compressible Euler and Navier–Stokes equations. At each time step, a lower-upper symmetric Gauss–Seidel preconditioned generalized minimal residual solver is used to solve the systems of linear equations arising from an explicit first stage, single diagonal coefficient, diagonally implicit Runge–Kutta time integration scheme. The performance of the developed method is assessed through a variety of unsteady flow problems. Numerical results indicate that this method is able to deliver the designed third-order accuracy of convergence in both space and time, while requiring remarkably less storage than standard third-order discontinuous Galerkin methods, and less computing time than lower-order discontinuous Galerkin methods to achieve the same level of temporal accuracy for computing unsteady flow problems.
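Claims of "designed third-order accuracy" are normally verified by computing the observed order of convergence from errors on two successively refined grids or time steps. A minimal sketch of that standard check (the error values below are invented for illustration):

```python
import math

def observed_order(err_coarse, err_fine, refine=2.0):
    """Observed order of accuracy from errors on two grids (or time
    steps) related by refinement ratio `refine`:
        p = log(e_coarse / e_fine) / log(refine)."""
    return math.log(err_coarse / err_fine) / math.log(refine)

# A genuinely third-order scheme halving h cuts the error about 8x:
print(round(observed_order(8.0e-4, 1.0e-4), 2))   # -> 3.0
```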
A comparison of matrix methods for calculating eigenvalues in acoustically lined ducts
NASA Technical Reports Server (NTRS)
Watson, W.; Lansing, D. L.
1976-01-01
Three approximate methods - finite differences, weighted residuals, and finite elements - were used to solve the eigenvalue problem which arises in finding the acoustic modes and propagation constants in an absorptively lined two-dimensional duct without airflow. The matrix equations derived for each of these methods were solved for the eigenvalues corresponding to various values of wall impedance. Two matrix orders, 20 x 20 and 40 x 40, were used. The cases considered included values of wall admittance for which exact eigenvalues were known and for which several nearly equal roots were present. Ten of the lower order eigenvalues obtained from the three approximate methods were compared with solutions calculated from the exact characteristic equation in order to make an assessment of the relative accuracy and reliability of the three methods. The best results were given by the finite element method using a cubic polynomial. Excellent accuracy was consistently obtained, even for nearly equal eigenvalues, by using a 20 x 20 order matrix.
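The finite-difference branch of such a comparison can be illustrated on the simplest related problem: y'' + λy = 0 with y(0) = y(1) = 0, a hard-walled analogue of the duct mode problem (the lined-duct case replaces these Dirichlet conditions with impedance conditions, and the eigenvalues become complex). Here the exact eigenvalues (kπ)² are known, so the discrete ones can be checked directly; the 40 x 40 matrix order mirrors the larger case in the abstract:

```python
import numpy as np

def fd_eigenvalues(n):
    """Eigenvalues of the standard second-order central-difference
    discretization of -y'' = lam*y, y(0)=y(1)=0, on n interior points
    of the unit interval. Exact continuous values are (k*pi)**2."""
    h = 1.0 / (n + 1)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.sort(np.linalg.eigvalsh(A))

lam = fd_eigenvalues(40)                  # a 40 x 40 matrix, as above
exact = (np.arange(1, 41) * np.pi) ** 2
print(abs(lam[0] - exact[0]) / exact[0])  # lowest mode: error below 0.1%
```

As in the study, accuracy degrades for the higher-order modes, which is why the comparison there examines only the ten lowest eigenvalues.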