Accuracy of a class of concurrent algorithms for transient finite element analysis
NASA Technical Reports Server (NTRS)
Ortiz, Michael; Sotelino, Elisa D.; Nour-Omid, Bahram
1988-01-01
The accuracy of a new class of concurrent procedures for transient finite element analysis is examined. A phase error analysis is carried out which shows that wave retardation leading to unacceptable loss of accuracy may occur if a Courant condition based on the dimensions of the subdomains is violated. Numerical tests suggest that this Courant condition is conservative for typical structural applications and may lead to a marked increase in accuracy as the number of subdomains is increased. Theoretical speed-up ratios are derived which suggest that the algorithms under consideration can be expected to exhibit a performance superior to that of globally implicit methods when implemented on parallel machines.
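The subdomain-based Courant condition described above can be illustrated with a small check of the explicit-update time step; the material values and the one-dimensional wave speed c = sqrt(E/rho) below are illustrative assumptions, not taken from the paper.

```python
import math

def courant_dt_limit(subdomain_size, youngs_modulus, density, safety=0.9):
    """Largest stable time step for explicit updates within a subdomain,
    assuming a 1-D dilatational wave speed c = sqrt(E/rho).  Values are
    illustrative, not taken from the paper."""
    wave_speed = math.sqrt(youngs_modulus / density)
    return safety * subdomain_size / wave_speed

# Example: a 0.5 m steel subdomain (E ~ 200 GPa, rho ~ 7850 kg/m^3)
dt_max = courant_dt_limit(0.5, 200e9, 7850.0)
print(f"Subdomain Courant limit: {dt_max:.2e} s")
```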
Lohmann, Philipp; Stoffels, Gabriele; Ceccon, Garry; Rapp, Marion; Sabel, Michael; Filss, Christian P; Kamp, Marcel A; Stegmayr, Carina; Neumaier, Bernd; Shah, Nadim J; Langen, Karl-Josef; Galldiks, Norbert
2017-07-01
We investigated the potential of textural feature analysis of O-(2-[18F]fluoroethyl)-L-tyrosine (18F-FET) PET to differentiate radiation injury from brain metastasis recurrence. Forty-seven patients with contrast-enhancing brain lesions (n = 54) on MRI after radiotherapy of brain metastases underwent dynamic 18F-FET PET. Tumour-to-brain ratios (TBRs) of 18F-FET uptake and 62 textural parameters were determined on summed images 20-40 min post-injection. Tracer uptake kinetics, i.e., time-to-peak (TTP) and patterns of time-activity curves (TAC), were evaluated on dynamic PET data from 0-50 min post-injection. The diagnostic accuracy of the investigated parameters, and of combinations thereof, to discriminate between brain metastasis recurrence and radiation injury was compared. Diagnostic accuracy increased from 81% for TBRmean alone to 85% when combined with the textural parameter Coarseness or Short-zone emphasis. The accuracy of TBRmax alone was 83% and increased to 85% after combination with the textural parameters Coarseness, Short-zone emphasis, or Correlation. Analysis of TACs resulted in an accuracy of 70% for kinetic pattern alone, which increased to 83% when combined with TBRmax. Textural feature analysis in combination with TBRs may have the potential to increase diagnostic accuracy for discrimination between brain metastasis recurrence and radiation injury, without the need for dynamic 18F-FET PET scans. • Textural feature analysis provides quantitative information about tumour heterogeneity • Textural features help improve discrimination between brain metastasis recurrence and radiation injury • Textural features might be helpful to further understand tumour heterogeneity • Analysis does not require a more time-consuming dynamic PET acquisition.
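As a rough illustration of the tumour-to-brain ratios used above, the sketch below computes TBRmean and TBRmax from a summed PET volume given tumour and background masks; the array shapes, mask placement, and random data are placeholders, and the authors' textural-feature pipeline is not reproduced.

```python
import numpy as np

def tumour_to_brain_ratios(pet_image, tumour_mask, background_mask):
    """Compute TBRmean and TBRmax from a summed 20-40 min PET frame.
    pet_image: 3-D array of activity; masks: boolean arrays of equal shape.
    This mirrors the usual definition (tumour uptake divided by mean
    background uptake); it is not the authors' exact pipeline."""
    background_mean = pet_image[background_mask].mean()
    tbr_mean = pet_image[tumour_mask].mean() / background_mean
    tbr_max = pet_image[tumour_mask].max() / background_mean
    return tbr_mean, tbr_max

# Toy example with random data standing in for a real 18F-FET PET volume
rng = np.random.default_rng(0)
img = rng.gamma(2.0, 1.0, size=(64, 64, 32))
tumour = np.zeros_like(img, dtype=bool); tumour[30:34, 30:34, 14:18] = True
background = np.zeros_like(img, dtype=bool); background[5:15, 5:15, 10:20] = True
print(tumour_to_brain_ratios(img, tumour, background))
```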
Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi
2015-01-01
Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that, it is required to simultaneously analyze a pair of two outcome measures such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and could be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models including the bivariate model and the hierarchical summary receiver operating characteristic model are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
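For context, the sketch below pools logit-transformed sensitivities across studies with a univariate DerSimonian-Laird model; this is only a simplified illustration (specificity would be pooled analogously), whereas the bivariate and HSROC models recommended above additionally model the correlation between sensitivity and specificity. The study counts are invented.

```python
import numpy as np

def dersimonian_laird_pool(effects, variances):
    """Univariate DerSimonian-Laird random-effects pooling."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = 1.0 / (variances + tau2)
    return np.sum(w_star * effects) / np.sum(w_star), tau2

# Per-study sensitivities as (true positives, false negatives), on the logit scale
tp = np.array([45, 30, 80, 22]); fn = np.array([5, 10, 12, 8])
logit_sens = np.log(tp / fn)            # logit of TP/(TP+FN)
var_logit = 1.0 / tp + 1.0 / fn         # approximate variance of the logit
pooled, tau2 = dersimonian_laird_pool(logit_sens, var_logit)
print("pooled sensitivity:", 1 / (1 + np.exp(-pooled)), "tau^2:", tau2)
```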
The Meta-Analysis of Clinical Judgment Project: Effects of Experience on Judgment Accuracy
ERIC Educational Resources Information Center
Spengler, Paul M.; White, Michael J.; Aegisdottir, Stefania; Maugherman, Alan S.; Anderson, Linda A.; Cook, Robert S.; Nichols, Cassandra N.; Lampropoulos, Georgios K.; Walker, Blain S.; Cohen, Genna R.; Rush, Jeffrey D.
2009-01-01
Clinical and educational experience is one of the most commonly studied variables in clinical judgment research. Contrary to clinicians' perceptions, clinical judgment researchers have generally concluded that accuracy does not improve with increased education, training, or clinical experience. In this meta-analysis, the authors synthesized…
Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study.
Eggenberger, Noëmi; Preisig, Basil C; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M
2016-01-01
Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase the accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.
NASA Astrophysics Data System (ADS)
Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul
2018-07-01
Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office interpreted remotely sensed data was the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles in using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches respectively on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
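The per-pixel versus per-polygon distinction discussed above can be illustrated with a toy assessment sample: scoring each reference object once versus weighting agreement by object area gives different accuracy figures. The classes and areas below are invented.

```python
import numpy as np

# Hypothetical assessment sample: one row per reference polygon/object
# (mapped class, reference class, polygon area in pixels)
samples = [("urban", "urban", 420), ("forest", "forest", 1300),
           ("urban", "forest", 60),  ("water", "water", 900),
           ("forest", "urban", 35),  ("urban", "urban", 250)]

per_polygon = np.mean([m == r for m, r, _ in samples])          # each object counts once
areas = np.array([a for _, _, a in samples], float)
agree = np.array([m == r for m, r, _ in samples], float)
per_pixel = np.sum(agree * areas) / np.sum(areas)               # area-weighted agreement

print(f"per-polygon accuracy: {per_polygon:.2%}")
print(f"per-pixel (area-weighted) accuracy: {per_pixel:.2%}")
```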
Thomas, Richard M; Parks, Connie L; Richard, Adam H
2016-09-01
A common task in forensic anthropology involves the estimation of the biological sex of a decedent by exploiting the sexual dimorphism between males and females. Estimation methods are often based on analysis of skeletal collections of known sex and most include a research-based accuracy rate. However, the accuracy rates of sex estimation methods in actual forensic casework have rarely been studied. This article uses sex determinations based on DNA results from 360 forensic cases to develop accuracy rates for sex estimations conducted by forensic anthropologists. The overall rate of correct sex estimation from these cases is 94.7% with increasing accuracy rates as more skeletal material is available for analysis and as the education level and certification of the examiner increases. Nine of 19 incorrect assessments resulted from cases in which one skeletal element was available, suggesting that the use of an "undetermined" result may be more appropriate for these cases. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach
NASA Astrophysics Data System (ADS)
Xiao, T.
2012-12-01
One of the most important components of urban land cover mapping is map accuracy assessment. Many statistical models have been developed to help design simple sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is therefore crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
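A minimal sketch of the kind of cost-benefit calculation described above, assuming a standard binomial sample-size formula and made-up per-site cost figures (these are not the study's actual cost model):

```python
import math

def binomial_sample_size(expected_accuracy, half_width, z=1.96):
    """n needed so a proportion is estimated within +/- half_width
    at ~95% confidence (z = 1.96)."""
    p = expected_accuracy
    return math.ceil(z ** 2 * p * (1 - p) / half_width ** 2)

def assessment_cost(n, transport_per_site=12.0, field_per_site=8.0, lab_per_site=3.0):
    """Total cost as the sum of transportation, field collection and
    laboratory analysis costs; the unit costs here are placeholders."""
    return n * (transport_per_site + field_per_site + lab_per_site)

for half_width in (0.10, 0.05, 0.025):
    n = binomial_sample_size(0.85, half_width)
    print(f"+/-{half_width:.3f}: n = {n:4d}, cost = ${assessment_cost(n):,.0f}")
```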
Accuracy of active chirp linearization for broadband frequency modulated continuous wave ladar.
Barber, Zeb W; Babbitt, Wm Randall; Kaylor, Brant; Reibel, Randy R; Roos, Peter A
2010-01-10
As the bandwidth and linearity of frequency-modulated continuous-wave chirp ladar increase, the resulting range resolution, precision, and accuracy improve correspondingly. An analysis of a very broadband (several THz) and highly linear (<1 ppm) chirped ladar system based on active chirp linearization is presented. Residual chirp nonlinearity and material dispersion are analyzed with respect to their effect on the dynamic range, precision, and accuracy of the system. Measurement precision and accuracy approaching the part-per-billion level are predicted.
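The link between chirp bandwidth and range resolution can be sketched with the usual transform-limited relation ΔR = c/(2B); this is a textbook relation used for illustration, not the paper's full accuracy analysis.

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_resolution(bandwidth_hz):
    """Transform-limited range resolution of a linear FMCW chirp: c / (2B)."""
    return C / (2.0 * bandwidth_hz)

for bw in (100e9, 1e12, 5e12):   # 100 GHz, 1 THz, 5 THz of chirp bandwidth
    print(f"B = {bw/1e12:4.2f} THz -> delta R = {fmcw_range_resolution(bw)*1e6:7.1f} um")
```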
NASA Astrophysics Data System (ADS)
Iino, Shota; Ito, Riho; Doi, Kento; Imaizumi, Tomoyuki; Hikosaka, Shuhei
2017-10-01
In developing countries, urban areas are expanding rapidly, so short-term monitoring of urban change is important. Constant observation and the creation of urban distribution maps that are highly accurate and free of noise are the key issues for such short-term monitoring. SAR satellites are well suited to this type of study because they can observe day or night, regardless of atmospheric and weather conditions. The current study presents a methodology for generating high-accuracy urban distribution maps from SAR satellite imagery using a Convolutional Neural Network (CNN), which has shown outstanding results for image classification. Several improvements in SAR polarization combinations and dataset construction were made to increase the accuracy. As additional data, a Digital Surface Model (DSM), which is useful for classifying land cover, was added to further improve the accuracy. From the obtained results, a high-accuracy urban distribution map satisfying the quality requirements for short-term monitoring was generated. For the evaluation, urban changes were extracted by taking the difference between urban distribution maps. The change analysis with a time series of images revealed the locations of short-term urban change. Comparisons with optical satellites were performed to validate the results. Finally, an analysis of urban changes combining X-band, L-band and C-band SAR satellites was attempted to increase the opportunity of acquiring satellite imagery. Further analysis will be conducted as future work of the present study.
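A minimal sketch of a patch-classification CNN in the spirit of the approach described above; the channel count (two SAR polarizations plus a coherence layer and a DSM band), architecture, and patch size are assumptions for illustration, not the authors' network.

```python
import torch
import torch.nn as nn

class UrbanPatchCNN(nn.Module):
    """Tiny CNN that labels an image patch as urban / non-urban.
    Inputs: 4 channels -- an illustrative choice, not the paper's setup."""
    def __init__(self, in_channels=4, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = UrbanPatchCNN()
dummy_batch = torch.randn(8, 4, 32, 32)        # eight 32x32 patches
print(model(dummy_batch).shape)                # -> torch.Size([8, 2])
```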
Quantitative analysis of peel-off degree for printed electronics
NASA Astrophysics Data System (ADS)
Park, Janghoon; Lee, Jongsu; Sung, Ki-Hak; Shin, Kee-Hyun; Kang, Hyunkyoo
2018-02-01
We suggest a facile methodology for evaluating the peel-off degree of printed electronics by image processing. The quantification of peeled and printed areas was performed using open-source programs. To verify the accuracy of the method, we manually removed areas from the printed circuit and measured them, resulting in 96.3% accuracy. The sintered patterns showed a decreasing tendency as the energy density of the infrared lamp increased, while the peel-off degree increased, and a comparison between both results is presented. Finally, the correlation between performance characteristics was determined by quantitative analysis.
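A possible way to quantify the peel-off degree from binary before/after masks, shown here as a hedged sketch; the thresholding step that would produce the masks from photographs is assumed and not shown.

```python
import numpy as np

def peel_off_degree(printed_mask, after_peel_mask):
    """Fraction of the originally printed area removed by peeling.
    Both inputs are boolean arrays (True = ink present), e.g. obtained by
    thresholding before/after images; the threshold step is not shown."""
    printed_area = printed_mask.sum()
    remaining_area = np.logical_and(printed_mask, after_peel_mask).sum()
    return 1.0 - remaining_area / printed_area

# Toy example: 10% of the printed pixels are removed by the tape test
printed = np.ones((100, 100), dtype=bool)
after = printed.copy()
after[:10, :] = False
print(f"peel-off degree: {peel_off_degree(printed, after):.1%}")
```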
Dedoncker, Josefien; Brunoni, Andre R; Baeken, Chris; Vanderhasselt, Marie-Anne
2016-01-01
Research into the effects of transcranial direct current stimulation (tDCS) of the dorsolateral prefrontal cortex (DLPFC) on cognitive functioning is increasing rapidly. However, methodological heterogeneity in prefrontal tDCS research is also increasing, particularly in the technical stimulation parameters that might influence tDCS effects. To systematically examine the influence of technical stimulation parameters on DLPFC-tDCS effects, we performed a systematic review and meta-analysis of tDCS studies targeting the DLPFC published from the first data available to February 2016. Only single-session, sham-controlled, within-subject studies reporting the effects of tDCS on cognition in healthy controls and neuropsychiatric patients were included. Evaluation of 61 studies showed that after single-session anodal tDCS (a-tDCS), but not cathodal tDCS (c-tDCS), participants responded faster and more accurately on cognitive tasks. Sub-analyses specified that following a-tDCS, healthy subjects responded faster, while neuropsychiatric patients responded more accurately. Importantly, different stimulation parameters affected a-tDCS effects, but not c-tDCS effects, on accuracy in healthy samples versus patient samples: increased current density and charge density resulted in improved accuracy in healthy samples, most prominently in females; for neuropsychiatric patients, task performance during a-tDCS resulted in stronger increases in accuracy rates than task performance following a-tDCS. Healthy participants thus respond faster, but not more accurately, on cognitive tasks after a-tDCS. However, increasing the current density and/or charge might enhance response accuracy, particularly in females. In contrast, in neuropsychiatric patients, online task performance leads to greater increases in response accuracy than offline task performance. Possible implications and practical recommendations are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.
Research on effect of rough surface on FMCW laser radar range accuracy
NASA Astrophysics Data System (ADS)
Tao, Huirong
2018-03-01
Large-scale measurement systems for non-cooperative targets based on frequency-modulated continuous-wave (FMCW) laser detection and ranging technology have broad application prospects, since measurement can easily be automated without cooperative targets. However, the complexity and diversity of the characteristics of the measured surface directly affect the measurement accuracy. First, a theoretical analysis of the range accuracy of an FMCW laser radar was carried out, and the relationship between surface reflectivity and accuracy was obtained. Then, to verify the effect of surface reflectance on ranging accuracy, a standard tool ball and three standard roughness samples were measured at distances from 7 m to 24 m, and the uncertainty of each target was obtained. The results show that measurement accuracy increases as surface reflectivity increases, and good agreement was obtained between the theoretical analysis and the measurements from rough surfaces. Moreover, when the laser spot diameter is smaller than the surface correlation length, a multi-point averaged measurement can reduce the measurement uncertainty. The experimental results show that this method is feasible.
NASA Technical Reports Server (NTRS)
Smith, D. R.
1982-01-01
The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a Barnes-type scheme for the analysis of surface meteorological data. Modifications are introduced to the original version in order to increase its flexibility and to permit greater ease of usage. The code was rewritten for an interactive computer environment. Furthermore, a multiple iteration technique suggested by Barnes was implemented for greater accuracy. PROAM was subjected to a series of experiments in order to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution in order to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple iteration technique increases the accuracy of the analysis. Furthermore, the tests verify appropriate values for the analysis parameters in resolving meso-beta scale phenomena.
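A minimal sketch of a two-pass Barnes successive-correction analysis, the general technique underlying a Barnes-type scheme such as PROAM; the Gaussian weight parameter kappa, the convergence factor gamma, and the synthetic observations are illustrative assumptions, not PROAM's actual settings.

```python
import numpy as np

def barnes_pass(grid_xy, obs_xy, obs_val, kappa):
    """Single Barnes pass: Gaussian-weighted average of observations."""
    d2 = ((grid_xy[:, None, :] - obs_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / kappa)
    return (w * obs_val).sum(1) / w.sum(1)

def barnes_analysis(grid_xy, obs_xy, obs_val, kappa=2.0, gamma=0.3):
    """Two-pass Barnes analysis: first guess plus a correction pass with a
    sharper response (kappa * gamma), as in multiple-iteration schemes."""
    first = barnes_pass(grid_xy, obs_xy, obs_val, kappa)
    obs_first = barnes_pass(obs_xy, obs_xy, obs_val, kappa)   # first guess at obs sites
    correction = barnes_pass(grid_xy, obs_xy, obs_val - obs_first, kappa * gamma)
    return first + correction

# Toy surface-temperature example on a small grid
rng = np.random.default_rng(1)
obs_xy = rng.uniform(0, 10, size=(25, 2))
obs_val = np.sin(obs_xy[:, 0]) + 0.1 * rng.standard_normal(25)
gx, gy = np.meshgrid(np.linspace(0, 10, 21), np.linspace(0, 10, 21))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
print(barnes_analysis(grid_xy, obs_xy, obs_val).shape)   # (441,)
```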
McMorris, Terry; Sproule, John; Turner, Anthony; Hale, Beverley J
2011-03-01
The purpose of this study was to compare, using meta-analytic techniques, the effect of acute, intermediate intensity exercise on the speed and accuracy of performance of working memory tasks. It was hypothesized that acute, intermediate intensity exercise would have a significant beneficial effect on response time and that effect sizes for response time and accuracy data would differ significantly. Random-effects meta-analysis showed a significant, beneficial effect size for response time, g=-1.41 (p<0.001) but a significant detrimental effect size, g=0.40 (p<0.01), for accuracy. There was a significant difference between effect sizes (Z(diff)=3.85, p<0.001). It was concluded that acute, intermediate intensity exercise has a strong beneficial effect on speed of response in working memory tasks but a low to moderate, detrimental one on accuracy. There was no support for a speed-accuracy trade-off. It was argued that exercise-induced increases in brain concentrations of catecholamines result in faster processing but increases in neural noise may negatively affect accuracy. 2010 Elsevier Inc. All rights reserved.
Computer Surveillance of Hospital-Acquired Infections: A 25 year Update
Evans, R. Scott; Abouzelof, Rouett H.; Taylor, Caroline W.; Anderson, Vickie; Sumner, Sharon; Soutter, Sharon; Kleckner, Ruth; Lloyd, James F.
2009-01-01
Hospital-acquired infections (HAIs) are a significant cause of patient harm and increased healthcare cost. Many states have instituted mandatory hospital-wide reporting of HAIs which will increase the workload of infection preventionists and the Center for Medicare and Medicaid Services is no longer paying hospitals to treat certain HAIs. These competing priorities for increased reporting and prevention have many hospitals worried. Manual surveillance of HAIs cannot provide the speed, accuracy and consistency of computerized surveillance. Computer tools can also improve the speed and accuracy of HAI analysis and reporting. Computerized surveillance for HAIs was implemented at LDS Hospital in 1984, but that system required manual entry of data for analysis and reporting. This paper reports on the current functionality and status of the updated computer system for HAI surveillance, analysis and reporting used at LDS Hospital and the 21 other Intermountain Healthcare hospitals. PMID:20351845
Camera system considerations for geomorphic applications of SfM photogrammetry
Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John
2017-01-01
The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. Use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community despite the details of these instruments being largely overlooked in current SfM literature. A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating accuracies of four SfM datasets conducted over multiple years of a gravel bed river floodplain using independent ground check points with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principal components of an SfM error budget. Greater information capacity of source imagery was found to increase pixel matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6–37 cm) was sufficient to detect moderate geomorphic change (order 100,000 m3) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5–25% and decreased processing time by 10–30%. Regression analysis of 67 reviewed datasets revealed that the best explanatory variable to predict accuracy of SfM data is photographic scale. Despite the prevalent use of object distance ratios to describe scale, nominal ground sample distance is shown to be a superior metric, explaining 68% of the variability in mean absolute vertical error.
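The point about nominal ground sample distance being a better scale metric can be illustrated with the standard pinhole relation GSD = pixel pitch × object distance / focal length; the camera values below are assumptions for illustration only.

```python
def ground_sample_distance(pixel_pitch_mm, focal_length_mm, object_distance_m):
    """Nominal GSD (m/pixel): ground footprint of one pixel for a nadir view."""
    return pixel_pitch_mm / focal_length_mm * object_distance_m

# Illustrative consumer-grade camera: 4.8 um pixels, 24 mm lens, 80 m flying height
gsd = ground_sample_distance(0.0048, 24.0, 80.0)
print(f"GSD = {gsd*100:.1f} cm/pixel")   # ~1.6 cm/pixel
```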
On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods
NASA Technical Reports Server (NTRS)
Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.
2003-01-01
Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
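One common way to exploit inexpensive sensitivity derivatives for variance reduction is to use the first-order Taylor expansion as a control variate in Monte Carlo sampling; the sketch below shows that generic idea on a toy response function and is not the paper's exact estimator.

```python
import numpy as np

def mc_with_gradient_control_variate(f, grad_f, mu, sigma, n=10_000, seed=0):
    """Monte Carlo estimate of E[f(X)], X ~ N(mu, sigma^2 I), using the
    first-order Taylor expansion L(x) = f(mu) + grad_f(mu).(x - mu) as a
    control variate (E[L] = f(mu) is known exactly)."""
    rng = np.random.default_rng(seed)
    x = mu + sigma * rng.standard_normal((n, len(mu)))
    fx = np.array([f(xi) for xi in x])
    lx = f(mu) + (x - mu) @ grad_f(mu)
    beta = np.cov(fx, lx)[0, 1] / np.var(lx)          # estimated CV coefficient
    cv_est = np.mean(fx - beta * (lx - f(mu)))
    return np.mean(fx), cv_est                        # plain MC vs CV estimate

# Toy "structural response": plain MC vs the variance-reduced estimate
f = lambda x: np.sin(x[0]) + 0.5 * x[1] ** 2
grad_f = lambda x: np.array([np.cos(x[0]), x[1]])
print(mc_with_gradient_control_variate(f, grad_f, np.array([0.3, 1.0]), 0.1))
```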
Hyperspectral analysis of seagrass in Redfish Bay, Texas
NASA Astrophysics Data System (ADS)
Wood, John S.
Remote sensing using multi- and hyperspectral imaging and analysis has been used in resource management for quite some time, and for a variety of purposes. In the studies to follow, hyperspectral imagery of Redfish Bay is used to discriminate between species of seagrasses found below the water surface. Water attenuates and reflects light and energy from the electromagnetic spectrum, and as a result, subsurface analysis can be more complex than that performed in the terrestrial world. In the following studies, an iterative process is developed, using ENVI image processing software and ArcGIS software. Band selection was based on recommendations developed empirically in conjunction with ongoing research into depth corrections, which were applied to the imagery bands (a default depth of 65 cm was used). Polygons generated, classified and aggregated within ENVI are reclassified in ArcGIS using field site data that was randomly selected for that purpose. After the first iteration, polygons that remain classified as 'Mixed' are subjected to another iteration of classification in ENVI, then brought into ArcGIS and reclassified. Finally, when that classification scheme is exhausted, a supervised classification is performed, using a 'Maximum Likelihood' classification technique, which assigned the remaining polygons to the classification that was most like the training polygons, by digital number value. Producer's Accuracy by classification ranged from 23.33 % for the 'MixedMono' class to 66.67% for the 'Bare' class; User's Accuracy by classification ranged from 22.58% for the 'MixedMono' class to 69.57% for the 'Bare' classification. An overall accuracy of 37.93% was achieved. Producers and Users Accuracies for Halodule were 29% and 39%, respectively; for Thalassia, they were 46% and 40%. Cohen's Kappa Coefficient was calculated at .2988. We then returned to the field and collected spectral signatures of monotypic stands of seagrass at varying depths and at three sensor levels: above the water surface, just below the air/water interface, and at the canopy position, when it differed from the subsurface position. Analysis of plots of these spectral curves, after applying depth corrections and Multiplicative Scatter Correction, indicates that there are detectable spectral differences between Halodule and Thalassia species at all three positions. Further analysis indicated that only above-surface spectral signals could reliably be used to discriminate between species, because there was an overlap of the standard deviations in the other two positions. A recommendation for wavelengths that would produce increased accuracy in hyperspectral image analysis was made, based on areas where there is a significant amount of difference between the mean spectral signatures, and no overlap of the standard deviations in our samples. The original hyperspectral imagery was reprocessed, using the bands recommended from the research above (approximately 535, 600, 620, 638, and 656 nm). A depth raster was developed from various available sources, which was resampled and reclassified to reflect values for water absorption and water scattering, which were then applied to each band using the depth correction algorithm. Processing followed the iterative classification methods described above. Accuracy for this round of processing improved; overall accuracy increased from 38% to 57%. 
Improvements were noted in Producer's Accuracy, with the 'Bare' classification increasing from 67% to 73%, Halodule increasing from 29% to 63%, Thalassia increasing slightly, from 46% to 50%, and 'MixedMono' improving from 23% to 42%. User's Accuracy also improved, with the 'Bare' class increasing from 69% to 70%, Halodule increasing from 39% to 67%, Thalassia increasing from 40% to 7%, and 'MixedMono' increasing from 22.5% to 35%. A very recent report shows the mean percent cover of seagrasses in Redfish Bay and Corpus Christi Bay combined for all species at 68.6%, and individually by species: Halodule 39.8%, Thalassia 23.7%, Syringodium 4%, Ruppia 1% and Halophila 0.1%. Our study classifies 15% as 'Bare', 23% Halodule, 18% Thalassia, and 2% Ruppia. In addition, we classify 5% as 'Mixed', 22% as 'MixedMono', 12% as 'Bare/Halodule Mix', and 3% 'Bare/Thalassia Mix'. Aggregating the 'Bare' and 'Bare/species' classes would equate to approximately 30%, very close to what this new study produces. Other classes are quite similar, when considering that their study includes no 'Mixed' classifications. This series of research studies illustrates the application and utility of hyperspectral imagery and associated processing to mapping shallow benthic habitats. It also demonstrates that the technology is rapidly changing and adapting, which will lead to even further increases in accuracy. Future studies with hyperspectral imaging should include extensive spectral field collection, and the application of a depth correction.
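For reference, the accuracy metrics reported throughout these studies (overall, producer's, and user's accuracy, and Cohen's kappa) all derive from a confusion matrix; the sketch below uses an invented matrix, not the study's actual counts.

```python
import numpy as np

def accuracy_metrics(cm):
    """cm[i, j] = count of samples mapped as class i with reference class j."""
    total = cm.sum()
    overall = np.trace(cm) / total
    users = np.diag(cm) / cm.sum(axis=1)       # rows = map classes (commission)
    producers = np.diag(cm) / cm.sum(axis=0)   # columns = reference classes (omission)
    pe = np.sum(cm.sum(axis=1) * cm.sum(axis=0)) / total ** 2
    kappa = (overall - pe) / (1.0 - pe)        # Cohen's kappa
    return overall, users, producers, kappa

# Hypothetical 3-class confusion matrix (e.g. Bare, Halodule, Thalassia)
cm = np.array([[32, 6, 8],
               [ 5, 25, 10],
               [ 9, 9, 20]])
overall, users, producers, kappa = accuracy_metrics(cm)
print(overall, users.round(2), producers.round(2), round(kappa, 3))
```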
Chandon, Pierre; Ordabayeva, Nailya
2017-02-01
Five studies show that people, including experts such as professional chefs, estimate quantity decreases more accurately than quantity increases. We argue that this asymmetry occurs because physical quantities cannot be negative. Consequently, there is a natural lower bound (zero) when estimating decreasing quantities but no upper bound when estimating increasing quantities, which can theoretically grow to infinity. As a result, the "accuracy of less" disappears (a) when a numerical or a natural upper bound is present when estimating quantity increases, or (b) when people are asked to estimate the (unbounded) ratio of change from 1 size to another for both increasing and decreasing quantities. Ruling out explanations related to loss aversion, symbolic number mapping, and the visual arrangement of the stimuli, we show that the "accuracy of less" influences choice and demonstrate its robustness in a meta-analysis that includes previously published results. Finally, we discuss how the "accuracy of less" may explain asymmetric reactions to the supersizing and downsizing of food portions, some instances of the endowment effect, and asymmetries in the perception of increases and decreases in physical and psychological distance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kurniawan, Dian; Suparti; Sugito
2018-05-01
Population growth in Indonesia has increased every year. According to the population census conducted by the Central Bureau of Statistics (BPS) in 2010, the population of Indonesia had reached 237.6 million people. Therefore, to control the population growth rate, the government runs a Family Planning, or Keluarga Berencana (KB), program for couples of childbearing age. The purpose of this program is to improve the health of mothers and children and thereby realize a prosperous society by controlling births while keeping population growth under control. The data used in this study are the updated family data of Semarang city in 2016 collected by the National Family Planning Coordinating Board (BKKBN). From these data, classifiers are obtained with kernel discriminant analysis, and the classification accuracy of that method is then evaluated. The results of the analysis show that normal kernel discriminant analysis gives 71.05% classification accuracy with a 28.95% classification error, whereas triweight kernel discriminant analysis gives 73.68% classification accuracy with a 26.32% classification error. For processing the family planning participation data of childbearing-age couples in Semarang City in 2016, triweight kernel discriminant analysis can therefore be considered better than normal kernel discriminant analysis.
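A minimal sketch of a kernel discriminant (kernel-density) classifier with the normal and triweight kernels compared above; the bandwidth, two-dimensional synthetic data, and class labels are placeholders, not the BKKBN data.

```python
import numpy as np

def normal_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def triweight_kernel(u):
    return np.where(np.abs(u) <= 1, 35 / 32 * (1 - u ** 2) ** 3, 0.0)

def kde(x, sample, h, kernel):
    """Product of 1-D kernels over features, averaged over training points."""
    u = (x[:, None, :] - sample[None, :, :]) / h          # (n_eval, n_train, d)
    return np.mean(np.prod(kernel(u) / h, axis=2), axis=1)

def kernel_discriminant_predict(x, class_samples, h, kernel):
    """Assign each row of x to the class with the highest prior * density."""
    n_total = sum(len(s) for s in class_samples)
    scores = np.column_stack([
        (len(s) / n_total) * kde(x, s, h, kernel) for s in class_samples])
    return scores.argmax(axis=1)

# Toy two-class example standing in for KB participation vs non-participation
rng = np.random.default_rng(2)
class0 = rng.normal([0, 0], 1.0, size=(80, 2))
class1 = rng.normal([2, 2], 1.0, size=(60, 2))
test = np.array([[0.2, -0.1], [2.1, 1.8]])
for kern in (normal_kernel, triweight_kernel):
    print(kern.__name__,
          kernel_discriminant_predict(test, [class0, class1], h=0.8, kernel=kern))
```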
Mossahebi, Pouria; Yarnykh, Vasily L.; Samsonov, Alexey
2015-12-01
[Fragmentary record] Cross-relaxation imaging (CRI) is a family of quantitative MRI methods. The recoverable text describes a series of simulations, with other parameters matching a previous simulation, that evaluate the effect of gradient and RF spoiling on the accuracy of flip angle (FA) quantification; it also notes that this increase offers an opportunity to increase the length of the spoiler gradient and improve the accuracy of FA quantification.
Memon, S; Lynch, A C; Bressel, M; Wise, A G; Heriot, A G
2015-09-01
Restaging imaging by MRI or endorectal ultrasound (ERUS) following neoadjuvant chemoradiotherapy is not routinely performed, but the assessment of response is becoming increasingly important to facilitate individualization of management. A search of the MEDLINE and Scopus databases was performed for studies that evaluated the accuracy of restaging of rectal cancer following neoadjuvant chemoradiotherapy with MRI or ERUS against the histopathological outcome. A systematic review of selected studies was performed. The methodological quality of studies that qualified for meta-analysis was critically assessed to identify studies suitable for inclusion in the meta-analysis. Sixty-three articles were included in the systematic review. Twelve restaging MRI studies and 18 restaging ERUS studies were eligible for meta-analysis of T-stage restaging accuracy. Overall, ERUS T-stage restaging accuracy (mean [95% CI]: 65% [56-72%]) was nonsignificantly higher than MRI T-stage accuracy (52% [44-59%]). Restaging MRI is accurate at excluding circumferential resection margin involvement. Restaging MRI and ERUS were equivalent for prediction of nodal status: the accuracy of both investigations was 72% with over-staging and under-staging occurring in 10-15%. The heterogeneity amongst restaging studies is high, limiting conclusive findings regarding their accuracies. The accuracy of restaging imaging is different for different pathological T stages and highest for T3 tumours. Morphological assessment of T- or N-stage by MRI or ERUS is currently not accurate or consistent enough for clinical application. Restaging MRI appears to have a role in excluding circumferential resection margin involvement. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.
Effects of additional data on Bayesian clustering.
Yamazaki, Keisuke
2017-10-01
Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper presents a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
EEG channels reduction using PCA to increase XGBoost's accuracy for stroke detection
NASA Astrophysics Data System (ADS)
Fitriah, N.; Wijaya, S. K.; Fanany, M. I.; Badri, C.; Rezal, M.
2017-07-01
In Indonesia, based on the results of Basic Health Research 2013, the number of stroke patients increased from 8.3‰ (2007) to 12.1‰ (2013). These days, some researchers use electroencephalography (EEG) results as another option to detect stroke besides CT scan images, the gold standard. A previous study on data from stroke patients and healthy subjects at the National Brain Center Hospital (RS PON) used the Brain Symmetry Index (BSI), Delta-Alpha Ratio (DAR), and Delta-Theta-Alpha-Beta Ratio (DTABR) as features for classification by an Extreme Learning Machine (ELM), and achieved 85% accuracy with sensitivity above 86% for acute ischemic stroke detection. Using EEG data means dealing with many data dimensions, which can reduce the accuracy of a classifier (the curse of dimensionality). Principal Component Analysis (PCA) can reduce dimensionality and computation cost without decreasing classification accuracy. XGBoost, a scalable tree boosting classifier, can solve real-world-scale problems (e.g., the Higgs Boson and Allstate datasets) using a minimal amount of resources. This paper reuses the same data from RS PON and the features from previous research, preprocessed with PCA and classified with XGBoost, to increase the accuracy with fewer electrodes. The specific reduced set of electrodes improved the accuracy of stroke detection. Our future work will examine algorithms other than PCA to obtain higher accuracy with fewer channels.
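A hedged sketch of the PCA-then-XGBoost pipeline described above; the feature matrix, labels, number of components, and hyperparameters are placeholders rather than the study's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

# Placeholder data: 200 subjects x 96 EEG features (e.g. BSI/DAR/DTABR per channel)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 96))
y = rng.integers(0, 2, 200)            # 0 = healthy, 1 = stroke (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),            # keep components explaining 95% of variance
    XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                  eval_metric="logloss"),
)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```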
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
2017-01-31
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
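For context, the expected improvement (EI) criterion that the EQIE scheme is compared against can be sketched directly from a Gaussian process posterior; the EQI/EQIE extensions to tunable accuracy are not reproduced here, and the posterior values below are placeholders.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Classic EI for minimization, given GP posterior mean/std at candidates."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Placeholder GP posterior at five candidate inputs
mu = np.array([1.2, 0.8, 1.0, 0.5, 0.9])
sigma = np.array([0.05, 0.30, 0.10, 0.40, 0.20])
f_best = 0.7                      # best observed objective so far
ei = expected_improvement(mu, sigma, f_best)
print("next run at candidate", int(np.argmax(ei)), "with EI =", ei.max().round(3))
```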
Guo, L B; Hao, Z Q; Shen, M; Xiong, W; He, X N; Xie, Z Q; Gao, M; Li, X Y; Zeng, X Y; Lu, Y F
2013-07-29
To improve the accuracy of quantitative analysis in laser-induced breakdown spectroscopy, the plasma produced by a Nd:YAG laser from steel targets was confined by a cavity. A number of elements with low concentrations, such as vanadium (V), chromium (Cr), and manganese (Mn), in the steel samples were investigated. After the optimization of the cavity dimension and laser fluence, significant enhancement factors of 4.2, 3.1, and 2.87 in the emission intensity of V, Cr, and Mn lines, respectively, were achieved at a laser fluence of 42.9 J/cm(2) using a hemispherical cavity (diameter: 5 mm). More importantly, the correlation coefficient of the V I 440.85/Fe I 438.35 nm was increased from 0.946 (without the cavity) to 0.981 (with the cavity); and similar results for Cr I 425.43/Fe I 425.08 nm and Mn I 476.64/Fe I 492.05 nm were also obtained. Therefore, it was demonstrated that the accuracy of quantitative analysis with low concentration elements in steel samples was improved, because the plasma became uniform with spatial confinement. The results of this study provide a new pathway for improving the accuracy of quantitative analysis of LIBS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arimura, Hidetaka, E-mail: arimurah@med.kyushu-u.ac.jp; Kamezawa, Hidemi; Jin, Ze
Good relationships between computational image analysis and radiological physics have been constructed for increasing the accuracy of medical diagnostic imaging and radiation therapy in radiological physics. Computational image analysis has been established based on applied mathematics, physics, and engineering. This review paper will introduce how computational image analysis is useful in radiation therapy with respect to radiological physics.
Identification accuracy of children versus adults: a meta-analysis.
Pozzulo, J D; Lindsay, R C
1998-10-01
Identification accuracy of children and adults was examined in a meta-analysis. Preschoolers (M = 4 years) were less likely than adults to make correct identifications. Children over the age of 5 did not differ significantly from adults with regard to correct identification rate. Children of all ages examined were less likely than adults to correctly reject a target-absent lineup. Even adolescents (M = 12-13 years) did not reach an adult rate of correct rejection. Compared to simultaneous lineup presentation, sequential lineups increased the child-adult gap for correct rejections. Providing child witnesses with identification practice or training did not increase their correct rejection rates. Suggestions for children's inability to correctly reject target-absent lineups are discussed. Future directions for identification research are presented.
Shear Recovery Accuracy in Weak-Lensing Analysis with the Elliptical Gauss-Laguerre Method
NASA Astrophysics Data System (ADS)
Nakajima, Reiko; Bernstein, Gary
2007-04-01
We implement the elliptical Gauss-Laguerre (EGL) galaxy-shape measurement method proposed by Bernstein & Jarvis and quantify the shear recovery accuracy in weak-lensing analysis. This method uses a deconvolution fitting scheme to remove the effects of the point-spread function (PSF). The test simulates >10^7 noisy galaxy images convolved with anisotropic PSFs and attempts to recover an input shear. The tests are designed to be immune to statistical (random) distributions of shapes, selection biases, and crowding, in order to test more rigorously the effects of detection significance (signal-to-noise ratio [S/N]), PSF, and galaxy resolution. The systematic error in shear recovery is divided into two classes, calibration (multiplicative) and additive, with the latter arising from PSF anisotropy. At S/N > 50, the deconvolution method measures the galaxy shape and input shear to ~1% multiplicative accuracy and suppresses >99% of the PSF anisotropy. These systematic errors increase to ~4% for the worst conditions, with poorly resolved galaxies at S/N ≈ 20. The EGL weak-lensing analysis has the best demonstrated accuracy to date, sufficient for the next generation of weak-lensing surveys.
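The calibration (multiplicative) and additive error classes mentioned above are conventionally summarized as g_meas = (1 + m) g_true + c; the sketch below fits m and c from synthetic shear-recovery results, not from the paper's simulations.

```python
import numpy as np

# Synthetic shear-recovery test: true input shears and noisy measured shears
rng = np.random.default_rng(3)
g_true = np.linspace(-0.05, 0.05, 21)
m_true, c_true = 0.01, 2e-4                      # 1% calibration bias, small additive bias
g_meas = (1 + m_true) * g_true + c_true + 1e-4 * rng.standard_normal(g_true.size)

# Fit g_meas = (1 + m) g_true + c  ->  slope = 1 + m, intercept = c
slope, intercept = np.polyfit(g_true, g_meas, 1)
print(f"multiplicative bias m = {slope - 1:+.4f}, additive bias c = {intercept:+.2e}")
```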
Wong, Yau; Chao, Jerry; Lin, Zhiping; Ober, Raimund J.
2014-01-01
In fluorescence microscopy, high-speed imaging is often necessary for the proper visualization and analysis of fast subcellular dynamics. Here, we examine how the speed of image acquisition affects the accuracy with which parameters such as the starting position and speed of a microscopic non-stationary fluorescent object can be estimated from the resulting image sequence. Specifically, we use a Fisher information-based performance bound to investigate the detector-dependent effect of frame rate on the accuracy of parameter estimation. We demonstrate that when a charge-coupled device detector is used, the estimation accuracy deteriorates as the frame rate increases beyond a point where the detector’s readout noise begins to overwhelm the low number of photons detected in each frame. In contrast, we show that when an electron-multiplying charge-coupled device (EMCCD) detector is used, the estimation accuracy improves with increasing frame rate. In fact, at high frame rates where the low number of photons detected in each frame renders the fluorescent object difficult to detect visually, imaging with an EMCCD detector represents a natural implementation of the Ultrahigh Accuracy Imaging Modality, and enables estimation with an accuracy approaching that which is attainable only when a hypothetical noiseless detector is used. PMID:25321248
Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth
2010-01-01
Background: A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods: This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results: Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions: Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
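A hedged sketch of why a background-current offset matters for a one-point calibration: subtracting an assumed zero-glucose current changes both the sensitivity estimate and the glucose readout. The currents, reference glucose value, and 4 nA offset below are illustrative, not the commercial sensor's specification.

```python
def calibrate_and_estimate(i_cal_nA, g_ref_mgdl, i_now_nA, background_nA=4.0):
    """One-point calibration with an assumed background (zero-glucose) current.
    sensitivity = (i_cal - i0) / g_ref; glucose = (i - i0) / sensitivity."""
    sensitivity = (i_cal_nA - background_nA) / g_ref_mgdl   # nA per mg/dL
    return (i_now_nA - background_nA) / sensitivity

# Calibration at 120 mg/dL reading 28 nA; later the sensor reads 10 nA
print("with 4 nA correction:", round(calibrate_and_estimate(28, 120, 10, 4.0), 1), "mg/dL")
print("assuming zero background:", round(calibrate_and_estimate(28, 120, 10, 0.0), 1), "mg/dL")
```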
NASA Astrophysics Data System (ADS)
Tavakoli, Vahid; Stoddard, Marcus F.; Amini, Amir A.
2013-03-01
Quantitative motion analysis of echocardiographic images helps clinicians with the diagnosis and therapy of patients suffering from cardiac disease. Quantitative analysis is usually based on TDI (Tissue Doppler Imaging) or speckle tracking. These methods are based on two independent techniques - the Doppler Effect and image registration, respectively. In order to increase the accuracy of the speckle tracking technique and cope with the angle dependency of TDI, herein, a combined approach dubbed TDIOF (Tissue Doppler Imaging Optical Flow) is proposed. TDIOF is formulated based on the combination of B-mode and Doppler energy terms in an optical flow framework and minimized using algebraic equations. In this paper, we report on validations with simulated, physical cardiac phantom, and in-vivo patient data. It is shown that the additional Doppler term is able to increase the accuracy of speckle tracking, the basis for several commercially available echocardiography analysis techniques.
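One plausible form of a combined energy of the kind described, written only to illustrate how a Doppler term can constrain the along-beam velocity component in an optical flow framework; the weights and notation are assumptions, not the authors' exact TDIOF formulation.

```latex
E(\mathbf{u}) \;=\;
\underbrace{\int_{\Omega} \bigl(I_x u + I_y v + I_t\bigr)^2 \, d\Omega}_{\text{B-mode (speckle) data term}}
\;+\; \lambda_D
\underbrace{\int_{\Omega} \bigl(\mathbf{u}\cdot\hat{\mathbf{r}} - v_{\mathrm{Doppler}}\bigr)^2 \, d\Omega}_{\text{TDI constraint along the beam direction } \hat{\mathbf{r}}}
\;+\; \lambda_S
\underbrace{\int_{\Omega} \bigl(\lVert\nabla u\rVert^2 + \lVert\nabla v\rVert^2\bigr) \, d\Omega}_{\text{smoothness regularization}}
```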
Trend analysis of the aerosol optical depth from fusion of MISR and MODIS retrievals over China
NASA Astrophysics Data System (ADS)
Guo, Jing; Gu, Xingfa; Yu, Tao; Cheng, Tianhai; Chen, Hao
2014-03-01
Atmospheric aerosol plays an important role in climate change through direct and indirect processes. In order to evaluate the effects of aerosols on climate, it is necessary to study their spatial and temporal distributions. Satellite aerosol remote sensing is a developing technology that may provide good temporal sampling and superior spatial coverage for studying aerosols. The Moderate Resolution Imaging Spectroradiometer (MODIS) and Multi-angle Imaging Spectroradiometer (MISR) have provided aerosol observations since 2000, with large coverage and high accuracy. However, due to the complex surface, cloud contamination, and the aerosol models used in the retrieval process, uncertainties still exist in current satellite aerosol products. Several differences are observed when comparing the MISR and MODIS AOD data with the AERONET AOD. Combining multiple sensors can reduce uncertainties and improve observational accuracy. The validation results reveal a better agreement between the fused AOD and AERONET AOD, and confirm that the fused AOD values are more accurate than those from a single sensor. We analysed trends in aerosol properties over China based on nine years (2002-2010) of fused data. For the trend analysis in Jingjintang and the Yangtze River Delta, the accuracy increased by 5% and 3%, respectively. An increasing AOD trend is evident in the Yangtze River Delta, where human activities may be the main source of the increasing AOD.
Assessing map accuracy in a remotely sensed, ecoregion-scale cover map
Edwards, T.C.; Moisen, Gretchen G.; Cutler, D.R.
1998-01-01
Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed thousands of square kilometers in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE=1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.
Botti, Lorenzo; Paliwal, Nikhil; Conti, Pierangelo; Antiga, Luca; Meng, Hui
2018-06-01
Image-based computational fluid dynamics (CFD) has shown potential to aid in the clinical management of intracranial aneurysms (IAs) but its adoption in the clinical practice has been missing, partially due to lack of accuracy assessment and sensitivity analysis. To numerically solve the flow-governing equations CFD solvers generally rely on two spatial discretization schemes: Finite Volume (FV) and Finite Element (FE). Since increasingly accurate numerical solutions are obtained by different means, accuracies and computational costs of FV and FE formulations cannot be compared directly. To this end, in this study we benchmark two representative CFD solvers in simulating flow in a patient-specific IA model: (1) ANSYS Fluent, a commercial FV-based solver and (2) VMTKLab multidGetto, a discontinuous Galerkin (dG) FE-based solver. The FV solver's accuracy is improved by increasing the spatial mesh resolution (134k, 1.1m, 8.6m and 68.5m tetrahedral element meshes). The dGFE solver accuracy is increased by increasing the degree of polynomials (first, second, third and fourth degree) on the base 134k tetrahedral element mesh. Solutions from best FV and dGFE approximations are used as baseline for error quantification. On average, velocity errors for second-best approximations are approximately 1cm/s for a [0,125]cm/s velocity magnitude field. Results show that high-order dGFE provide better accuracy per degree of freedom but worse accuracy per Jacobian non-zero entry as compared to FV. Cross-comparison of velocity errors demonstrates asymptotic convergence of both solvers to the same numerical solution. Nevertheless, the discrepancy between under-resolved velocity fields suggests that mesh independence is reached following different paths. This article is protected by copyright. All rights reserved.
Reporting Data with "Over-the-Counter" Data Analysis Supports Increases Educators' Analysis Accuracy
ERIC Educational Resources Information Center
Rankin, Jenny Grant
2013-01-01
There is extensive research on the benefits of making data-informed decisions to improve learning, but these benefits rely on the data being effectively interpreted. Despite educators' above-average intellect and education levels, there is evidence many educators routinely misinterpret student data. Data analysis problems persist even at districts…
Cohen, Wayne R; Hayes-Gill, Barrie
2014-06-01
To evaluate the performance of external electronic fetal heart rate and uterine contraction monitoring according to maternal body mass index. Secondary analysis of prospective equivalence study. Three US urban teaching hospitals. Seventy-four parturients with a normal term pregnancy. The parent study assessed performance of two methods of external fetal heart rate monitoring (abdominal fetal electrocardiogram and Doppler ultrasound) and of uterine contraction monitoring (electrohysterography and tocodynamometry) compared with internal monitoring with fetal scalp electrode and intrauterine pressure transducer. Reliability of external techniques was assessed by the success rate and positive percent agreement with internal methods. Bland-Altman analysis determined accuracy. We analyzed data from that study according to maternal body mass index. We assessed the relationship between body mass index and monitor performance with linear regression, using body mass index as the independent variable and measures of reliability and accuracy as dependent variables. There was no significant association between maternal body mass index and any measure of reliability or accuracy for abdominal fetal electrocardiogram. By contrast, the overall positive percent agreement for Doppler ultrasound declined (p = 0.042), and the root mean square error from the Bland-Altman analysis increased in the first stage (p = 0.029) with increasing body mass index. Uterine contraction recordings from electrohysterography and tocodynamometry showed no significant deterioration related to maternal body mass index. Accuracy and reliability of fetal heart rate monitoring using abdominal fetal electrocardiogram was unaffected by maternal obesity, whereas performance of ultrasound degraded directly with maternal size. Both electrohysterography and tocodynamometry were unperturbed by obesity. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.
Farinati, F; Cardin, F; Di Mario, F; Sava, G A; Piccoli, A; Costa, F; Penon, G; Naccarato, R
1987-08-01
The endoscopic diagnosis of chronic atrophic gastritis is often underestimated, and most of the procedures adopted to increase diagnostic accuracy are time consuming and complex. In this study, we evaluated the usefulness of the determination of gastric juice pH by means of litmus paper. Values obtained by this method correlate well with gastric acid secretory capacity as measured by gastric acid analysis (r = -0.64, p less than 0.001) and are not affected by the presence of bile. Gastric juice pH determination increases sensitivity and other diagnostic parameters such as performance index (Youden J test), positive predictive value, and post-test probability difference by 50%. Furthermore, the negative predictive value is very high, the probability of missing a patient with chronic atrophic gastritis with this simple method being 2% for fundic and 15% for antral atrophic change. We conclude that gastric juice pH determination, which substantially increases diagnostic accuracy and is very simple to perform, should be routinely adopted.
The effect of MLC speed and acceleration on the plan delivery accuracy of VMAT
Park, J M; Wu, H-G; Kim, J H; Carlson, J N K
2015-01-01
Objective: To determine a new metric utilizing multileaf collimator (MLC) speeds and accelerations to predict plan delivery accuracy of volumetric modulated arc therapy (VMAT). Methods: To verify VMAT delivery accuracy, gamma evaluations, analysis of mechanical parameter differences between plans and log files, and analysis of changes in dose-volumetric parameters between plans and plans reconstructed with log files were performed for 40 VMAT plans. The average proportion of leaf speeds ranging from l to h cm/s (S_l-h, with l-h = 0-0.4, 0.4-0.8, 0.8-1.2, 1.2-1.6 and 1.6-2.0), and the mean and standard deviation of MLC speeds, were calculated for each VMAT plan. The same was carried out for accelerations in cm/s2 (A_l-h, with l-h = 0-4, 4-8, 8-12, 12-16 and 16-20). The correlations of these indicators with plan delivery accuracy were analysed with Spearman's correlation coefficient (rs). Results: S_1.2-1.6 and the mean acceleration of the MLCs showed generally higher correlations with plan delivery accuracy than the other indicators. The highest rs values were observed between S_1.2-1.6 and the global 1%/2 mm passing rate (rs = -0.698, p < 0.001) and between the mean acceleration and the global 1%/2 mm passing rate (rs = -0.650, p < 0.001). As the proportion of MLC speeds above 0.4 cm/s and accelerations above 4 cm/s2 increased, the plan delivery accuracy of VMAT decreased. Conclusion: The variations in MLC speeds and accelerations showed considerable correlations with VMAT delivery accuracy. Advances in knowledge: As MLC speeds and accelerations increased, VMAT delivery accuracy was reduced. PMID:25734490
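A minimal sketch of the kind of rank-correlation check reported here, relating a per-plan MLC-speed indicator to its gamma passing rate; the numbers are illustrative placeholders, not the study's data:

```python
# Spearman rank correlation between a plan's S_1.2-1.6 indicator and its gamma passing rate.
from scipy.stats import spearmanr

s_12_16     = [0.08, 0.15, 0.22, 0.31, 0.12, 0.27, 0.19, 0.35]  # proportion of speeds in 1.2-1.6 cm/s
gamma_1_2mm = [97.8, 95.1, 92.4, 88.9, 96.3, 90.7, 94.0, 87.2]  # global 1%/2 mm passing rate (%)

rs, p = spearmanr(s_12_16, gamma_1_2mm)
print(f"Spearman rs = {rs:.3f}, p = {p:.4f}")  # negative rs: more fast-leaf travel, lower passing rate
```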
Meta-analysis of diagnostic accuracy studies in mental health
Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J
2015-01-01
Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate model or the hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold, while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042
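To make the idea concrete, the sketch below works on the logit scale with study-level sensitivities and specificities and forms a crude summary point and between-study covariance. This is only a toy approximation of the bivariate approach, not the random-effects model fitted in practice with dedicated meta-analysis routines; the 2x2 counts are invented:

```python
import numpy as np

# per-study 2x2 counts: (true positives, false negatives, true negatives, false positives)
studies = [(45, 10, 80, 20), (30, 5, 60, 25), (50, 15, 90, 10), (38, 12, 70, 18)]

logits = []
for tp, fn, tn, fp in studies:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    logits.append([np.log(sens / (1 - sens)), np.log(spec / (1 - spec))])
logits = np.array(logits)

mu = logits.mean(axis=0)                 # crude summary point on the logit scale
cov = np.cov(logits, rowvar=False)       # between-study spread (threshold effect shows as negative off-diagonal)
summary_sens, summary_spec = 1 / (1 + np.exp(-mu))
print(f"summary sensitivity ~ {summary_sens:.2f}, specificity ~ {summary_spec:.2f}")
print("logit-scale covariance:\n", cov)
```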
Pizones, Javier; Sánchez-Mariscal, Felisa; Zúñiga, Lorenzo; Álvarez, Patricia; Izquierdo, Enrique
2013-04-20
Prospective cohort study. To study magnetic resonance imaging (MRI) accuracy in diagnosing posterior ligamentous complex (PLC) damage, when applying the new dichotomic instability criteria in a prospective cohort of patients with vertebral fracture. Recent studies dispute MRI accuracy in diagnosing PLC injuries. They analyze the complex based on 3 categories (intact/indeterminate/rupture), including the indeterminate cases in the ruptured group (measurement bias) in the accuracy analysis. Moreover, fractures treated conservatively (selection bias) are not included. Both facts reduce the specificity. A recent study has proposed new criteria in which posterior instability is determined by supraspinous ligament (SSL) rupture. Prospective study of patients with acute thoracolumbar fracture, using radiography and MRI (FS-T2-w/short-tau inversion-recovery sequences). 1. The integrity (ruptured/unruptured) of each isolated component of the PLC (facet capsules, interspinous ligament, SSL, and ligamentum flavum) was assessed via MRI and surgical findings. 2. PLC integrity as a whole was assessed, adopting the new dichotomic stability criteria from previous studies. In the MR images, the PLC is considered ruptured when the SSL is found discontinued, and intact when not (this excludes the "indeterminate" category). In surgically treated fractures, PLC stability as a whole was assessed dynamically (ruptured/unruptured). In conservatively treated fractures, PLC stability was assessed according to the change in vertebral kyphosis measured with the local kyphotic angle at 2-year follow-up (ruptured if the difference is > 5°, unruptured if the difference is < 5°). 3. Comparative analysis among the findings provided MRI accuracy in diagnosing PLC damage. Fifty-eight vertebral fractures were studied (38 surgical, 20 conservative); 50% of the patients were male, and the average age was 40.4 years. MRI sensitivity for injury diagnosis of each isolated PLC component varied between 92.3% (interspinous ligament) and 100% (ligamentum flavum). Specificity varied between 52% (facet capsules) and 100% (SSL). Sensitivity and specificity for PLC integrity as a whole were 91% and 100%, respectively. Adopting the new stability criteria, MRI accuracy in PLC injury diagnosis increases. Specificity is increased (fewer false positives) both in the analysis of isolated components and in the PLC as a whole.
Application of mixsep software package: Performance verification of male-mixed DNA analysis
HU, NA; CONG, BIN; GAO, TAO; CHEN, YU; SHEN, JUNYI; LI, SHUJIN; MA, CHUNLING
2015-01-01
An experimental model of male-mixed DNA (n=297) was constructed according to the mixed DNA construction principle. This comprised the use of the Applied Biosystems (ABI) 7500 quantitative polymerase chain reaction system, with scientific validation of mixture proportion (Mx; root-mean-square error ≤0.02). Statistical analysis was performed on locus separation accuracy using mixsep, a DNA mixture separation R-package, and the analytical performance of mixsep was assessed by examining the data distribution pattern of different mixed gradients, short tandem repeat (STR) loci and mixed DNA types. The results showed that locus separation accuracy had a negative linear correlation with the mixed gradient (R2=−0.7121). With increasing mixed gradient imbalance, locus separation accuracy first increased and then decreased, with the highest value detected at a gradient of 1:3 (≥90%). The mixed gradient, which is the theoretical Mx, was one of the primary factors that influenced the success of mixed DNA analysis. Among the 16 STR loci detected by Identifiler®, the separation accuracy was relatively high (>88%) for loci D5S818, D8S1179 and FGA, whereas the median separation accuracy value was lowest for the D7S820 locus. STR loci with relatively large numbers of allelic drop-out (ADO; >15) were all located in the yellow and red channels, including loci D18S51, D19S433, FGA, TPOX and vWA. These five loci featured low allele peak heights, which was consistent with the low sensitivity of the ABI 3130xl Genetic Analyzer to yellow and red fluorescence. The locus separation accuracy of the mixsep package was substantially different with and without the inclusion of ADO loci; inclusion of ADO significantly reduced the analytical performance of the mixsep package, which was consistent with the lack of an ADO functional module in this software. The present study demonstrated that the mixsep software had a number of advantages and was recommended for analysis of mixed DNA. This software was easy to operate and produced understandable results with a degree of controllability. PMID:25936428
Mapping of land cover in northern California with simulated hyperspectral satellite imagery
NASA Astrophysics Data System (ADS)
Clark, Matthew L.; Kilham, Nina E.
2016-09-01
Land-cover maps are important science products needed for natural resource and ecosystem service management, biodiversity conservation planning, and assessing human-induced and natural drivers of land change. Analysis of hyperspectral, or imaging spectrometer, imagery has shown an impressive capacity to map a wide range of natural and anthropogenic land cover. Applications have been mostly with single-date imagery from relatively small spatial extents. Future hyperspectral satellites will provide imagery at greater spatial and temporal scales, and there is a need to assess techniques for mapping land cover with these data. Here we used simulated multi-temporal HyspIRI satellite imagery over a 30,000 km2 area in the San Francisco Bay Area, California to assess its capabilities for mapping classes defined by the international Land Cover Classification System (LCCS). We employed a mapping methodology and analysis framework that is applicable to regional and global scales. We used the Random Forests classifier with three sets of predictor variables (reflectance, MNF, hyperspectral metrics), two temporal resolutions (summer, spring-summer-fall), two sample scales (pixel, polygon) and two levels of classification complexity (12, 20 classes). Hyperspectral metrics provided a 16.4-21.8% and 3.1-6.7% increase in overall accuracy relative to MNF and reflectance bands, respectively, depending on pixel or polygon scales of analysis. Multi-temporal metrics improved overall accuracy by 0.9-3.1% over summer metrics, yet increases were only significant at the pixel scale of analysis. Overall accuracy at pixel scales was 72.2% (Kappa 0.70) with three seasons of metrics. Anthropogenic and homogenous natural vegetation classes had relatively high confidence and producer and user accuracies were over 70%; in comparison, woodland and forest classes had considerable confusion. We next focused on plant functional types with relatively pure spectra by removing open-canopy shrublands, woodlands and mixed forests from the classification. This 12-class map had significantly improved accuracy of 85.1% (Kappa 0.83) and most classes had over 70% producer and user accuracies. Finally, we summarized important metrics from the multi-temporal Random Forests to infer the underlying chemical and structural properties that best discriminated our land-cover classes across seasons.
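As a sketch of the classification step, the snippet below trains a Random Forests classifier and reports overall accuracy and Kappa, the figures of merit quoted above. Synthetic data stand in for the multi-temporal hyperspectral metrics and LCCS labels, and the study's actual sampling design (pixel vs. polygon scales) is not reproduced:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for per-pixel metrics and 12 land-cover classes
X, y = make_classification(n_samples=3000, n_features=40, n_informative=15,
                           n_classes=12, n_clusters_per_class=1, random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0, n_jobs=-1)
rf.fit(X_tr, y_tr)

pred = rf.predict(X_te)
print(f"overall accuracy = {accuracy_score(y_te, pred):.3f}")
print(f"Kappa            = {cohen_kappa_score(y_te, pred):.3f}")
# rf.feature_importances_ ranks the metrics that best discriminate the classes across seasons
```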
NASA Technical Reports Server (NTRS)
Orme, John S.; Schkolnik, Gerard S.
1995-01-01
Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases of FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy to these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by between 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece for a comprehensive understanding of PSC's capabilities and limitations.
Avila, Jacob; Smith, Ben; Mead, Therese; Jurma, Duane; Dawson, Matthew; Mallin, Michael; Dugan, Adam
2018-04-24
It is unknown whether the addition of M-mode to B-mode ultrasound (US) has any effect on the overall accuracy of interpretation of lung sliding in the evaluation of a pneumothorax by emergency physicians. This study aimed to determine what effect, if any, this addition has on US interpretation by emergency physicians of varying training levels. One hundred forty emergency physicians were randomized via online software to receive a quiz with B-mode clips alone or B-mode with corresponding M-mode images and asked to identify the presence or absence of lung sliding. The sensitivity, specificity, and accuracy of the diagnosis of lung sliding with and without M-mode US were compared. Overall, the sensitivities, specificities, and accuracies of B-mode + M-mode US versus B-mode US alone were 93.1% and 93.2% (P = .8), 96.0% and 89.8% (P < .0001), and 91.5% and 94.5% (P = .0091), respectively. A subgroup analysis showed that in those providers with fewer than 250 total US scans done previously, M-mode US increased accuracy from 88.2% (95% confidence interval, 86.2%-90.2%) to 94.4% (92.8%-96.0%; P = .001) and increased the specificity from 87.0% (84.5%-89.5%) to 97.2% (95.4%-99.0%; P < .0001) compared with B-mode US alone. There was no statistically significant difference observed in the sensitivity, specificity, and accuracy of B-mode + M-mode US compared with B-mode US alone in those with more than 250 scans. The addition of M-mode images to B-mode clips aids in the accurate diagnosis of lung sliding by emergency physicians. The subgroup analysis showed that the benefit of M-mode US disappears after emergency physicians have performed more than 250 US examinations. © 2018 by the American Institute of Ultrasound in Medicine.
Digital image analysis in pathology: benefits and obligation.
Laurinavicius, Arvydas; Laurinaviciene, Aida; Dasevicius, Darius; Elie, Nicolas; Plancoulaine, Benoît; Bor, Catherine; Herlin, Paulette
2012-01-01
Pathology has recently entered the era of personalized medicine. This brings new expectations for the accuracy and precision of tissue-based diagnosis, in particular, when quantification of histologic features and biomarker expression is required. While for many years traditional pathologic diagnosis has been regarded as ground truth, this concept is no longer sufficient in contemporary tissue-based biomarker research and clinical use. Another major change in pathology is brought by the advancement of virtual microscopy technology enabling digitization of microscopy slides and presenting new opportunities for digital image analysis. Computerized vision provides an immediate benefit of increased capacity (automation) and precision (reproducibility), but not necessarily the accuracy of the analysis. To achieve the benefit of accuracy, pathologists will have to assume an obligation of validation and quality assurance of the image analysis algorithms. Reference values are needed to measure and control the accuracy. Although pathologists' consensus values are commonly used to validate these tools, we argue that the ground truth can be best achieved by stereology methods, estimating the same variable as an algorithm is intended to do. Proper adoption of the new technology will require a new quantitative mentality in pathology. In order to see a complete and sharp picture of a disease, pathologists will need to learn to use both their analogue and digital eyes.
Selecting Single Model in Combination Forecasting Based on Cointegration Test and Encompassing Test
Jiang, Chuanjin; Zhang, Jing; Song, Fugen
2014-01-01
Combination forecasting takes the characteristics of each single forecasting method into consideration and combines them to form a composite, which increases forecasting accuracy. Existing research on combination forecasting selects single models arbitrarily, neglecting the internal characteristics of the forecasting object. After discussing the role of the cointegration test and the encompassing test in the selection of single models, supplemented by empirical analysis, the paper gives guidance for single-model selection: no more than five suitable single models should be selected from the many alternatives for a given forecasting target, which increases accuracy and stability. PMID:24892061
Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer
NASA Astrophysics Data System (ADS)
Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad
2017-04-01
Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges including feature redundancy, unbalanced data, and small sample sizes have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were optimum predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, Synthetic Minority Over-sampling technique was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors in affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
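A minimal sketch of the reported optimal configuration (PCA for feature reduction, SMOTE for the unbalanced endpoint, Random Forest for prediction), assembled with scikit-learn and imbalanced-learn on synthetic stand-in data; the study's actual radiomic features and endpoints are not reproduced:

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# synthetic stand-in: 112 "patients", many correlated features, imbalanced 0/1 endpoint
X, y = make_classification(n_samples=112, n_features=100, n_informative=10,
                           weights=[0.75], random_state=0)

pipe = Pipeline([
    ("pca", PCA(n_components=10)),                       # address feature redundancy
    ("smote", SMOTE(random_state=0)),                    # oversample minority class in training folds only
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
])
auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```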
NASA Astrophysics Data System (ADS)
Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.
2012-01-01
The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimensions with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed, which are more competitive in terms of both classification accuracy and computational costs/times with classical LDA. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. From a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved 95.2% and 93.2% classification accuracy, respectively, using a linear discriminant classifier.
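A minimal sketch of the baseline pipeline being compared here: classical LDA as the feature-projection step (to at most n_classes - 1 dimensions) followed by a linear classifier. Logistic regression stands in for the paper's linear discriminant classifier, and synthetic data stand in for the EMG feature sets:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# synthetic stand-in: feature windows from several channels, 6 movement classes
X, y = make_classification(n_samples=600, n_features=28, n_informative=10,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

# LDA used as a projection (transform) step, then a linear classifier in the reduced space
clf = make_pipeline(LinearDiscriminantAnalysis(), LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, X, y, cv=5)
print(f"classification accuracy = {acc.mean():.3f}")
```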
Sampling factors influencing accuracy of sperm kinematic analysis.
Owen, D H; Katz, D F
1993-01-01
Sampling conditions that influence the accuracy of experimental measurement of sperm head kinematics were studied by computer simulation methods. Several archetypal sperm trajectories were studied. First, mathematical models of typical flagellar beats were input to hydrodynamic equations of sperm motion. The instantaneous swimming velocities of such sperm were computed over sequences of flagellar beat cycles, from which the resulting trajectories were determined. In a second, idealized approach, direct mathematical models of trajectories were utilized, based upon similarities to the previous hydrodynamic constructs. In general, it was found that analyses of sampling factors produced similar results for the hydrodynamic and idealized trajectories. A number of experimental sampling factors were studied, including the number of sperm head positions measured per flagellar beat, and the time interval over which these measurements are taken. It was found that when one flagellar beat is sampled, values of amplitude of lateral head displacement (ALH) and linearity (LIN) approached their actual values when five or more sample points per beat were taken. Mean angular displacement (MAD) values, however, remained sensitive to sampling rate even when large sampling rates were used. Values of MAD were also much more sensitive to the initial starting point of the sampling procedure than were ALH or LIN. On the basis of these analyses of measurement accuracy for individual sperm, simulations were then performed of cumulative effects when studying entire populations of motile cells. It was found that substantial (double digit) errors occurred in the mean values of curvilinear velocity (VCL), LIN, and MAD under the conditions of 30 video frames per second and 0.5 seconds of analysis time. Increasing the analysis interval to 1 second did not appreciably improve the results. However, increasing the analysis rate to 60 frames per second significantly reduced the errors. These findings thus suggest that computer-aided sperm analysis (CASA) application at 60 frames per second will significantly improve the accuracy of kinematic analysis in most applications to human and other mammalian sperm.
An evaluation of superminicomputers for thermal analysis
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Vidal, J. B.; Jones, G. K.
1982-01-01
The use of superminicomputers for solving a series of increasingly complex thermal analysis problems is investigated. The approach involved (1) installation and verification of the SPAR thermal analyzer software on superminicomputers at Langley Research Center and Goddard Space Flight Center, (2) solution of six increasingly complex thermal problems on this equipment, and (3) comparison of solution (accuracy, CPU time, turnaround time, and cost) with solutions on large mainframe computers.
Digital image analysis: improving accuracy and reproducibility of radiographic measurement.
Bould, M; Barnard, S; Learmonth, I D; Cunningham, J L; Hardy, J R
1999-07-01
To assess the accuracy and reproducibility of a digital image analyser and the human eye, in measuring radiographic dimensions. We experimentally compared radiographic measurement using either an image analyser system or the human eye with digital caliper. The assessment of total hip arthroplasty wear from radiographs relies on both the accuracy of radiographic images and the accuracy of radiographic measurement. Radiographs were taken of a slip gauge (30+/-0.00036 mm) and slip gauge with a femoral stem. The projected dimensions of the radiographic images were calculated by trigonometry. The radiographic dimensions were then measured by blinded observers using both techniques. For a single radiograph, the human eye was accurate to 0.26 mm and reproducible to +/-0.1 mm. In comparison the digital image analyser system was accurate to 0.01 mm with a reproducibility of +/-0.08 mm. In an arthroplasty model, where the dimensions of an object were corrected for magnification by the known dimensions of a femoral head, the human eye was accurate to 0.19 mm, whereas the image analyser system was accurate to 0.04 mm. The digital image analysis system is up to 20 times more accurate than the human eye, and in an arthroplasty model the accuracy of measurement increases four-fold. We believe such image analysis may allow more accurate and reproducible measurement of wear from standard follow-up radiographs.
BBMerge – Accurate paired shotgun read merging via overlap
Bushnell, Brian; Rood, Jonathan; Singer, Esther
2017-10-26
Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools become ever more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
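The snippet below is a toy illustration of the general overlap-merging idea (slide one read's reverse complement over the other and keep the lowest-mismatch overlap). It is not BBMerge's actual algorithm, which also uses base qualities and, optionally, k-mer based gap assembly:

```python
# Toy overlap-based merge of a read pair; mismatch tolerance and minimum overlap are arbitrary.
def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def merge_pair(r1, r2, min_overlap=12, max_mismatch_frac=0.1):
    r2rc = revcomp(r2)
    for ov in range(min(len(r1), len(r2rc)), min_overlap - 1, -1):
        a, b = r1[-ov:], r2rc[:ov]
        mismatches = sum(x != y for x, y in zip(a, b))
        if mismatches <= max_mismatch_frac * ov:
            return r1 + r2rc[ov:]        # join reads across the accepted overlap
    return None                          # no confident overlap found

print(merge_pair("ACGTACGTACGTAGGCTAGCTA", revcomp("ACGTAGGCTAGCTAGGATCCAA")))
```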
Accuracy Analysis of a Dam Model from Drone Surveys.
Ridolfi, Elena; Buffi, Giulia; Venturi, Sara; Manciola, Piergiorgio
2017-08-03
This paper investigates the accuracy of models obtained by drone surveys. To this end, this work analyzes how the placement of ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of a double arch masonry dam upstream face are acquired from drone survey and used to build the 3D model of the dam for vulnerability analysis purposes. However, there still remained the issue of understanding the real impact of a correct GCPs location choice to properly georeference the images and thus, the model. To this end, a high number of GCPs configurations were investigated, building a series of dense point clouds. The accuracy of these resulting dense clouds was estimated comparing the coordinates of check points extracted from the model and their true coordinates measured via traditional topography. The paper aims at providing information about the optimal choice of GCPs placement not only for dams but also for all surveys of high-rise structures. The knowledge a priori of the effect of the GCPs number and location on the model accuracy can increase survey reliability and accuracy and speed up the survey set-up operations.
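A minimal sketch of the accuracy check described here: coordinates of check points read off the georeferenced model are compared with their surveyed coordinates and summarised as per-axis and 3D RMSE. The coordinate values are invented placeholders:

```python
import numpy as np

surveyed = np.array([[312.41, 845.10, 101.32],    # X, Y, Z from traditional topography (m)
                     [318.77, 851.94, 108.45],
                     [325.02, 858.21, 114.90]])
from_model = np.array([[312.44, 845.06, 101.40],   # same check points extracted from the 3D model
                       [318.71, 851.99, 108.37],
                       [325.08, 858.16, 114.99]])

residuals = from_model - surveyed
rmse_xyz = np.sqrt((residuals**2).mean(axis=0))                      # per-axis RMSE
rmse_3d  = np.sqrt((np.linalg.norm(residuals, axis=1)**2).mean())    # 3D RMSE
print("RMSE X/Y/Z [m]:", rmse_xyz, " 3D RMSE [m]:", rmse_3d)
```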
Attentional Mechanisms in Simple Visual Detection: A Speed-Accuracy Trade-Off Analysis
ERIC Educational Resources Information Center
Liu, Charles C.; Wolfgang, Bradley J.; Smith, Philip L.
2009-01-01
Recent spatial cuing studies have shown that detection sensitivity can be increased by the allocation of attention. This increase has been attributed to one of two mechanisms: signal enhancement or uncertainty reduction. Signal enhancement is an increase in the signal-to-noise ratio at the cued location; uncertainty reduction is a reduction in the…
On aerodynamic wake analysis and its relation to total aerodynamic drag in a wind tunnel environment
NASA Astrophysics Data System (ADS)
Guterres, Rui M.
The present work was developed with the goal of advancing the state of the art in the application of three-dimensional wake data analysis to the quantification of aerodynamic drag on a body in a low-speed wind tunnel environment. An analysis of the existing tools, their strengths, and their limitations is presented. Improvements to the existing analysis approaches were made. Software tools were developed to integrate the analysis into a practical tool. A comprehensive derivation of the equations needed for drag computations based on three-dimensional separated wake data is developed. A set of complete steps ranging from the basic mathematical concept to the applicable engineering equations is presented. An extensive experimental study was conducted. Three representative body types were studied in varying ground effect conditions. A detailed qualitative wake analysis using wake imaging and two- and three-dimensional flow visualization was performed. Several significant features of the flow were identified and their relation to the total aerodynamic drag established. A comprehensive wake study of this type is shown to be in itself a powerful tool for the analysis of the wake aerodynamics and its relation to body drag. Quantitative wake analysis techniques were developed. Significant post-processing and data conditioning tools and precision analysis were developed. The quality of the data is shown to be in direct correlation with the accuracy of the computed aerodynamic drag. Steps are taken to identify the sources of uncertainty. These are quantified when possible, and the accuracy of the computed results is seen to improve significantly. When post-processing alone does not resolve issues related to precision and accuracy, solutions are proposed. The improved quantitative wake analysis is applied to the wake data obtained. Guidelines are established that will lead to more successful implementation of these tools in future research programs. Close attention is paid to implementation issues that are of crucial importance for the accuracy of the results and that are not detailed in the literature. The impact of ground effect on the flows at hand is qualitatively and quantitatively studied. Its impact on the accuracy of the computations, as well as the incompatibility of the wall drag with the theoretical model followed, is discussed. The newly developed quantitative analysis provides significantly increased accuracy. The aerodynamic drag coefficient is computed within one percent of the balance-measured value for the best cases.
Self-instruction: An analysis of the differential effects of instruction and reinforcement
Roberts, Richard N.; Nelson, Rosemery O.; Olson, Terry W.
1987-01-01
This study investigated the impact of training 9 first- and second-grade children to use a full self-instructional regimen, and then differentially reinforced the use of self-instruction only, accuracy only, or both self-instruction and accuracy. Three comparison children received no training in self-instruction and were reinforced for accuracy only. Children improved dramatically in academic accuracy subsequent to self-instructional training, independent of the use of self-instruction and of the specific behavior consequated. Children who were reinforced for using self-instruction did use self-instruction, and those who were not, did not. Comparison group children showed little improvement until training in problem-solving strategies was given after 9 days of reinforcement for accuracy. Self-instructional training is discussed as one type of event that increases the likelihood of accurate performance. Its effectiveness may be explained in terms of a teaching strategy rather than in terms of modifying cognitive processes. PMID:16795700
The ERTS-1 investigation (ER-600). Volume 5: ERTS-1 urban land use analysis
NASA Technical Reports Server (NTRS)
Erb, R. B.
1974-01-01
The Urban Land Use Team conducted a year's investigation of ERTS-1 MSS data to determine the number of Land Use categories in the Houston, Texas, area. They discovered that unusually low classification accuracies occurred when a spectrally complex urban scene was classified with extensive rural areas containing spectrally homogeneous features. Separate computer processing of only the data in the urbanized area increased the classification accuracies of certain urban land use categories. Even so, accuracies for the urban landscape were in the 40-70 percent range, compared with 70-90 percent for the land use categories containing more homogeneous features (agriculture, forest, water, etc.) in the nonurban areas.
ERIC Educational Resources Information Center
Velasco, Kelly; Zizak, Amanda
This report describes a program for improving word analysis skills in order to increase sight reading, reading accuracy, and fluency. The targeted population consisted of second and third graders in a suburban area close to a large metropolitan city in a Midwestern state. The problems of low word analysis skills were documented through Qualitative…
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
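A minimal sketch of an "accuracy versus number of dimensions" comparison for one of the techniques (PCA), using logistic regression as a stand-in for the paper's unspecified simple classifier and synthetic data in place of object-code features:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# synthetic stand-in for code-region features labelled crypto / non-crypto
X, y = make_classification(n_samples=500, n_features=60, n_informative=12, random_state=0)

for k in (2, 5, 10, 20, 40):
    clf = make_pipeline(PCA(n_components=k), LogisticRegression(max_iter=1000))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{k:>3} components: accuracy = {acc:.3f}")
```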
Early-Onset Neonatal Sepsis: Still Room for Improvement in Procalcitonin Diagnostic Accuracy Studies
Chiesa, Claudio; Pacifico, Lucia; Osborn, John F.; Bonci, Enea; Hofer, Nora; Resch, Bernhard
2015-01-01
To perform a systematic review assessing accuracy and completeness of diagnostic studies of procalcitonin (PCT) for early-onset neonatal sepsis (EONS) using the Standards for Reporting of Diagnostic Accuracy (STARD) initiative. EONS, diagnosed during the first 3 days of life, remains a common and serious problem. Increased PCT is a potentially useful diagnostic marker of EONS, but reports in the literature are contradictory. There are several possible explanations for the divergent results, including the quality of studies reporting the clinical usefulness of PCT in ruling in or ruling out EONS. We systematically reviewed PubMed, Scopus, and the Cochrane Library databases up to October 1, 2014. Studies were eligible for inclusion in our review if they provided measures of PCT accuracy for diagnosing EONS. A data extraction form based on the STARD checklist and adapted for neonates with EONS was used to appraise the quality of the reporting of included studies. We found 18 articles (1998–2014) fulfilling our eligibility criteria, which were included in the final analysis. Overall, the results of our analysis showed that the quality of studies reporting diagnostic accuracy of PCT for EONS was suboptimal, leaving ample room for improvement. Information on key elements of design, analysis, and interpretation of test accuracy was frequently missing. Authors should be aware of the STARD criteria before starting a study in this field. We welcome stricter adherence to this guideline. Well-reported studies with appropriate designs will provide more reliable information to guide decisions on the use and interpretations of PCT test results in the management of neonates with EONS. PMID:26222858
An automatic step adjustment method for average power analysis technique used in fiber amplifiers
NASA Astrophysics Data System (ADS)
Liu, Xue-Ming
2006-04-01
An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two distinct merits, higher-order accuracy and an ASA mechanism, so it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared with the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude at the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
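The sketch below illustrates the general idea of automatic step adjustment (step doubling with an error estimate) on a toy propagation equation dP/dz = g(z)P. It is a generic adaptive-step scheme with invented parameters, not the authors' APA update equations for erbium-doped fiber amplifiers:

```python
import math

def g(z):                       # toy, z-dependent gain coefficient (1/m)
    return 0.5 * math.exp(-z / 10.0)

def step(P, z, h):              # one explicit midpoint (RK2) step
    k1 = g(z) * P
    k2 = g(z + 0.5 * h) * (P + 0.5 * h * k1)
    return P + h * k2

def integrate(P0, z_end, h=1.0, tol=1e-6):
    P, z = P0, 0.0
    while z < z_end:
        h = min(h, z_end - z)
        big   = step(P, z, h)                               # one full step
        small = step(step(P, z, h / 2), z + h / 2, h / 2)   # two half steps
        if abs(small - big) < tol:     # accept and try a larger step next time
            P, z, h = small, z + h, h * 1.5
        else:                          # reject and shrink the step
            h *= 0.5
    return P

print(integrate(P0=1e-3, z_end=20.0))   # power after 20 m of toy fibre (arbitrary units)
```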
A double sealing technique for increasing the precision of headspace-gas chromatographic analysis.
Xie, Wei-Qi; Yu, Kong-Xian; Gong, Yi-Xian
2018-01-19
This paper investigates a new double sealing technique for increasing the precision of the headspace gas chromatographic method. The air leakage problem caused by the high pressure in the headspace vial during the headspace sampling process has a great impact on measurement precision in conventional headspace analysis (i.e., the single-sealing technique). The results (using ethanol solution as the model sample) show that the present technique is effective in minimizing this problem. The double sealing technique has excellent measurement precision (RSD < 0.15%) and accuracy (recovery = 99.1%-100.6%) for ethanol quantification. The detection precision of the present method was 10-20 times higher than that of earlier HS-GC work using the conventional single-sealing technique. The present double sealing technique may open up a new avenue, and also serve as a general strategy, for improving the performance (i.e., accuracy and precision) of headspace analysis of various volatile compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
The influence of delaying judgments of learning on metacognitive accuracy: a meta-analytic review.
Rhodes, Matthew G; Tauber, Sarah K
2011-01-01
Many studies have examined the accuracy of predictions of future memory performance solicited through judgments of learning (JOLs). Among the most robust findings in this literature is that delaying predictions serves to substantially increase the relative accuracy of JOLs compared with soliciting JOLs immediately after study, a finding termed the delayed JOL effect. The meta-analyses reported in the current study examined the predominant theoretical accounts as well as potential moderators of the delayed JOL effect. The first meta-analysis examined the relative accuracy of delayed compared with immediate JOLs across 4,554 participants (112 effect sizes) through gamma correlations between JOLs and memory accuracy. Those data showed that delaying JOLs leads to robust benefits to relative accuracy (g = 0.93). The second meta-analysis examined memory performance for delayed compared with immediate JOLs across 3,807 participants (98 effect sizes). Those data showed that delayed JOLs result in a modest but reliable benefit for memory performance relative to immediate JOLs (g = 0.08). Findings from these meta-analyses are well accommodated by theories suggesting that delayed JOL accuracy reflects access to more diagnostic information from long-term memory rather than being a by-product of a retrieval opportunity. However, these data also suggest that theories proposing that the delayed JOL effect results from a memorial benefit or the match between the cues available for JOLs and those available at test may also provide viable explanatory mechanisms necessary for a comprehensive account.
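For reference, relative accuracy in this literature is typically indexed by the Goodman-Kruskal gamma correlation between item-by-item JOLs and subsequent recall; the sketch below computes it for an invented set of ratings:

```python
from itertools import combinations

def gamma(jols, recalled):
    """Goodman-Kruskal gamma between JOL ratings and recall outcomes (ties excluded)."""
    concordant = discordant = 0
    for (j1, r1), (j2, r2) in combinations(list(zip(jols, recalled)), 2):
        if j1 == j2 or r1 == r2:
            continue                         # tied pairs contribute to neither count
        if (j1 - j2) * (r1 - r2) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

jols     = [90, 40, 70, 20, 60, 80, 30, 50]   # predicted recall (0-100) per studied item
recalled = [ 1,  0,  1,  0,  0,  1,  0,  1]   # later recall outcome per item

print(f"gamma = {gamma(jols, recalled):+.2f}")  # values near +1 indicate high relative accuracy
```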
Technical Highlight: NREL Improves Building Energy Simulation Programs Through Diagnostic Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polly, B.
2012-01-09
This technical highlight describes NREL research to develop Building Energy Simulation Test for Existing Homes (BESTEST-EX) to increase the quality and accuracy of energy analysis tools for the building retrofit market.
Influence of sex and ethnic tooth-size differences on mixed-dentition space analysis
Altherr, Edward R.; Koroluk, Lorne D.; Phillips, Ceib
2013-01-01
Introduction Most mixed-dentition space analyses were developed by using subjects of northwestern European descent and unspecified sex. The purpose of this study was to determine the predictive accuracy of the Tanaka-Johnston analysis in white and black subjects in North Carolina. Methods A total of 120 subjects (30 males and 30 females in each ethnic group) were recruited from clinics at the University of North Carolina School of Dentistry. Ethnicity was verified to 2 previous generations. All subjects were less than 21 years of age and had a full complement of permanent teeth. Digital calipers were used to measure the mesiodistal widths of all teeth on study models fabricated from alginate impressions. The predicted widths of the canines and the premolars in both arches were compared with the actual measured widths. Results In the maxillary arch, there was a significant interaction of ethnicity and sex on the predictive accuracy of the Tanaka-Johnston analysis (P = .03, factorial ANOVA). The predictive accuracy was significantly overestimated in the white female group (P <.001, least square means). In the mandibular arch, there was no significant interaction between ethnicity and sex (P = .49). Conclusions The Tanaka-Johnston analysis significantly overestimated in females (P <.0001) and underestimated in blacks (P <.0001) (factorial ANOVA). Regression equations were developed to increase the predictive accuracy in both arches. (Am J Orthod Dentofacial Orthop 2007;132:332-9) PMID:17826601
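A small worked example of the prediction being evaluated, using the commonly cited Tanaka-Johnston rule (one-quadrant canine-plus-premolar width estimated as half the summed mesiodistal widths of the four mandibular incisors, plus 10.5 mm for the mandibular arch or 11.0 mm for the maxillary arch); the incisor widths are illustrative, not study data:

```python
# Tanaka-Johnston style prediction for one quadrant (illustrative widths in mm).
mandibular_incisor_widths = [5.3, 5.4, 6.0, 6.1]   # four mandibular incisors, digital-caliper readings
half_sum = sum(mandibular_incisor_widths) / 2.0

predicted_mandibular = half_sum + 10.5   # canine + two premolars, one mandibular quadrant
predicted_maxillary  = half_sum + 11.0   # same segment, one maxillary quadrant
print(f"predicted mandibular segment: {predicted_mandibular:.1f} mm")
print(f"predicted maxillary segment:  {predicted_maxillary:.1f} mm")
```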
Barbieri, Christopher E; Cha, Eugene K; Chromecki, Thomas F; Dunning, Allison; Lotan, Yair; Svatek, Robert S; Scherr, Douglas S; Karakiewicz, Pierre I; Sun, Maxine; Mazumdar, Madhu; Shariat, Shahrokh F
2012-03-01
• To employ decision curve analysis to determine the impact of nuclear matrix protein 22 (NMP22) on clinical decision making in the detection of bladder cancer using data from a prospective trial. • The study included 1303 patients at risk for bladder cancer who underwent cystoscopy, urine cytology and measurement of urinary NMP22 levels. • We constructed several prediction models to estimate risk of bladder cancer. The base model was generated using patient characteristics (age, gender, race, smoking and haematuria); cytology and NMP22 were added to the base model to determine effects on predictive accuracy. • Clinical net benefit was calculated by summing the benefits and subtracting the harms and weighting these by the threshold probability at which a patient or clinician would opt for cystoscopy. • In all, 72 patients were found to have bladder cancer (5.5%). In univariate analyses, NMP22 was the strongest predictor of bladder cancer presence (predictive accuracy 71.3%), followed by age (67.5%) and cytology (64.3%). • In multivariable prediction models, NMP22 improved the predictive accuracy of the base model by 8.2% (area under the curve 70.2-78.4%) and of the base model plus cytology by 4.2% (area under the curve 75.9-80.1%). • Decision curve analysis revealed that adding NMP22 to other models increased clinical benefit, particularly at higher threshold probabilities. • NMP22 is a strong, independent predictor of bladder cancer. • Addition of NMP22 improves the accuracy of standard predictors by a statistically and clinically significant margin. • Decision curve analysis suggests that integration of NMP22 into clinical decision making helps avoid unnecessary cystoscopies, with minimal increased risk of missing a cancer. © 2011 THE AUTHORS. BJU INTERNATIONAL © 2011 BJU INTERNATIONAL.
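A minimal sketch of the net-benefit computation that underlies decision curve analysis (net benefit = TP/n - FP/n * pt/(1 - pt) at a threshold probability pt), shown on invented predicted risks; a real analysis would trace this across thresholds for the models with and without NMP22:

```python
import numpy as np

risk   = np.array([0.02, 0.10, 0.35, 0.05, 0.60, 0.08, 0.25, 0.01])  # model-predicted risk of bladder cancer
cancer = np.array([0,    0,    1,    0,    1,    0,    0,    0])      # cystoscopy-confirmed outcome

n = len(cancer)
for pt in (0.05, 0.10, 0.20):
    treat = risk >= pt                         # patients referred to cystoscopy at this threshold
    tp = np.sum(treat & (cancer == 1))
    fp = np.sum(treat & (cancer == 0))
    net_benefit = tp / n - fp / n * pt / (1 - pt)
    print(f"pt = {pt:.2f}: net benefit = {net_benefit:+.3f}")
```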
Areeckal, A S; Jayasheelan, N; Kamath, J; Zawadynski, S; Kocher, M; David S, S
2018-03-01
We propose an automated low cost tool for early diagnosis of onset of osteoporosis using cortical radiogrammetry and cancellous texture analysis from hand and wrist radiographs. The trained classifier model gives a good performance accuracy in classifying between healthy and low bone mass subjects. We propose a low cost automated diagnostic tool for early diagnosis of reduction in bone mass using cortical radiogrammetry and cancellous texture analysis of hand and wrist radiographs. Reduction in bone mass could lead to osteoporosis, a disease observed to be increasingly occurring at a younger age in recent times. Dual X-ray absorptiometry (DXA), currently used in clinical practice, is expensive and available only in urban areas in India. Therefore, there is a need to develop a low cost diagnostic tool in order to facilitate large-scale screening of people for early diagnosis of osteoporosis at primary health centers. Cortical radiogrammetry from third metacarpal bone shaft and cancellous texture analysis from distal radius are used to detect low bone mass. Cortical bone indices and cancellous features using Gray Level Run Length Matrices and Laws' masks are extracted. A neural network classifier is trained using these features to classify healthy subjects and subjects having low bone mass. In our pilot study, the proposed segmentation method shows 89.9 and 93.5% accuracy in detecting third metacarpal bone shaft and distal radius ROI, respectively. The trained classifier shows training accuracy of 94.3% and test accuracy of 88.5%. An automated diagnostic technique for early diagnosis of onset of osteoporosis is developed using cortical radiogrammetric measurements and cancellous texture analysis of hand and wrist radiographs. The work shows that a combination of cortical and cancellous features improves the diagnostic ability and is a promising low cost tool for early diagnosis of increased risk of osteoporosis.
PROACT user's guide: how to use the pallet recovery opportunity analysis computer tool
E. Bradley Hager; A.L. Hammett; Philip A. Araman
2003-01-01
Pallet recovery projects are environmentally responsible and offer promising business opportunities. The Pallet Recovery Opportunity Analysis Computer Tool (PROACT) assesses the operational and financial feasibility of potential pallet recovery projects. The use of project specific information supplied by the user increases the accuracy and the validity of the...
Technique for ranking potential predictor layers for use in remote sensing analysis
Andrew Lister; Mike Hoppus; Rachel Riemann
2004-01-01
Spatial modeling using GIS-based predictor layers often requires that extraneous predictors be culled before conducting analysis. In some cases, using extraneous predictor layers might improve model accuracy but at the expense of increasing complexity and interpretability. In other cases, using extraneous layers can dilute the relationship between predictors and target...
Adjusting for partial verification or workup bias in meta-analyses of diagnostic accuracy studies.
de Groot, Joris A H; Dendukuri, Nandini; Janssen, Kristel J M; Reitsma, Johannes B; Brophy, James; Joseph, Lawrence; Bossuyt, Patrick M M; Moons, Karel G M
2012-04-15
A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.
Laboratory Spectrometer for Wear Metal Analysis of Engine Lubricants.
1986-04-01
analysis, the acid digestion technique for sample pretreatment is the best approach available to date because of its relatively large sample size (1000...microliters or more). However, this technique has two major shortcomings limiting its application: (1) it requires the use of hydrofluoric acid (a...accuracy. Sample preparation including filtration or acid digestion may increase analysis times by 20 minutes or more. b. Repeatability In the analysis
Estimation of Curve Tracing Time in Supercapacitor based PV Characterization
NASA Astrophysics Data System (ADS)
Basu Pal, Sudipta; Das Bhattacharya, Konika; Mukherjee, Dipankar; Paul, Debkalyan
2017-08-01
Smooth and noise-free characterisation of photovoltaic (PV) generators has been revisited with renewed interest in view of large-size PV arrays making inroads into the urban sector of major developing countries. Such practice has recently been confronted by the choice of a suitable data acquisition system and by the lack of a supporting theoretical analysis to justify the accuracy of curve tracing. However, the use of a selected bank of supercapacitors can mitigate these problems to a large extent. Assuming a piecewise linear analysis of the V-I characteristics of a PV generator, an accurate analysis of curve plotting time has been possible. The analysis has been extended to consider the effect of the equivalent series resistance of the supercapacitor, leading to increased accuracy (90-95%) of curve plotting times.
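To illustrate the piecewise-linear timing analysis, the sketch below treats the supercapacitor voltage as governed by dV/dt = I(V)/C and integrates each linear I-V segment in closed form; the segment coefficients and capacitance are invented, and the equivalent-series-resistance effect discussed above is ignored:

```python
import math

def segment_time(C, a, b, V1, V2):
    """Time to sweep from V1 to V2 on a linear I-V segment I(V) = a - b*V (current stays positive)."""
    if b == 0:                              # constant-current portion of the curve
        return C * (V2 - V1) / a
    return (C / b) * math.log((a - b * V1) / (a - b * V2))

C = 58.0                                    # farads, supercapacitor bank (illustrative)
segments = [                                # (a [A], b [A/V], V_start [V], V_end [V])
    (8.00, 0.00, 0.0, 15.0),                # near-constant current region
    (15.50, 0.50, 15.0, 19.0),              # knee of the curve
    (58.25, 2.75, 19.0, 21.0),              # steep drop towards open circuit
]

total = sum(segment_time(C, a, b, v1, v2) for a, b, v1, v2 in segments)
print(f"estimated curve-tracing time ~ {total:.1f} s")
```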
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
Poore, Joshua C; Forlines, Clifton L; Miller, Sarah M; Regan, John R; Irvine, John M
2014-12-01
The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures-personality, cognitive style, motivated cognition-predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated "top-down" cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains.
Forlines, Clifton L.; Miller, Sarah M.; Regan, John R.; Irvine, John M.
2014-01-01
The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures—personality, cognitive style, motivated cognition—predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated “top-down” cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains. PMID:25983670
NASA Astrophysics Data System (ADS)
Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko
2015-01-01
Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often applied, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of classification of tree species at pixel level from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy of six species classes is about 75%.
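Two of the preprocessing steps mentioned above, a Gaussian filter to reduce interband noise and an NDVI-based green-vegetation mask, can be sketched in Python as follows. The cube dimensions, band indices, and NDVI threshold are hypothetical placeholders; the FLAASH/ATCOR atmospheric correction itself is not reproduced here.

import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical hyperspectral cube: rows x cols x bands (reflectance values).
cube = np.random.rand(100, 100, 224).astype(np.float32)

# 1) Gaussian smoothing along the spectral axis to reduce interband noise.
cube_smooth = gaussian_filter1d(cube, sigma=2.0, axis=2)

# 2) NDVI-based green-vegetation mask; the red/NIR band indices are assumed.
red_band, nir_band = 30, 50
red = cube_smooth[:, :, red_band]
nir = cube_smooth[:, :, nir_band]
ndvi = (nir - red) / (nir + red + 1e-6)
vegetation_mask = ndvi > 0.3  # threshold is illustrative only

# Only vegetated pixels would be passed on to the species classifier.
pixels_for_classification = cube_smooth[vegetation_mask]
print(pixels_for_classification.shape)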
Phi, Xuan-Anh; Houssami, Nehmat; Hooning, Maartje J; Riedl, Christopher C; Leach, Martin O; Sardanelli, Francesco; Warner, Ellen; Trop, Isabelle; Saadatmand, Sepideh; Tilanus-Linthorst, Madeleine M A; Helbich, Thomas H; van den Heuvel, Edwin R; de Koning, Harry J; Obdeijn, Inge-Marie; de Bock, Geertruida H
2017-11-01
Women with a strong family history of breast cancer (BC) and without a known gene mutation have an increased risk of developing BC. We aimed to investigate the accuracy of screening using annual mammography with or without magnetic resonance imaging (MRI) for these women outside the general population screening program. An individual patient data (IPD) meta-analysis was conducted using IPD from six prospective screening trials that had included women at increased risk for BC: only women with a strong familial risk for BC and without a known gene mutation were included in this analysis. A generalised linear mixed model was applied to estimate and compare screening accuracy (sensitivity, specificity and predictive values) for annual mammography with or without MRI. There were 2226 women (median age: 41 years, interquartile range 35-47) with 7478 woman-years of follow-up, with a BC rate of 12 (95% confidence interval 9.3-14) in 1000 woman-years. Mammography screening had a sensitivity of 55% (standard error of mean [SE] 7.0) and a specificity of 94% (SE 1.3). Screening with MRI alone had a sensitivity of 89% (SE 4.6) and a specificity of 83% (SE 2.8). Adding MRI to mammography increased sensitivity to 98% (SE 1.8, P < 0.01 compared to mammography alone) but lowered specificity to 79% (SE 2.7, P < 0.01 compared with mammography alone). In this population of women with strong familial BC risk but without a known gene mutation, in whom BC incidence was high both before and after age 50, adding MRI to mammography substantially increased screening sensitivity but also decreased its specificity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Extending the accuracy of the SNAP interatomic potential form
NASA Astrophysics Data System (ADS)
Wood, Mitchell A.; Thompson, Aidan P.
2018-06-01
The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similar to artificial neural network potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting. The quality of this new potential form is measured through a robust cross-validation analysis.
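The step from the linear to the quadratic SNAP form can be viewed as a feature expansion in a linear regression: each atom's energy is modeled as an intercept plus a linear term in the bispectrum components plus a term quadratic in those components, with all coefficients fit to reference energies. The Python sketch below uses random toy data purely to show the structure of that expansion; it is not the authors' tantalum training set or the production SNAP implementation.

import numpy as np

rng = np.random.default_rng(0)
n_atoms, n_bispectrum = 500, 10

B = rng.normal(size=(n_atoms, n_bispectrum))  # bispectrum components per atom (toy)
E_ref = rng.normal(size=n_atoms)              # reference per-atom energies (toy)

def quadratic_features(B):
    """Linear terms plus the upper triangle of the products B_i * B_j."""
    iu = np.triu_indices(B.shape[1])
    quad = np.einsum('ni,nj->nij', B, B)[:, iu[0], iu[1]]
    return np.hstack([np.ones((B.shape[0], 1)), B, quad])

# Linear SNAP form: E = beta0 + beta . B
X_lin = np.hstack([np.ones((n_atoms, 1)), B])
coef_lin, *_ = np.linalg.lstsq(X_lin, E_ref, rcond=None)

# Quadratic SNAP form: adds coefficients for the pairwise bispectrum products.
X_quad = quadratic_features(B)
coef_quad, *_ = np.linalg.lstsq(X_quad, E_ref, rcond=None)

print("linear model parameters:   ", coef_lin.shape[0])
print("quadratic model parameters:", coef_quad.shape[0])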
Koch, Stefan P.; Hägele, Claudia; Haynes, John-Dylan; Heinz, Andreas; Schlagenhauf, Florian; Sterzer, Philipp
2015-01-01
Functional neuroimaging has provided evidence for altered function of mesolimbic circuits implicated in reward processing, first and foremost the ventral striatum, in patients with schizophrenia. While such findings based on significant group differences in brain activations can provide important insights into the pathomechanisms of mental disorders, the use of neuroimaging results from standard univariate statistical analysis for individual diagnosis has proven difficult. In this proof of concept study, we tested whether the predictive accuracy for the diagnostic classification of schizophrenia patients vs. healthy controls could be improved using multivariate pattern analysis (MVPA) of regional functional magnetic resonance imaging (fMRI) activation patterns for the anticipation of monetary reward. With a searchlight MVPA approach using support vector machine classification, we found that the diagnostic category could be predicted from local activation patterns in frontal, temporal, occipital and midbrain regions, with a maximal cluster peak classification accuracy of 93% for the right pallidum. Region-of-interest based MVPA for the ventral striatum achieved a maximal cluster peak accuracy of 88%, whereas the classification accuracy on the basis of standard univariate analysis reached only 75%. Moreover, using support vector regression we could additionally predict the severity of negative symptoms from ventral striatal activation patterns. These results show that MVPA can be used to substantially increase the accuracy of diagnostic classification on the basis of task-related fMRI signal patterns in a regionally specific way. PMID:25799236
Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes
Ding, Quan; Besio, Walter G.
2015-01-01
Conventional electroencephalography with disc electrodes has major drawbacks including poor spatial resolution, selectivity and low signal-to-noise ratio that critically limit its use. Concentric ring electrodes, consisting of several elements including the central disc and a number of concentric rings, are a promising alternative with potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications demonstrating its superiority to the conventional disc electrode, in particular in the accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings, using the (4n + 1)-point method for n ≥ 2, which allows cancellation of all truncation terms up to the order of 2n. An explicit formula based on inversion of a square Vandermonde matrix is derived to make computation of the multipolar Laplacian more efficient. To confirm the analytic result that the accuracy of the Laplacian estimate increases with n, and to assess the significance of this gain in accuracy for practical applications, a finite element method model analysis was performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared, and the obtained results confirm the significance of the increase in Laplacian accuracy with increasing n. PMID:26693200
Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes.
Makeyev, Oleksandr; Ding, Quan; Besio, Walter G
2016-02-01
Conventional electroencephalography with disc electrodes has major drawbacks including poor spatial resolution, selectivity and low signal-to-noise ratio that critically limit its use. Concentric ring electrodes, consisting of several elements including the central disc and a number of concentric rings, are a promising alternative with potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications demonstrating its superiority to the conventional disc electrode, in particular in the accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings, using the (4n + 1)-point method for n ≥ 2, which allows cancellation of all truncation terms up to the order of 2n. An explicit formula based on inversion of a square Vandermonde matrix is derived to make computation of the multipolar Laplacian more efficient. To confirm the analytic result that the accuracy of the Laplacian estimate increases with n, and to assess the significance of this gain in accuracy for practical applications, a finite element method model analysis was performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared, and the obtained results confirm the significance of the increase in Laplacian accuracy with increasing n.
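The Vandermonde-based weight computation mentioned in both records above can be sketched numerically as follows. The sketch assumes rings at radii r, 2r, ..., nr and treats the Laplacian estimate as a weighted sum of (ring potential minus central disc potential) differences, choosing the weights so that the low-order Taylor terms other than the Laplacian cancel; it illustrates the structure of the calculation rather than reproducing the paper's exact derivation.

import numpy as np

def laplacian_ring_weights(n):
    """Weights applied to (ring_k potential - disc potential) for rings at radii k*r.

    The Taylor expansion of the ring-averaged potential contains terms
    proportional to (k*r)^(2m); keeping the m = 1 (Laplacian) term and
    cancelling m = 2..n yields a Vandermonde system in k^2.
    """
    k = np.arange(1, n + 1)
    V = np.vander(k.astype(float) ** 2, N=n, increasing=True).T * (k ** 2)
    # Row m-1 of V holds (k^2)^m for m = 1..n; the target keeps only m = 1.
    target = np.zeros(n)
    target[0] = 1.0
    return np.linalg.solve(V, target)

for n in range(2, 7):
    print(f"n = {n} rings, weights:", np.round(laplacian_ring_weights(n), 4))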
Krahenbuhl, Jason T; Cho, Seok-Hwan; Irelan, Jon; Bansal, Naveen K
2016-08-01
Little peer-reviewed information is available regarding the accuracy and precision of the occlusal contact reproduction of digitally mounted stereolithographic casts. The purpose of this in vitro study was to evaluate the accuracy and precision of occlusal contacts among stereolithographic casts mounted by digital occlusal registrations. Four complete anatomic dentoforms were arbitrarily mounted on a semi-adjustable articulator in maximal intercuspal position and served as the 4 different simulated patients (SP). A total of 60 digital impressions and digital interocclusal registrations were made with a digital intraoral scanner to fabricate 15 sets of mounted stereolithographic (SLA) definitive casts for each dentoform. After receiving a total of 60 SLA casts, polyvinyl siloxane (PVS) interocclusal records were made for each set. The occlusal contacts for each set of SLA casts were measured by recording the amount of light transmitted through the interocclusal records. To evaluate the accuracy between the SP and their respective SLA casts, the areas of actual contact (AC) and near contact (NC) were calculated. For precision analysis, the coefficient of variation (CoV) was used. The data was analyzed with t tests for accuracy and the McKay and Vangel test for precision (α=.05). The accuracy analysis showed a statistically significant difference between the SP and the SLA cast of each dentoform (P<.05). For the AC in all dentoforms, a significant increase was found in the areas of actual contact of SLA casts compared with the contacts present in the SP (P<.05). Conversely, for the NC in all dentoforms, a significant decrease was found in the occlusal contact areas of the SLA casts compared with the contacts in the SP (P<.05). The precision analysis demonstrated the different CoV values between AC (5.8 to 8.8%) and NC (21.4 to 44.6%) of digitally mounted SLA casts, indicating that the overall precision of the SLA cast was low. For the accuracy evaluation, statistically significant differences were found between the occlusal contacts of all digitally mounted SLA casts groups, with an increase in AC values and a decrease in NC values. For the precision assessment, the CoV values of the AC and NC showed the digitally articulated cast's inability to reproduce the uniform occlusal contacts. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gutierrez-Velez, V. H.; DeFries, R. S.
2011-12-01
Oil palm expansion has led to clearing of extensive forest areas in the tropics. However, quantitative assessments of the contribution of oil palm expansion to deforestation have been challenging due in large part to the limitations presented by conventional optical data sets for discriminating plantations from forests and other tree cover types. Recently available information from active remote sensors has opened the possibility of using these data sources to overcome these limitations. The purpose of this analysis is to evaluate the accuracy of oil palm classification when using ALOS/PALSAR active satellite data in conjunction with Landsat information, compared to the use of Landsat data only. The analysis takes place in a focused region around the city of Pucallpa in the Ucayali province of the Peruvian Amazon for the year 2010. Oil palm plantations were separated into five categories consisting of four age classes (0-3, 3-5, 5-10 and > 10 yrs) and an additional class accounting for degraded plantations older than 15 yr. Other land covers were water bodies, unvegetated land, short and tall grass, fallow, secondary vegetation, and forest. Classifications were performed using random forests. Training data for calibration and validation consisted of 411 polygons measured in areas representative of the land covers of interest and totaled 6,367 ha. Overall classification accuracy increased from 89.9% using only Landsat data sets to 94.3% using both Landsat and ALOS/PALSAR. Both user's and producer's accuracy increased in all classes when using both data sets, except for producer's accuracy in short grass, which decreased by 1%. The largest increase in user's accuracy was obtained in oil palm plantations older than 10 years, from 62 to 80%, while producer's accuracy improved the most in plantations in age class 3-5, from 63 to 80%. Results demonstrate the suitability of data from ALOS/PALSAR and other active remote sensors to improve classification of oil palm plantations in age classes and discriminate them from other land covers. Results suggest a potential for improving discrimination of other tree cover types using a combination of active and conventional optical remote sensors.
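The classification setup described above, combining optical and radar features in a random forests classifier, can be sketched with scikit-learn as follows. The feature stacks, class labels, and accuracy values produced by this toy example are placeholders; they are not the actual Landsat/PALSAR layers or training polygons from the study.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_pixels = 5000

# Placeholder feature stacks: 6 Landsat bands and 2 PALSAR polarizations (HH, HV).
landsat = rng.random((n_pixels, 6))
palsar = rng.random((n_pixels, 2))
labels = rng.integers(0, 12, size=n_pixels)  # 12 land-cover / oil-palm age classes

def evaluate(features):
    """Train a random forest and return its hold-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

print("Landsat only:         ", evaluate(landsat))
print("Landsat + ALOS/PALSAR:", evaluate(np.hstack([landsat, palsar])))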
Vibration modes interference in the MEMS resonant pressure sensor
NASA Astrophysics Data System (ADS)
Zhang, Fangfang; Li, Anlin; Bu, Zhenxiang; Wang, Lingyun; Sun, Daoheng; Du, Xiaohui; Gu, Dandan
2017-11-01
A new type of coupled balanced-mass double-ended tuning fork resonator (CBDETF) pressure sensor is fabricated and tested. However, the initial accuracy of the CBDETF pressure sensor was unsatisfactory. Based on systematic analysis and tests, the coupling effect between the operational mode and an interfering mode is considered to be the main cause of the sensor's inaccuracy. To solve this problem, the stiffness of the serpentine beams is increased to raise the resonant frequency of the interfering mode and separate it far from the operational mode. Finally, the accuracy of the CBDETF pressure sensor is improved from ±0.5% to less than ±0.03% of full scale (F.S.).
Portable Electronic Nose Based on Electrochemical Sensors for Food Quality Assessment
Dymerski, Tomasz; Gębicki, Jacek; Namieśnik, Jacek
2017-01-01
The steady increase in global consumption puts a strain on agriculture and might lead to a decrease in food quality. Currently used techniques of food analysis are often labour-intensive and time-consuming and require extensive sample preparation. For that reason, there is a demand for novel methods that could be used for rapid food quality assessment. A technique based on the use of an array of chemical sensors for holistic analysis of the sample’s headspace is called electronic olfaction. In this article, a prototype of a portable, modular electronic nose intended for food analysis is described. Using the SVM method, it was possible to classify samples of poultry meat based on shelf-life with 100% accuracy, and also samples of rapeseed oil based on the degree of thermal degradation with 100% accuracy. The prototype was also used to detect adulterations of extra virgin olive oil with rapeseed oil with 82% overall accuracy. Due to the modular design, the prototype offers the advantages of solutions targeted for analysis of specific food products, at the same time retaining the flexibility of application. Furthermore, its portability allows the device to be used at different stages of the production and distribution process. PMID:29186754
NASA Astrophysics Data System (ADS)
Wu, Binlin; Smith, Jason; Zhang, Lin; Gao, Xin; Alfano, Robert R.
2018-02-01
Worldwide breast cancer incidence has increased by more than twenty percent in the past decade. It is also known that in that time, mortality due to the affliction has increased by fourteen percent. Optical diagnostic techniques, such as Raman spectroscopy, have been explored as a way to increase diagnostic accuracy in a more objective manner while significantly decreasing diagnostic wait times. In this study, Raman spectroscopy with 532-nm excitation was used to induce resonance effects that enhance Stokes Raman scattering from unique biomolecular vibrational modes. Seventy-two Raman spectra (41 cancerous, 31 normal) were collected from nine breast tissue samples by performing a ten-spectra average using a 500-ms acquisition time at each acquisition location. The raw spectral data were subsequently prepared for analysis with background correction and normalization. The spectral data in the Raman shift range of 750-2000 cm-1 were used for analysis, since the detector has its highest sensitivity in this range. The matrix decomposition technique nonnegative matrix factorization (NMF) was then performed on this processed data. Leave-one-out cross-validation using two selected feature components resulted in a sensitivity, specificity, and accuracy of 92.6%, 100%, and 96.0%, respectively. The performance of NMF was also compared to that of principal component analysis (PCA), and NMF was shown to be superior to PCA in this study. This study shows that coupling resonance Raman spectroscopy with a subsequent NMF decomposition has potential for high characterization accuracy in breast cancer detection.
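The decomposition-and-classification pipeline described above can be sketched as follows: nonnegative matrix factorization reduces each spectrum to two component scores, and a simple classifier is then evaluated by leave-one-out cross-validation on those scores. The spectra here are random placeholders, and the logistic-regression classifier is an assumption made for illustration, since the abstract does not state which classifier was applied to the NMF features.

import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
n_spectra, n_wavenumbers = 72, 500

# Placeholder nonnegative Raman spectra (750-2000 cm-1 region, background-corrected).
spectra = rng.random((n_spectra, n_wavenumbers))
labels = np.array([1] * 41 + [0] * 31)  # 41 cancerous, 31 normal (as in the study)

# NMF with two components; the per-spectrum component scores serve as features.
nmf = NMF(n_components=2, init='nndsvda', max_iter=1000, random_state=0)
scores = nmf.fit_transform(spectra)

# Leave-one-out cross-validation of a simple classifier on the NMF scores.
clf = LogisticRegression()
acc = cross_val_score(clf, scores, labels, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {acc:.3f}")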
Local indicators of geocoding accuracy (LIGA): theory and application
Jacquez, Geoffrey M; Rommel, Robert
2009-01-01
Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results, and in others it might invalidate the results. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795
Joshi, Vinayak; Agurto, Carla; VanNess, Richard; Nemeth, Sheila; Soliz, Peter; Barriga, Simon
2014-01-01
Among the most important signs of systemic disease that present on the retina are vascular abnormalities, such as those seen in hypertensive retinopathy. Manual analysis of fundus images by human readers is qualitative and lacks accuracy, consistency, and repeatability. Present semi-automatic methods for vascular evaluation are reported to increase accuracy and reduce reader variability, but they require extensive reader interaction, thus limiting software-aided efficiency. Automation thus holds a twofold promise: first, to decrease variability while increasing accuracy, and second, to increase efficiency. In this paper we propose fully automated software as a second-reader system for comprehensive assessment of retinal vasculature, which aids readers in the quantitative characterization of vessel abnormalities in fundus images. This system provides the reader with objective measures of vascular morphology such as tortuosity and branching angles, as well as highlights of areas with abnormalities such as artery-venous nicking, copper and silver wiring, and retinal emboli, so that the reader can make a final screening decision. To test the efficacy of our system, we evaluated the change in performance of a newly certified retinal reader when grading a set of 40 color fundus images with and without the assistance of the software. The results demonstrated an improvement in the reader's performance with the software assistance, in terms of accuracy of detection of vessel abnormalities, determination of retinopathy, and reading time. This system enables the reader to make a computer-assisted vasculature assessment with high accuracy and consistency, at a reduced reading time.
a Region-Based Multi-Scale Approach for Object-Based Image Analysis
NASA Astrophysics Data System (ADS)
Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.
2016-06-01
Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of individual pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important one in the segmentation process. Estimating the optimal scale parameter, which depends on image resolution, image object size and the characteristics of the study area, is crucial for increasing classification accuracy. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for the eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
Single-Step BLUP with Varying Genotyping Effort in Open-Pollinated Picea glauca.
Ratcliffe, Blaise; El-Dien, Omnia Gamal; Cappa, Eduardo P; Porth, Ilga; Klápště, Jaroslav; Chen, Charles; El-Kassaby, Yousry A
2017-03-10
Maximization of genetic gain in forest tree breeding programs is contingent on the accuracy of the predicted breeding values and precision of the estimated genetic parameters. We investigated the effect of the combined use of contemporary pedigree information and genomic relatedness estimates on the accuracy of predicted breeding values and precision of estimated genetic parameters, as well as rankings of selection candidates, using single-step genomic evaluation (HBLUP). In this study, two traits with diverse heritabilities [tree height (HT) and wood density (WD)] were assessed at various levels of family genotyping efforts (0, 25, 50, 75, and 100%) from a population of white spruce ( Picea glauca ) consisting of 1694 trees from 214 open-pollinated families, representing 43 provenances in Québec, Canada. The results revealed that HBLUP bivariate analysis is effective in reducing the known bias in heritability estimates of open-pollinated populations, as it exposes hidden relatedness, potential pedigree errors, and inbreeding. The addition of genomic information in the analysis considerably improved the accuracy in breeding value estimates by accounting for both Mendelian sampling and historical coancestry that were not captured by the contemporary pedigree alone. Increasing family genotyping efforts were associated with continuous improvement in model fit, precision of genetic parameters, and breeding value accuracy. Yet, improvements were observed even at minimal genotyping effort, indicating that even modest genotyping effort is effective in improving genetic evaluation. The combined utilization of both pedigree and genomic information may be a cost-effective approach to increase the accuracy of breeding values in forest tree breeding programs where shallow pedigrees and large testing populations are the norm. Copyright © 2017 Ratcliffe et al.
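The core of single-step evaluation is the blending of the pedigree relationship matrix A with the marker-based genomic relationship matrix G for the genotyped subset. A commonly used construction, assumed here for illustration (the study's own software and any weighting parameters may differ), builds the inverse of the combined matrix H by adding G^-1 - A22^-1 to the block of A^-1 corresponding to the genotyped individuals, as in the toy example below.

import numpy as np

rng = np.random.default_rng(3)
n_total, n_geno = 8, 4                     # toy sizes: 8 trees, last 4 genotyped
geno_idx = np.arange(n_total - n_geno, n_total)

# Toy positive-definite pedigree (A) and genomic (G) relationship matrices.
M = rng.normal(size=(n_total, n_total))
A = M @ M.T / n_total + np.eye(n_total)
G = A[np.ix_(geno_idx, geno_idx)] + 0.05 * np.eye(n_geno)  # stand-in for marker-based G

A_inv = np.linalg.inv(A)
A22_inv = np.linalg.inv(A[np.ix_(geno_idx, geno_idx)])

# H^-1 equals A^-1 with (G^-1 - A22^-1) added to the genotyped block.
H_inv = A_inv.copy()
H_inv[np.ix_(geno_idx, geno_idx)] += np.linalg.inv(G) - A22_inv

H = np.linalg.inv(H_inv)
print("Combined relationship matrix H:")
print(np.round(H, 3))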
Multivariate prediction of motor diagnosis in Huntington's disease: 12 years of PREDICT‐HD
Long, Jeffrey D.
2015-01-01
Abstract Background It is well known in Huntington's disease that cytosine‐adenine‐guanine expansion and age at study entry are predictive of the timing of motor diagnosis. The goal of this study was to assess whether additional motor, imaging, cognitive, functional, psychiatric, and demographic variables measured at study entry increased the ability to predict the risk of motor diagnosis over 12 years. Methods One thousand seventy‐eight Huntington's disease gene–expanded carriers (64% female) from the Neurobiological Predictors of Huntington's Disease study were followed up for up to 12 y (mean = 5, standard deviation = 3.3) covering 2002 to 2014. No one had a motor diagnosis at study entry, but 225 (21%) carriers prospectively received a motor diagnosis. Analysis was performed with random survival forests, which is a machine learning method for right‐censored data. Results Adding 34 variables along with cytosine‐adenine‐guanine and age substantially increased predictive accuracy relative to cytosine‐adenine‐guanine and age alone. Adding six of the common motor and cognitive variables (total motor score, diagnostic confidence level, Symbol Digit Modalities Test, three Stroop tests) resulted in lower predictive accuracy than the full set, but still had twice the 5‐y predictive accuracy than when using cytosine‐adenine‐guanine and age alone. Additional analysis suggested interactions and nonlinear effects that were characterized in a post hoc Cox regression model. Conclusions Measurement of clinical variables can substantially increase the accuracy of predicting motor diagnosis over and above cytosine‐adenine‐guanine and age (and their interaction). Estimated probabilities can be used to characterize progression level and aid in future studies' sample selection. © 2015 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society PMID:26340420
Multivariate prediction of motor diagnosis in Huntington's disease: 12 years of PREDICT-HD.
Long, Jeffrey D; Paulsen, Jane S
2015-10-01
It is well known in Huntington's disease that cytosine-adenine-guanine expansion and age at study entry are predictive of the timing of motor diagnosis. The goal of this study was to assess whether additional motor, imaging, cognitive, functional, psychiatric, and demographic variables measured at study entry increased the ability to predict the risk of motor diagnosis over 12 years. One thousand seventy-eight Huntington's disease gene-expanded carriers (64% female) from the Neurobiological Predictors of Huntington's Disease study were followed up for up to 12 y (mean = 5, standard deviation = 3.3) covering 2002 to 2014. No one had a motor diagnosis at study entry, but 225 (21%) carriers prospectively received a motor diagnosis. Analysis was performed with random survival forests, which is a machine learning method for right-censored data. Adding 34 variables along with cytosine-adenine-guanine and age substantially increased predictive accuracy relative to cytosine-adenine-guanine and age alone. Adding six of the common motor and cognitive variables (total motor score, diagnostic confidence level, Symbol Digit Modalities Test, three Stroop tests) resulted in lower predictive accuracy than the full set, but still had twice the 5-y predictive accuracy than when using cytosine-adenine-guanine and age alone. Additional analysis suggested interactions and nonlinear effects that were characterized in a post hoc Cox regression model. Measurement of clinical variables can substantially increase the accuracy of predicting motor diagnosis over and above cytosine-adenine-guanine and age (and their interaction). Estimated probabilities can be used to characterize progression level and aid in future studies' sample selection. © 2015 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society.
Pathak, Neha; Dodds, Julie; Khan, Khalid
2014-01-01
Objective To determine the accuracy of testing for human papillomavirus (HPV) DNA in urine in detecting cervical HPV in sexually active women. Design Systematic review and meta-analysis. Data sources Searches of electronic databases from inception until December 2013, checks of reference lists, manual searches of recent issues of relevant journals, and contact with experts. Eligibility criteria Test accuracy studies in sexually active women that compared detection of urine HPV DNA with detection of cervical HPV DNA. Data extraction and synthesis Data relating to patient characteristics, study context, risk of bias, and test accuracy. 2×2 tables were constructed and synthesised by bivariate mixed effects meta-analysis. Results 16 articles reporting on 14 studies (1443 women) were eligible for meta-analysis. Most used commercial polymerase chain reaction methods on first void urine samples. Urine detection of any HPV had a pooled sensitivity of 87% (95% confidence interval 78% to 92%) and specificity of 94% (95% confidence interval 82% to 98%). Urine detection of high risk HPV had a pooled sensitivity of 77% (68% to 84%) and specificity of 88% (58% to 97%). Urine detection of HPV 16 and 18 had a pooled sensitivity of 73% (56% to 86%) and specificity of 98% (91% to 100%). Metaregression revealed an increase in sensitivity when urine samples were collected as first void compared with random or midstream (P=0.004). Limitations The major limitations of this review are the lack of a strictly uniform method for the detection of HPV in urine and the variation in accuracy between individual studies. Conclusions Testing urine for HPV seems to have good accuracy for the detection of cervical HPV, and testing first void urine samples is more accurate than random or midstream sampling. When cervical HPV detection is considered difficult in particular subgroups, urine testing should be regarded as an acceptable alternative. PMID:25232064
NASA Astrophysics Data System (ADS)
Zakharov, V. P.; Bratchenko, I. A.; Artemyev, D. N.; Myakinin, O. O.; Khristoforova, Y. A.; Kozlov, S. V.; Moryatov, A. A.
2015-07-01
The combined application of Raman and autofluorescence spectroscopy in the visible and near-infrared regions for the analysis of malignant neoplasms of human skin was demonstrated. Ex vivo experiments were performed for 130 skin tissue samples: 28 malignant melanomas, 19 basal cell carcinomas, 15 benign tumors, 9 nevi and 59 normal tissues. The proposed method of Raman spectral analysis allows malignant melanoma to be differentiated from other skin tissues with an accuracy of 84% (sensitivity of 97%, specificity of 72%). Autofluorescence analysis in the near-infrared and visible regions helped us to increase the diagnostic accuracy by 5-10%. Registration of autofluorescence in the near-infrared region is realized in the same optical unit as the Raman spectroscopy. Thus, the proposed combined approach makes it possible to survey large skin areas with autofluorescence spectral analysis while determining the neoplasm type precisely with Raman spectroscopy.
NASA Astrophysics Data System (ADS)
Prince, John R.
1982-12-01
Sensitivity, specificity, and predictive accuracy have been shown to be useful measures of the clinical efficacy of diagnostic tests and can be used to predict the potential improvement in diagnostic certitude resulting from the introduction of a competing technology. This communication demonstrates how the informal use of clinical decision analysis may guide health planners in the allocation of resources, purchasing decisions, and implementation of high technology. For didactic purposes the focus is on a comparison between conventional planar radioscintigraphy (RS) and single photon transverse section emission computed tomography (SPECT). For example, positive predictive accuracy (PPA) for brain RS in a specialist hospital with a 50% disease prevalence is about 95%. SPECT should increase this predicted accuracy to 96%. In a primary care hospital with only a 15% disease prevalence, the PPA is only 77%, and SPECT may increase this accuracy to about 79%. Similar calculations based on published data show that marginal improvements are expected with SPECT in the liver. It is concluded that: a) The decision to purchase a high technology imaging modality such as SPECT for clinical purposes should be analyzed on an individual organ system and institutional basis. High technology may be justified in specialist hospitals but not necessarily in primary care hospitals. This is more dependent on disease prevalence than procedure volume; b) It is questionable whether SPECT imaging will be competitive with standard RS procedures. Research should concentrate on the development of different medical applications.
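The prevalence dependence illustrated above follows directly from Bayes' theorem: positive predictive accuracy equals sens*p / (sens*p + (1 - spec)*(1 - p)). The short calculation below reproduces the quoted 95% and 77% figures under the assumption that planar RS operates at roughly 95% sensitivity and 95% specificity; the original operating characteristics are not restated in the abstract, so these inputs are illustrative.

def positive_predictive_accuracy(sensitivity, specificity, prevalence):
    """P(disease | positive test) from Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed operating point for planar radioscintigraphy (illustrative only).
sens, spec = 0.95, 0.95

for setting, prev in [("specialist hospital", 0.50), ("primary care hospital", 0.15)]:
    ppa = positive_predictive_accuracy(sens, spec, prev)
    print(f"{setting}: prevalence {prev:.0%} -> PPA {ppa:.0%}")
# specialist hospital: prevalence 50% -> PPA 95%
# primary care hospital: prevalence 15% -> PPA 77%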
Weissberger, Gali H.; Strong, Jessica V.; Stefanidis, Kayla B.; Summers, Mathew J.; Bondi, Mark W.; Stricker, Nikki H.
2018-01-01
With an increasing focus on biomarkers in dementia research, illustrating the role of neuropsychological assessment in detecting mild cognitive impairment (MCI) and Alzheimer’s dementia (AD) is important. This systematic review and meta-analysis, conducted in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) standards, summarizes the sensitivity and specificity of memory measures in individuals with MCI and AD. Both meta-analytic and qualitative examination of AD versus healthy control (HC) studies (n = 47) revealed generally high sensitivity and specificity (≥ 80% for AD comparisons) for measures of immediate (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%), especially those involving word-list recall. Examination of MCI versus HC studies (n = 38) revealed generally lower diagnostic accuracy for both immediate (sensitivity = 72%, specificity = 81%) and delayed memory (sensitivity = 75%, specificity = 81%). Measures that differentiated AD from other conditions (n = 10 studies) yielded mixed results, with generally high sensitivity in the context of low or variable specificity. Results confirm that memory measures have high diagnostic accuracy for identification of AD, are promising but require further refinement for identification of MCI, and provide support for ongoing investigation of neuropsychological assessment as a cognitive biomarker of preclinical AD. Emphasizing diagnostic test accuracy statistics over null hypothesis testing in future studies will promote the ongoing use of neuropsychological tests as Alzheimer’s disease research and clinical criteria increasingly rely upon cerebrospinal fluid (CSF) and neuroimaging biomarkers. PMID:28940127
van Wijck, Kim; Bessems, Babs Afm; van Eijk, Hans Mh; Buurman, Wim A; Dejong, Cornelis Hc; Lenaerts, Kaatje
2012-01-01
Increased intestinal permeability is an important measure of disease activity and prognosis. Currently, many permeability tests are available and no consensus has been reached as to which test is most suitable. The aim of this study was to compare urinary probe excretion and diagnostic accuracy of a polyethylene glycol (PEG) assay and a dual sugar assay in a double-blinded crossover study. Gastrointestinal permeability was measured in nine volunteers using PEG 400, PEG 1500, and PEG 3350 or lactulose-rhamnose. On 4 separate days, permeability was analyzed after oral intake of placebo or indomethacin, a drug known to increase intestinal permeability. Plasma intestinal fatty acid binding protein and calprotectin levels were determined to verify compromised intestinal integrity after indomethacin consumption. Urinary samples were collected at baseline, hourly up to 5 hours after probe intake, and between 5 and 24 hours. Urinary excretion of PEG and sugars was determined using high-pressure liquid chromatography-evaporative light scattering detection and liquid chromatography-mass spectrometry, respectively. Intake of indomethacin increased plasma intestinal fatty acid-binding protein and calprotectin levels, reflecting loss of intestinal integrity and inflammation. In this state of indomethacin-induced gastrointestinal compromise, urinary excretion of the three PEG probes and lactulose increased compared with placebo. Urinary PEG 400 excretion, the PEG 3350/PEG 400 ratio, and the lactulose/rhamnose ratio could accurately detect indomethacin-induced increases in gastrointestinal permeability, especially within 2 hours of probe intake. Hourly urinary excretion and diagnostic accuracy of PEG and sugar probes show high concordance for detection of indomethacin-induced increases in gastrointestinal permeability. This comparative study improves our knowledge of permeability analysis in man by providing a clear overview of both tests and demonstrates equivalent performance in the current setting.
Self-audit of lockout/tagout in manufacturing workplaces: A pilot study.
Yamin, Samuel C; Parker, David L; Xi, Min; Stanley, Rodney
2017-05-01
Occupational health and safety (OHS) self-auditing is a common practice in industrial workplaces. However, few audit instruments have been tested for inter-rater reliability and accuracy. A lockout/tagout (LOTO) self-audit checklist was developed for use in manufacturing enterprises. It was tested for inter-rater reliability and accuracy using responses of business self-auditors and external auditors. Inter-rater reliability at ten businesses was excellent (κ = 0.84). Business self-auditors had high (100%) accuracy in identifying elements of LOTO practice that were present as well those that were absent (81% accuracy). Reliability and accuracy increased further when problematic checklist questions were removed from the analysis. Results indicate that the LOTO self-audit checklist would be useful in manufacturing firms' efforts to assess and improve their LOTO programs. In addition, a reliable self-audit instrument removes the need for external auditors to visit worksites, thereby expanding capacity for outreach and intervention while minimizing costs. © 2017 Wiley Periodicals, Inc.
Using Time Series Analysis to Predict Cardiac Arrest in a PICU.
Kennedy, Curtis E; Aoki, Noriaki; Mariscalco, Michele; Turley, James P
2015-11-01
To build and test cardiac arrest prediction models in a PICU, using time series analysis as input, and to measure changes in prediction accuracy attributable to different classes of time series data. Retrospective cohort study. Thirty-one bed academic PICU that provides care for medical and general surgical (not congenital heart surgery) patients. Patients experiencing a cardiac arrest in the PICU and requiring external cardiac massage for at least 2 minutes. None. One hundred three cases of cardiac arrest and 109 control cases were used to prepare a baseline dataset that consisted of 1,025 variables in four data classes: multivariate, raw time series, clinical calculations, and time series trend analysis. We trained 20 arrest prediction models using a matrix of five feature sets (combinations of data classes) with four modeling algorithms: linear regression, decision tree, neural network, and support vector machine. The reference model (multivariate data with regression algorithm) had an accuracy of 78% and 87% area under the receiver operating characteristic curve. The best model (multivariate + trend analysis data with support vector machine algorithm) had an accuracy of 94% and 98% area under the receiver operating characteristic curve. Cardiac arrest predictions based on a traditional model built with multivariate data and a regression algorithm misclassified cases 3.7 times more frequently than predictions that included time series trend analysis and built with a support vector machine algorithm. Although the final model lacks the specificity necessary for clinical application, we have demonstrated how information from time series data can be used to increase the accuracy of clinical prediction models.
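The best-performing combination reported above, multivariate data plus time-series trend features fed to a support vector machine, can be sketched as follows. The synthetic vital-sign series, the single slope-based trend feature, and the model settings are placeholders standing in for the study's 1,025-variable dataset and its full feature classes.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_cases, n_timepoints = 212, 24  # 103 arrests + 109 controls, hourly samples (toy)

heart_rate = rng.normal(120, 20, size=(n_cases, n_timepoints))
labels = np.array([1] * 103 + [0] * 109)

# Multivariate feature: the most recent observation of the vital sign.
latest_value = heart_rate[:, -1:]

# Trend feature: least-squares slope of the vital sign over the time window.
t = np.arange(n_timepoints)
slope = np.polyfit(t, heart_rate.T, deg=1)[0].reshape(-1, 1)

X = np.hstack([latest_value, slope])
model = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
print("Cross-validated accuracy:", cross_val_score(model, X, labels, cv=5).mean())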
Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polly, B.
2011-09-01
This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.
Multi-task Gaussian process for imputing missing data in multi-trait and multi-environment trials.
Hori, Tomoaki; Montcho, David; Agbangla, Clement; Ebana, Kaworu; Futakuchi, Koichi; Iwata, Hiroyoshi
2016-11-01
A method based on a multi-task Gaussian process using self-measuring similarity gave increased accuracy for imputing missing phenotypic data in multi-trait and multi-environment trials. Multi-environmental trial (MET) data often encounter the problem of missing data. Accurate imputation of missing data makes subsequent analysis more effective and the results easier to understand. Moreover, accurate imputation may help to reduce the cost of phenotyping for thinned-out lines tested in METs. METs are generally performed for multiple traits that are correlated to each other. Correlation among traits can be useful information for imputation, but single-trait-based methods cannot utilize information shared by traits that are correlated. In this paper, we propose imputation methods based on a multi-task Gaussian process (MTGP) using self-measuring similarity kernels reflecting relationships among traits, genotypes, and environments. This framework allows us to use genetic correlation among multi-trait multi-environment data and also to combine MET data and marker genotype data. We compared the accuracy of three MTGP methods and iterative regularized PCA using rice MET data. Two scenarios for the generation of missing data at various missing rates were considered. The MTGP performed a better imputation accuracy than regularized PCA, especially at high missing rates. Under the 'uniform' scenario, in which missing data arise randomly, inclusion of marker genotype data in the imputation increased the imputation accuracy at high missing rates. Under the 'fiber' scenario, in which missing data arise in all traits for some combinations between genotypes and environments, the inclusion of marker genotype data decreased the imputation accuracy for most traits while increasing the accuracy in a few traits remarkably. The proposed methods will be useful for solving the missing data problem in MET data.
Fan, Shu-Han; Chou, Chia-Ching; Chen, Wei-Chen; Fang, Wai-Chi
2015-01-01
In this study, an effective real-time obstructive sleep apnea (OSA) detection method based on frequency analysis of the ECG-derived respiration (EDR) signal and heart rate variability (HRV) is proposed. Whereas traditional polysomnography (PSG) requires several physiological signals to be measured from patients, the proposed OSA detection method uses only ECG signals to determine the time intervals of OSA. To make hardware implementation feasible for real-time detection and portable applications, a simplified Lomb periodogram is used to perform the frequency analysis of EDR and HRV in this study. The experimental results indicate that integrating the EDR and HRV indices effectively increases the overall accuracy, yielding a specificity (Sp) of 91%, a sensitivity (Se) of 95.7%, and an accuracy of 93.2%.
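The frequency-analysis step can be sketched with SciPy's Lomb-Scargle periodogram applied to an unevenly sampled heart-rate-variability series, as below. The simulated beat times, the frequency band, and the apnea index are placeholders, and the standard scipy routine stands in for the paper's simplified, hardware-oriented periodogram.

import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(5)

# Simulated beat times (s) and instantaneous heart rate with a slow oscillation,
# standing in for the HRV series derived from the ECG R-peaks.
beat_times = np.cumsum(rng.normal(0.8, 0.05, size=300))
hrv = 60.0 / np.diff(beat_times, prepend=0.0)
hrv += 5.0 * np.sin(2 * np.pi * 0.02 * beat_times)  # ~0.02 Hz apnea-related rhythm
hrv -= hrv.mean()                                    # remove the mean before the periodogram

# Lomb-Scargle handles the uneven sampling of beat-to-beat data directly;
# lombscargle expects angular frequencies.
freqs = np.linspace(0.005, 0.5, 1000)  # Hz
power = lombscargle(beat_times, hrv, 2 * np.pi * freqs)

# Power concentrated in a very-low-frequency band serves as a toy apnea index.
vlf_band = (freqs >= 0.01) & (freqs <= 0.05)
print("VLF / total power ratio:", power[vlf_band].sum() / power.sum())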
A low-cost sensor for high density urban CO2 monitoring
NASA Astrophysics Data System (ADS)
Zeng, N.; Martin, C.
2015-12-01
The high spatial-temporal variability of greenhouse gases and other pollution sources in an urban environment cannot be easily resolved with current high-accuracy but expensive instruments. We have tested a small, low-cost NDIR CO2 sensor designed for potential use in high-density urban monitoring. It has a manufacturer's specified accuracy of ±30 parts per million (ppm). However, initial results from running it in parallel with a research-grade greenhouse gas analyzer have shown that the absolute accuracy of the sensor is within ±5 ppm, suggesting its utility for sensing ambient air variations in carbon dioxide. Through a multivariate analysis, we have determined a correction procedure that, by accounting for environmental temperature, humidity, air pressure, and the device's span and offset, can further increase the accuracy of the collected data. We will show results from rooftop measurements over a period of one year and CO2 tracking data in the Washington-Baltimore Metropolitan area.
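The multivariate correction can be sketched as an ordinary least-squares fit of the reference analyzer's reading against the raw sensor output and the environmental covariates, as below. The simulated data and the resulting coefficients are illustrative only, not the calibration derived in the study.

import numpy as np

rng = np.random.default_rng(11)
n = 2000

# Simulated co-located measurements: raw low-cost sensor CO2 (ppm), temperature (C),
# relative humidity (%), pressure (hPa), and the research-grade reference CO2 (ppm).
raw_co2 = 400 + 30 * rng.random(n)
temp = 10 + 20 * rng.random(n)
rh = 30 + 50 * rng.random(n)
pres = 990 + 30 * rng.random(n)
reference = (raw_co2 + 0.8 * (temp - 20) - 0.1 * (rh - 50)
             + 0.05 * (pres - 1013) + rng.normal(0, 2, n))

# Design matrix with an intercept (the sensor's offset) and environmental terms.
X = np.column_stack([np.ones(n), raw_co2, temp, rh, pres])
coef, *_ = np.linalg.lstsq(X, reference, rcond=None)

corrected = X @ coef
rmse_raw = np.sqrt(np.mean((raw_co2 - reference) ** 2))
rmse_corr = np.sqrt(np.mean((corrected - reference) ** 2))
print(f"RMSE before correction: {rmse_raw:.1f} ppm, after: {rmse_corr:.1f} ppm")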
ITC Guidelines on Quality Control in Scoring, Test Analysis, and Reporting of Test Scores
ERIC Educational Resources Information Center
Allalouf, Avi
2014-01-01
The Quality Control (QC) Guidelines are intended to increase the efficiency, precision, and accuracy of the scoring, analysis, and reporting process of testing. The QC Guidelines focus on large-scale testing operations where multiple forms of tests are created for use on set dates. However, they may also be used for a wide variety of other testing…
Peer Assessment in the Digital Age: A Meta-Analysis Comparing Peer and Teacher Ratings
ERIC Educational Resources Information Center
Li, Hongli; Xiong, Yao; Zang, Xiaojiao; Kornhaber, Mindy L.; Lyu, Youngsun; Chung, Kyung Sun; Suen, Hoi K.
2016-01-01
Given the wide use of peer assessment, especially in higher education, the relative accuracy of peer ratings compared to teacher ratings is a major concern for both educators and researchers. This concern has grown with the increase of peer assessment in digital platforms. In this meta-analysis, using a variance-known hierarchical linear modelling…
Simulation of springback and microstructural analysis of dual phase steels
NASA Astrophysics Data System (ADS)
Kalyan, T. Sri.; Wei, Xing; Mendiguren, Joseba; Rolfe, Bernard
2013-12-01
With increasing demand for weight reduction and better crashworthiness abilities in car development, advanced high strength Dual Phase (DP) steels have been progressively used when making automotive parts. The higher strength steels exhibit higher springback and lower dimensional accuracy after stamping. This has necessitated the use of simulation of each stamped component prior to production to estimate the part's dimensional accuracy. Understanding the micro-mechanical behaviour of AHSS sheet may provide more accuracy to stamping simulations. This work can be divided basically into two parts: first modelling a standard channel forming process; second modelling the micro-structure of the process. The standard top hat channel forming process, benchmark NUMISHEET'93, is used for investigating springback effect of WISCO Dual Phase steels. The second part of this work includes the finite element analysis of microstructures to understand the behaviour of the multi-phase steel at a more fundamental level. The outcomes of this work will help in the dimensional control of steels during manufacturing stage based on the material's microstructure.
Asiimwe, Stephen; Oloya, James; Song, Xiao; Whalen, Christopher C
2014-12-01
Unsupervised HIV self-testing (HST) has potential to increase knowledge of HIV status; however, its accuracy is unknown. To estimate the accuracy of unsupervised HST in field settings in Uganda, we performed a non-blinded, randomized controlled, non-inferiority trial of unsupervised compared with supervised HST among selected high HIV risk fisherfolk (22.1 % HIV prevalence) in three fishing villages in Uganda between July and September 2013. The study enrolled 246 participants and randomized them in a 1:1 ratio to unsupervised HST or provider-supervised HST. In an intent-to-treat analysis, the HST sensitivity was 90 % in the unsupervised arm and 100 % among the provider-supervised, yielding a difference of -10 % (90 % CI -21, 1 %); non-inferiority was not shown. In a per protocol analysis, the difference in sensitivity was -5.6 % (90 % CI -14.4, 3.3 %) and did show non-inferiority. We conclude that unsupervised HST is feasible in rural Africa and may be non-inferior to provider-supervised HST.
Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin
2015-10-25
I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation of these factors across wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
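The kind of configuration study described above can be sketched with PyWavelets: decompose a field with a chosen wavelet and level, keep only the largest coefficients (lossy compression), reconstruct, and compare an analysis quantity computed on the original and compressed versions. The wavelet names, retention fraction, and the mean-square "analysis routine" below are illustrative stand-ins for the turbulent-flow analyses and storage formats used in the study.

import numpy as np
import pywt

rng = np.random.default_rng(2)
field = rng.normal(size=(256, 256))  # stand-in for one slice of a flow field

def compress(field, wavelet, level, keep_fraction):
    """Wavelet-compress a 2-D field by zeroing all but the largest coefficients."""
    coeffs = pywt.wavedec2(field, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr[np.abs(arr) < threshold] = 0.0
    return pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), wavelet)

for wavelet in ['haar', 'db4', 'bior4.4']:
    recon = compress(field, wavelet, level=4, keep_fraction=0.05)
    # Toy "analysis routine": mean squared value, a stand-in for kinetic energy.
    err = abs(np.mean(recon ** 2) - np.mean(field ** 2)) / np.mean(field ** 2)
    print(f"{wavelet:8s} relative error in the analysis quantity: {err:.3%}")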
NASA Astrophysics Data System (ADS)
Hayana Hasibuan, Eka; Mawengkang, Herman; Efendi, Syahril
2017-12-01
This research uses the Particle Swarm Optimization (PSO) algorithm to optimize the feature weights of the Voting Feature Interval 5 (VFI5) algorithm, in order to establish a model that combines PSO with VFI5. Optimizing the feature weights for the diabetes and dyspepsia data is considered important because these conditions affect many people, and inaccuracy in determining the most dominant feature weights in the data could have fatal consequences. With the PSO algorithm, accuracy in fold 1 increased from 92.31% to 96.15%, an increase of 3.8%; in fold 2 the VFI5 accuracy of 92.52% was also obtained with the PSO algorithm, so accuracy was unchanged; and in fold 3 accuracy increased from 85.19% to 96.29%, an increase of 11%. The total accuracy across the three trials increased by 14%. In general, the Particle Swarm Optimization algorithm succeeded in increasing accuracy in several folds, so it can be concluded that the PSO algorithm is well suited to optimizing the VFI5 classification algorithm.
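The abstract gives no implementation details, so the following is a minimal, hypothetical sketch of the general idea: a particle swarm searches over feature-weight vectors and scores each candidate by cross-validated classification accuracy. A weighted nearest-neighbour classifier stands in for VFI5 (which is not implemented here), and all PSO hyperparameters (inertia w, acceleration constants c1 and c2, swarm size, iteration count) are assumed values.

```python
import numpy as np

def cv_accuracy(weights, X, y, folds=3):
    """Accuracy of a weighted 1-NN classifier under simple k-fold cross-validation."""
    n = len(y)
    idx = np.arange(n)
    correct = 0
    for fold in np.array_split(idx, folds):
        train = np.setdiff1d(idx, fold)
        Xw_tr, Xw_te = X[train] * weights, X[fold] * weights
        for i, x in enumerate(Xw_te):
            nearest = train[np.argmin(np.linalg.norm(Xw_tr - x, axis=1))]
            correct += (y[nearest] == y[fold][i])
    return correct / n

def pso_feature_weights(X, y, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.uniform(0, 1, (n_particles, dim))      # candidate feature-weight vectors
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([cv_accuracy(p, X, y) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)
        fit = np.array([cv_accuracy(p, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)]
    return gbest, pbest_fit.max()
```

A call such as `weights, acc = pso_feature_weights(X, y)` on a feature matrix `X` and label vector `y` would return the best weight vector found and its cross-validated accuracy.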
Blaya, Joaquin A; Shin, Sonya S; Yagui, Martin J A; Yale, Gloria; Suarez, Carmen; Asencios, Luis; Fraser, Hamish
2007-10-11
We created a web-based laboratory information system, e-Chasqui, to connect public laboratories with health centers in order to improve communication and analysis. After one year, we performed pre- and post-implementation assessments of communication delays and found that e-Chasqui maintained the average delay but eliminated delays of over 60 days. Adding digital verification maintained the average delay but should increase accuracy. We are currently performing a randomized evaluation of the impacts of e-Chasqui.
NASA Astrophysics Data System (ADS)
Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.
2016-03-01
The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
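A compact way to reproduce this kind of comparison is repeated stratified 10-fold cross-validation over a set of candidate classifiers, as sketched below with scikit-learn. The synthetic six-class data set stands in for the MSI training database, and only a subset of the algorithms named in the abstract is shown; the weighted and ensemble variants are omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)

# Synthetic stand-in for the MSI training database (six tissue classes).
X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT":  DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
}

# Repeated 10-fold cross-validation, loosely mirroring the study's test protocol.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```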
Knowledge Mapping: A Multipurpose Task Analysis Tool.
ERIC Educational Resources Information Center
Esque, Timm J.
1988-01-01
Describes knowledge mapping, a tool developed to increase the objectivity and accuracy of task difficulty ratings for job design. Application in a semiconductor manufacturing environment is discussed, including identifying prerequisite knowledge for a given task; establishing training development priorities; defining knowledge levels; identifying…
Sex estimation by femur in modern Thai population.
Monum, T; Prasitwattanseree, S; Das, S; Siriphimolwat, P; Mahakkanukrauh, P
2017-01-01
Sex estimation is an important step of postmortem investigation, and the femur is a useful bone for sex estimation using metric analysis methods. Although a sex estimation method using the femur has been reported for Thais, secular change over time means the anthropological data need to be renewed. Thus, the aim of this study is to re-evaluate sex estimation by femur in Thais. 97 adult male and 103 female femora were randomly chosen from the Forensic Osteology Research Center and 6 measurements were taken. Compared with previous Thai data, midshaft diameter tended to increase while femoral head and epicondylar breadth remained stable; when the previous discriminant function based on vertical head diameter and epicondylar breadth was tested, the prediction accuracy was lower than previously reported. From the new data, epicondylar breadth is the best single variable for distinguishing males and females, with 88.7 percent accuracy, followed by transverse and vertical head diameter at 86.7 percent and femoral neck diameter at 81.7 percent accuracy. Multivariate discriminant analysis indicated that transverse head diameter and epicondylar breadth together gave the highest accuracy, 89.7 percent. The accuracy of the femur was close to previously reported sex estimation by the talus and calcaneus in the Thai population; the femur is therefore especially useful in cases where only lower limb remains are present and the pelvis is absent.
Stack Number Influence on the Accuracy of Aster Gdem (V2)
NASA Astrophysics Data System (ADS)
Mirzadeh, S. M. J.; Alizadeh Naeini, A.; Fatemi, S. B.
2017-09-01
In this research, the influence of stack number (STKN) on the accuracy of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM (GDEM) has been investigated. For this purpose, two data sets of ASTER and reference DEMs from two study areas with different topography (Bomehen and Tazehabad) were used. The results show that in both study areas a STKN of 19 gives the minimum error, although this minimum differs only slightly from the errors at other STKN values. The analysis of slope, STKN, and error values shows that there is no strong correlation between these parameters in either study area. For example, the mean absolute error increases with changing topography and with increasing slope and elevation of cells, but changes in STKN have no important effect on the error values. Furthermore, at high STKN values the effect of slope on elevation accuracy practically decreases. Also, there is no strong correlation between the residuals and STKN in the ASTER GDEM.
Lawryk, Nicholas J; Feng, H Amy; Chen, Bean T
2009-07-01
Recent advances in field-portable X-ray fluorescence (FP XRF) spectrometer technology have made it a potentially valuable screening tool for the industrial hygienist to estimate worker exposures to airborne metals. Although recent studies have shown that FP XRF technology may be better suited for qualitative or semiquantitative analysis of airborne lead in the workplace, these studies have not extensively addressed its ability to measure other elements. This study involved a laboratory-based evaluation of a representative model FP XRF spectrometer to measure elements commonly encountered in workplace settings that may be collected on air sample filter media, including chromium, copper, iron, manganese, nickel, lead, and zinc. The evaluation included assessments of (1) response intensity with respect to location on the probe window, (2) limits of detection for five different filter media, (3) limits of detection as a function of analysis time, and (4) bias, precision, and accuracy estimates. Teflon, polyvinyl chloride, polypropylene, and mixed cellulose ester filter media all had similarly low limits of detection for the set of elements examined. Limits of detection, bias, and precision generally improved with increasing analysis time. Bias, precision, and accuracy estimates generally improved with increasing element concentration. Accuracy estimates met the National Institute for Occupational Safety and Health criterion for nearly all the element and concentration combinations. Based on these results, FP XRF spectrometry shows potential to be useful in the assessment of worker inhalation exposures to other metals in addition to lead.
Reuse of imputed data in microarray analysis increases imputation efficiency
Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su
2004-01-01
Background The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analysis require a complete data set. A few imputation methods for DNA microarray data have been introduced, but the efficiency of the methods was low and the validity of imputed values in these methods had not been fully checked. Results We developed a new cluster-based imputation method called sequential K-nearest neighbor (SKNN) method. This imputes the missing values sequentially from the gene having least missing values, and uses the imputed values for the later imputation. Although it uses the imputed values, the efficiency of this new method is greatly improved in its accuracy and computational complexity over the conventional KNN-based method and other methods based on maximum likelihood estimation. The performance of SKNN was in particular higher than other imputation methods for the data with high missing rates and large number of experiments. Application of Expectation Maximization (EM) to the SKNN method improved the accuracy, but increased computational time proportional to the number of iterations. The Multiple Imputation (MI) method, which is well known but not applied previously to microarray data, showed a similarly high accuracy as the SKNN method, with slightly higher dependency on the types of data sets. Conclusions Sequential reuse of imputed data in KNN-based imputation greatly increases the efficiency of imputation. The SKNN method should be practically useful to save the data of some microarray experiments which have high amounts of missing entries. The SKNN method generates reliable imputed values which can be used for further cluster-based analysis of microarray data. PMID:15504240
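A minimal sketch of the sequential reuse idea is shown below: genes (rows) are processed in order of increasing missingness, each gene's missing entries are filled from its K nearest complete neighbours, and the freshly imputed gene immediately joins the neighbour pool. It assumes at least a few genes start with no missing values, uses unweighted neighbour means, and is not the authors' SKNN implementation.

```python
import numpy as np

def sknn_impute(X, k=10):
    """Sequential KNN imputation: genes (rows) with the fewest missing values are
    imputed first, and the filled-in rows are reused as neighbours later on."""
    X = X.astype(float).copy()
    order = np.argsort(np.isnan(X).sum(axis=1))       # fewest missing values first
    complete = [i for i in order if not np.isnan(X[i]).any()]
    for i in order:
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        pool = np.array(complete)
        # Euclidean distance to already-complete (or already-imputed) genes,
        # computed only over the columns observed for gene i.
        d = np.linalg.norm(X[pool][:, obs] - X[i, obs], axis=1)
        neighbours = pool[np.argsort(d)[:k]]
        X[i, miss] = X[neighbours][:, miss].mean(axis=0)
        complete.append(i)                             # reuse the imputed gene
    return X
```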
Simulating the effect of non-linear mode coupling in cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Kiessling, A.; Taylor, A. N.; Heavens, A. F.
2011-09-01
Fisher Information Matrix methods are commonly used in cosmology to estimate the accuracy that cosmological parameters can be measured with a given experiment and to optimize the design of experiments. However, the standard approach usually assumes both data and parameter estimates are Gaussian-distributed. Further, for survey forecasts and optimization it is usually assumed that the power-spectrum covariance matrix is diagonal in Fourier space. However, in the low-redshift Universe, non-linear mode coupling will tend to correlate small-scale power, moving information from lower to higher order moments of the field. This movement of information will change the predictions of cosmological parameter accuracy. In this paper we quantify this loss of information by comparing naïve Gaussian Fisher matrix forecasts with a maximum likelihood parameter estimation analysis of a suite of mock weak lensing catalogues derived from N-body simulations, based on the SUNGLASS pipeline, for a 2D and tomographic shear analysis of a Euclid-like survey. In both cases, we find that the 68 per cent confidence area of the Ωm-σ8 plane increases by a factor of 5. However, the marginal errors increase by just 20-40 per cent. We propose a new method to model the effects of non-linear shear-power mode coupling in the Fisher matrix by approximating the shear-power distribution as a multivariate Gaussian with a covariance matrix derived from the mock weak lensing survey. We find that this approximation can reproduce the 68 per cent confidence regions of the full maximum likelihood analysis in the Ωm-σ8 plane to high accuracy for both 2D and tomographic weak lensing surveys. Finally, we perform a multiparameter analysis of Ωm, σ8, h, ns, w0 and wa to compare the Gaussian and non-linear mode-coupled Fisher matrix contours. The 6D volume of the 1σ error contours for the non-linear Fisher analysis is a factor of 3 larger than for the Gaussian case, and the shape of the 68 per cent confidence volume is modified. We propose that future Fisher matrix estimates of cosmological parameter accuracies should include mode-coupling effects.
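The Gaussian Fisher forecast underlying such comparisons can be written as F_ij = (dC/dtheta_i)^T Cov^-1 (dC/dtheta_j) for a data vector C(theta). The sketch below evaluates this with finite-difference derivatives for a toy two-parameter power spectrum and a diagonal covariance; the Euclid-like survey specifics, the SUNGLASS mocks, and the mode-coupled covariance discussed in the abstract are not reproduced.

```python
import numpy as np

def fisher_matrix(theta0, model, cov, eps=1e-4):
    """Gaussian Fisher matrix F_ij = dC/dtheta_i . Cov^-1 . dC/dtheta_j
    for a data vector model(theta) with a fixed covariance cov."""
    theta0 = np.asarray(theta0, dtype=float)
    cov_inv = np.linalg.inv(cov)
    derivs = []
    for i in range(len(theta0)):
        step = np.zeros_like(theta0)
        step[i] = eps
        derivs.append((model(theta0 + step) - model(theta0 - step)) / (2 * eps))
    derivs = np.array(derivs)
    return derivs @ cov_inv @ derivs.T

# Toy "shear power spectrum": C_l = A * l^(-n) over a grid of multipoles l.
ell = np.arange(10, 1000, 10, dtype=float)
toy_model = lambda p: p[0] * ell ** (-p[1])
covariance = np.diag((0.05 * toy_model([1.0, 1.5])) ** 2)   # 5% diagonal errors

F = fisher_matrix([1.0, 1.5], toy_model, covariance)
marginal_errors = np.sqrt(np.diag(np.linalg.inv(F)))
print("1-sigma marginal errors on (A, n):", marginal_errors)
```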
Systematic review of discharge coding accuracy
Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.
2012-01-01
Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects, for example primary diagnoses accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3), P= 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302
How to select electrical end-use meters for proper measurement of DSM impact estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, M.
1994-12-31
Does metering actually provide higher accuracy impact estimates? The answer is sometimes yes, sometimes no. It depends on how the metered data will be used. DSM impact estimates can be achieved in a variety of ways, including engineering algorithms, modeling and statistical methods. Yet for all of these methods, impacts can be calculated as the difference in pre- and post-installation annual load shapes. Increasingly, end-use metering is being used to either adjust and calibrate a particular estimate method, or measure load shapes directly. It is therefore not surprising that metering has become synonymous with higher accuracy impact estimates. If metered data is used as a component in an estimating methodology, its relative contribution to accuracy can be analyzed through propagation of error or "POE" analysis. POE analysis is a framework which can be used to evaluate different metering options and their relative effects on cost and accuracy. If metered data is used to directly measure pre- and post-installation load shapes to calculate energy and demand impacts, then the accuracy of the whole metering process directly affects the accuracy of the impact estimate. This paper is devoted to the latter case, where the decision has been made to collect high-accuracy metered data of electrical energy and demand. The underlying assumption is that all meters can yield good results if applied within the scope of their limitations. The objective is to know the application, understand what meters are actually doing to measure and record power, and decide with confidence when a sophisticated meter is required, and when a less expensive type will suffice.
Increasing Army Supply Chain Performance: Using an Integrated End to End Metrics System
2017-01-01
Current metrics such as scheduled deliveries, delinquent contracts, PQDR/SDRs, forecasting accuracy, reliability, demand management, asset management strategies, and pipeline measures are identified and characterized by statistical analysis. The study proposed a framework and tool for inventory management based on factors such as these metrics.
Extending the accuracy of the SNAP interatomic potential form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Mitchell A.; Thompson, Aidan P.
The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. It is also argued that the quadratic SNAP form is a special case of an artificial neural network (ANN). The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similarly to ANN potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting, as measured by cross-validation analysis.
Extending the accuracy of the SNAP interatomic potential form
Wood, Mitchell A.; Thompson, Aidan P.
2018-03-28
The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. It is also argued that the quadratic SNAP form is a special case of an artificial neural network (ANN). The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similarly to ANN potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting, as measured by cross-validation analysis.
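In terms of the energy expression, the extension amounts to adding a quadratic form in the per-atom bispectrum vector B_i on top of the linear fit. The sketch below evaluates both forms for one atom with placeholder coefficients; computing the bispectrum components themselves (normally done inside LAMMPS) is outside its scope, and the coefficient values and component count are purely illustrative.

```python
import numpy as np

def snap_energy_linear(B, beta0, beta):
    """Linear SNAP: E_i = beta0 + beta . B_i."""
    return beta0 + beta @ B

def snap_energy_quadratic(B, beta0, beta, alpha):
    """Quadratic SNAP adds a term (1/2) B_i^T alpha B_i, analogous in structure
    to an embedding function acting on the bispectrum components."""
    return beta0 + beta @ B + 0.5 * B @ alpha @ B

rng = np.random.default_rng(1)
n_components = 30                       # placeholder number of bispectrum components
B = rng.normal(size=n_components)       # placeholder bispectrum vector for one atom
beta0 = -3.0
beta = rng.normal(scale=0.1, size=n_components)
alpha = rng.normal(scale=0.01, size=(n_components, n_components))
alpha = 0.5 * (alpha + alpha.T)         # symmetric quadratic coefficient matrix

print("linear    E_i:", snap_energy_linear(B, beta0, beta))
print("quadratic E_i:", snap_energy_quadratic(B, beta0, beta, alpha))
```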
Weissberger, Gali H; Strong, Jessica V; Stefanidis, Kayla B; Summers, Mathew J; Bondi, Mark W; Stricker, Nikki H
2017-12-01
With an increasing focus on biomarkers in dementia research, illustrating the role of neuropsychological assessment in detecting mild cognitive impairment (MCI) and Alzheimer's dementia (AD) is important. This systematic review and meta-analysis, conducted in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) standards, summarizes the sensitivity and specificity of memory measures in individuals with MCI and AD. Both meta-analytic and qualitative examination of AD versus healthy control (HC) studies (n = 47) revealed generally high sensitivity and specificity (≥ 80% for AD comparisons) for measures of immediate (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%), especially those involving word-list recall. Examination of MCI versus HC studies (n = 38) revealed generally lower diagnostic accuracy for both immediate (sensitivity = 72%, specificity = 81%) and delayed memory (sensitivity = 75%, specificity = 81%). Measures that differentiated AD from other conditions (n = 10 studies) yielded mixed results, with generally high sensitivity in the context of low or variable specificity. Results confirm that memory measures have high diagnostic accuracy for identification of AD, are promising but require further refinement for identification of MCI, and provide support for ongoing investigation of neuropsychological assessment as a cognitive biomarker of preclinical AD. Emphasizing diagnostic test accuracy statistics over null hypothesis testing in future studies will promote the ongoing use of neuropsychological tests as Alzheimer's disease research and clinical criteria increasingly rely upon cerebrospinal fluid (CSF) and neuroimaging biomarkers.
Mathematical Models for Doppler Measurements
NASA Technical Reports Server (NTRS)
Lear, William M.
1987-01-01
Error analysis increases precision of navigation. Report presents improved mathematical models of analysis of Doppler measurements and measurement errors of spacecraft navigation. To take advantage of potential navigational accuracy of Doppler measurements, precise equations relate measured cycle count to position and velocity. Drifts and random variations in transmitter and receiver oscillator frequencies taken into account. Mathematical models also adapted to aircraft navigation, radar, sonar, lidar, and interferometry.
Shi, Rong; Schraedley-Desmond, Pamela; Napel, Sandy; Olcott, Eric W; Jeffrey, R Brooke; Yee, Judy; Zalis, Michael E; Margolis, Daniel; Paik, David S; Sherbondy, Anthony J; Sundaram, Padmavathi; Beaulieu, Christopher F
2006-06-01
To retrospectively determine if three-dimensional (3D) viewing improves radiologists' accuracy in classifying true-positive (TP) and false-positive (FP) polyp candidates identified with computer-aided detection (CAD) and to determine candidate polyp features that are associated with classification accuracy, with known polyps serving as the reference standard. Institutional review board approval and informed consent were obtained; this study was HIPAA compliant. Forty-seven computed tomographic (CT) colonography data sets were obtained in 26 men and 10 women (age range, 42-76 years). Four radiologists classified 705 polyp candidates (53 TP candidates, 652 FP candidates) identified with CAD; initially, only two-dimensional images were used, but these were later supplemented with 3D rendering. Another radiologist unblinded to colonoscopy findings characterized the features of each candidate, assessed colon distention and preparation, and defined the true nature of FP candidates. Receiver operating characteristic curves were used to compare readers' performance, and repeated-measures analysis of variance was used to test features that affect interpretation. Use of 3D viewing improved classification accuracy for three readers and increased the area under the receiver operating characteristic curve to 0.96-0.97 (P<.001). For TP candidates, maximum polyp width (P=.038), polyp height (P=.019), and preparation (P=.004) significantly affected accuracy. For FP candidates, colonic segment (P=.007), attenuation (P<.001), surface smoothness (P<.001), distention (P=.034), preparation (P<.001), and true nature of candidate lesions (P<.001) significantly affected accuracy. Use of 3D viewing increases reader accuracy in the classification of polyp candidates identified with CAD. Polyp size and examination quality are significantly associated with accuracy. Copyright (c) RSNA, 2006.
Waide, Emily H; Tuggle, Christopher K; Serão, Nick V L; Schroyen, Martine; Hess, Andrew; Rowland, Raymond R R; Lunney, Joan K; Plastow, Graham; Dekkers, Jack C M
2018-02-01
Genomic prediction of the pig's response to the porcine reproductive and respiratory syndrome (PRRS) virus (PRRSV) would be a useful tool in the swine industry. This study investigated the accuracy of genomic prediction based on porcine SNP60 Beadchip data using training and validation datasets from populations with different genetic backgrounds that were challenged with different PRRSV isolates. Genomic prediction accuracy averaged 0.34 for viral load (VL) and 0.23 for weight gain (WG) following experimental PRRSV challenge, which demonstrates that genomic selection could be used to improve response to PRRSV infection. Training on WG data during infection with a less virulent PRRSV, KS06, resulted in poor accuracy of prediction for WG during infection with a more virulent PRRSV, NVSL. Inclusion of single nucleotide polymorphisms (SNPs) that are in linkage disequilibrium with a major quantitative trait locus (QTL) on chromosome 4 was vital for accurate prediction of VL. Overall, SNPs that were significantly associated with either trait in single SNP genome-wide association analysis were unable to predict the phenotypes with an accuracy as high as that obtained by using all genotyped SNPs across the genome. Inclusion of data from close relatives into the training population increased whole genome prediction accuracy by 33% for VL and by 37% for WG but did not affect the accuracy of prediction when using only SNPs in the major QTL region. Results show that genomic prediction of response to PRRSV infection is moderately accurate and, when using all SNPs on the porcine SNP60 Beadchip, is not very sensitive to differences in virulence of the PRRSV in training and validation populations. Including close relatives in the training population increased prediction accuracy when using the whole genome or SNPs other than those near a major QTL.
ROC curves in clinical chemistry: uses, misuses, and possible solutions.
Obuchowski, Nancy A; Lieber, Michael L; Wians, Frank H
2004-07-01
ROC curves have become the standard for describing and comparing the accuracy of diagnostic tests. Not surprisingly, ROC curves are used often by clinical chemists. Our aims were to observe how the accuracy of clinical laboratory diagnostic tests is assessed, compared, and reported in the literature; to identify common problems with the use of ROC curves; and to offer some possible solutions. We reviewed every original work using ROC curves and published in Clinical Chemistry in 2001 or 2002. For each article we recorded phase of the research, prospective or retrospective design, sample size, presence/absence of confidence intervals (CIs), nature of the statistical analysis, and major analysis problems. Of 58 articles, 31% were phase I (exploratory), 50% were phase II (challenge), and 19% were phase III (advanced) studies. The studies increased in sample size from phase I to III and showed a progression in the use of prospective designs. Most phase I studies were powered to assess diagnostic tests with ROC areas >/=0.70. Thirty-eight percent of studies failed to include CIs for diagnostic test accuracy or the CIs were constructed inappropriately. Thirty-three percent of studies provided insufficient analysis for comparing diagnostic tests. Other problems included dichotomization of the gold standard scale and inappropriate analysis of the equivalence of two diagnostic tests. We identify available software and make some suggestions for sample size determination, testing for equivalence in diagnostic accuracy, and alternatives to a dichotomous classification of a continuous-scale gold standard. More methodologic research is needed in areas specific to clinical chemistry.
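One of the simpler remedies for missing confidence intervals is a percentile bootstrap on the ROC area, sketched below with scikit-learn's roc_auc_score on made-up marker data. This only illustrates attaching a CI to an AUC estimate; it is not the specific methodology or software the article recommends.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the area under the ROC curve."""
    rng = np.random.default_rng(seed)
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    n = len(y_true)
    aucs = []
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:      # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], scores[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y_true, scores), (lo, hi)

# Toy example: a marker that imperfectly separates diseased (1) from healthy (0).
rng = np.random.default_rng(1)
y = np.r_[np.zeros(120), np.ones(80)]
x = np.r_[rng.normal(0, 1, 120), rng.normal(1, 1, 80)]
auc, (lo, hi) = bootstrap_auc_ci(y, x)
print(f"AUC = {auc:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```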
Hasslacher, Christoph; Kulozik, Felix; Platten, Isabel
2014-05-01
We investigated the analytical accuracy of 27 glucose monitoring systems (GMS) in a clinical setting, using the new ISO accuracy limits. In addition to measuring accuracy at blood glucose (BG) levels < 100 mg/dl and > 100 mg/dl, we also analyzed device performance with respect to these criteria at 5 specific BG level ranges, making it possible to further differentiate between devices with regard to overall performance. Carbohydrate meals and insulin injections were used to induce an increase or decrease in BG levels in 37 insulin-dependent patients. Capillary blood samples were collected at 10-minute intervals, and BG levels determined simultaneously using GMS and a laboratory-based method. Results obtained via both methods were analyzed according to the new ISO criteria. Only 12 of 27 devices tested met the overall requirements of the new ISO accuracy limits. When accuracy was assessed at BG levels < 100 mg/dl and > 100 mg/dl, criteria were met by 14 and 13 devices, respectively. A more detailed analysis involving 5 different BG level ranges revealed that 13 (48.1%) devices met the required criteria at BG levels between 50 and 150 mg/dl, whereas 19 (70.3%) met these criteria at BG levels above 250 mg/dl. The overall frequency of outliers was low. The assessment of analytical accuracy of GMS at a number of BG level ranges made it possible to further differentiate between devices with regard to overall performance, a process that is of particular importance given the user-centered nature of the devices' intended use. © 2014 Diabetes Technology Society.
NASA Astrophysics Data System (ADS)
Xu, Bing; Hu, Min; Zhang, Junhui
2015-09-01
Existing research on the flow ripple of axial piston pumps mainly focuses on the effect of the structure of parts on the flow ripple, and the parts are usually designed and optimized for rated working conditions. However, the pump usually has to work over large-scale and time-variant working conditions. This paper therefore focuses on the flow ripple characteristics of the pump and the analysis of its attainable test accuracy under varying steady-state and transient conditions across a wide range of operating parameters. First, a simulation model is constructed which takes the kinematics of the oil film within the friction pairs into account for higher accuracy. A test bed adopting the Secondary Source Method is then built to verify the model. The simulation and test results show that the angular position of the piston at which the peak flow ripple is produced varies with pressure. The pulsating amplitude and pulsation rate of the flow ripple increase with rising pressure and with the rate of change of pressure. For the pump working at constant speed, the flow pulsation rate decreases dramatically with increasing speed when the speed is below 27.78% of the maximum speed, and then shows only a small decreasing tendency as the speed increases further. As the rate of change of speed rises, the pulsating amplitude and pulsation rate of the flow ripple increase. As the swash plate angle increases, the pulsating amplitude of the flow ripple increases, whereas the flow pulsation rate decreases. Compared with the effect of pressure variation, the test accuracy of the flow ripple is more sensitive to speed variation: a test accuracy above 96.20% is attainable when the pulsating amplitude of pressure deviates within ±6% of the mean pressure, whereas with speed deviating within ±2% of the mean speed the attainable test accuracy of the flow ripple is above 93.07%. The model constructed in this research provides a method for determining the flow ripple characteristics of the pump and its attainable test accuracy under large-scale and time-variant working conditions. A discussion of the variation of the flow ripple and its attainable test accuracy over wide operating ranges of the pump is given as well.
Belay, T K; Dagnachew, B S; Boison, S A; Ådnøy, T
2018-03-28
Milk infrared spectra are routinely used for phenotyping traits of interest through links developed between the traits and spectra. Predicted individual traits are then used in genetic analyses for estimated breeding value (EBV) or for phenotypic predictions using a single-trait mixed model; this approach is referred to as indirect prediction (IP). An alternative approach [direct prediction (DP)] is a direct genetic analysis of (a reduced dimension of) the spectra using a multitrait model to predict multivariate EBV of the spectral components and, ultimately, also to predict the univariate EBV or phenotype for the traits of interest. We simulated 3 traits under different genetic (low: 0.10 to high: 0.90) and residual (zero to high: ±0.90) correlation scenarios between the 3 traits and assumed the first trait is a linear combination of the other 2 traits. The aim was to compare the IP and DP approaches for predictions of EBV and phenotypes under the different correlation scenarios. We also evaluated relationships between performances of the 2 approaches and the accuracy of calibration equations. Moreover, the effect of using different regression coefficients estimated from simulated phenotypes (βp), true breeding values (βg), and residuals (βr) on performance of the 2 approaches was evaluated. The simulated data contained 2,100 parents (100 sires and 2,000 cows) and 8,000 offspring (4 offspring per cow). Of the 8,000 observations, 2,000 were randomly selected and used to develop links between the first and the other 2 traits using partial least square (PLS) regression analysis. The different PLS regression coefficients, such as βp, βg, and βr, were used in subsequent predictions following the IP and DP approaches. We used BLUP analyses for the remaining 6,000 observations using the true (co)variance components that had been used for the simulation. Accuracy of prediction (of EBV and phenotype) was calculated as a correlation between predicted and true values from the simulations. The results showed that accuracies of EBV prediction were higher in the DP than in the IP approach. The reverse was true for accuracy of phenotypic prediction when using βp but not when using βg and βr, where accuracy of phenotypic prediction in the DP was slightly higher than in the IP approach. Within the DP approach, accuracies of EBV when using βg were higher than when using βp only at the low genetic correlation scenario. However, we found no differences in EBV prediction accuracy between the βp and βg in the IP approach. Accuracy of the calibration models increased with an increase in genetic and residual correlations between the traits. Performance of both approaches increased with an increase in accuracy of the calibration models. In conclusion, the DP approach is a good strategy for EBV prediction but not for phenotypic prediction, where the classical PLS regression-based equations or the IP approach provided better results. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
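The calibration step of the IP approach can be illustrated with scikit-learn's PLSRegression: fit the trait on spectra for a calibration set, then predict phenotypes for the remaining animals. The data below are synthetic, the number of PLS components is an arbitrary choice, and the subsequent single-trait mixed-model EBV step is not shown.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_animals, n_wavenumbers = 2000, 300
spectra = rng.normal(size=(n_animals, n_wavenumbers))        # stand-in milk IR spectra
true_beta = rng.normal(size=n_wavenumbers) * (rng.random(n_wavenumbers) < 0.1)
trait = spectra @ true_beta + rng.normal(scale=2.0, size=n_animals)  # trait of interest

# Calibration set (analogous to the 2,000 records used to build the PLS equations).
X_cal, X_new, y_cal, y_new = train_test_split(spectra, trait, test_size=0.5,
                                              random_state=0)
pls = PLSRegression(n_components=15)
pls.fit(X_cal, y_cal)

# Indirect prediction (IP): predicted phenotypes that would then enter a
# single-trait mixed model for EBV estimation.
y_pred = pls.predict(X_new).ravel()
accuracy = np.corrcoef(y_pred, y_new)[0, 1]
print(f"calibration accuracy (r between predicted and simulated trait): {accuracy:.2f}")
```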
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Lacy, Fred; Carriere, Patrick
2015-05-01
Sequential test algorithms are playing increasingly important roles in quickly detecting network intrusions such as portscanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time up to arbitrarily pre-specified accuracy.
Wang, Hubiao; Chai, Hua; Bao, Lifeng; Wang, Yong
2017-01-01
An experiment comparing the location accuracy of gravity matching-aided navigation between actual ocean conditions and simulation is very important for evaluating the feasibility and performance of an INS/gravity-integrated navigation system (IGNS) in underwater navigation. Based on a 1′ × 1′ marine gravity anomaly reference map and a multi-model adaptive Kalman filtering algorithm, a matching location experiment of IGNS was conducted using data obtained with a marine gravimeter. The location accuracy under actual ocean conditions was 2.83 nautical miles (n miles). Several groups of simulated marine gravity anomaly data were obtained by adding normally distributed random error N(u, σ²) with varying mean u and noise variance σ². Thereafter, the matching location of IGNS was simulated. The results show that changes in u had little effect on the location accuracy. However, an increase in σ² resulted in a significant decrease in the location accuracy. A comparison between the actual ocean experiment and the simulation along the same route demonstrated the effectiveness of the proposed simulation method and quantitative analysis results. In addition, given the gravimeter (1–2 mGal accuracy) and the reference map (resolution 1′ × 1′; accuracy 3–8 mGal), the location accuracy of IGNS reached ~1.0–3.0 n miles in the South China Sea. PMID:29261136
Wang, Hubiao; Wu, Lin; Chai, Hua; Bao, Lifeng; Wang, Yong
2017-12-20
An experiment comparing the location accuracy of gravity matching-aided navigation between actual ocean conditions and simulation is very important for evaluating the feasibility and performance of an INS/gravity-integrated navigation system (IGNS) in underwater navigation. Based on a 1' × 1' marine gravity anomaly reference map and a multi-model adaptive Kalman filtering algorithm, a matching location experiment of IGNS was conducted using data obtained with a marine gravimeter. The location accuracy under actual ocean conditions was 2.83 nautical miles (n miles). Several groups of simulated marine gravity anomaly data were obtained by adding normally distributed random error N(u, σ²) with varying mean u and noise variance σ². Thereafter, the matching location of IGNS was simulated. The results show that changes in u had little effect on the location accuracy. However, an increase in σ² resulted in a significant decrease in the location accuracy. A comparison between the actual ocean experiment and the simulation along the same route demonstrated the effectiveness of the proposed simulation method and quantitative analysis results. In addition, given the gravimeter (1-2 mGal accuracy) and the reference map (resolution 1' × 1'; accuracy 3-8 mGal), the location accuracy of IGNS reached ~1.0-3.0 n miles in the South China Sea.
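A toy version of the simulation component is sketched below: profiles sampled from a synthetic anomaly grid are perturbed with N(u, σ²) noise and matched back by minimum mean-squared difference. The smoothed random field, the exhaustive search, and the noise levels are all placeholders; the actual IGNS uses a real 1' × 1' reference map and multi-model adaptive Kalman filtering, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 200, 200
# Synthetic gravity-anomaly reference grid (smoothed random field, nominally in mGal).
rough = rng.normal(size=(ny, nx))
kernel = np.outer(np.hanning(15), np.hanning(15))
grav_map = np.real(np.fft.ifft2(np.fft.fft2(rough) * np.fft.fft2(kernel, s=(ny, nx))))

def match_profile(profile, grav_map):
    """Return the (row, col) where the along-track map profile best matches the
    measured profile, by minimum mean-squared difference (brute-force search)."""
    n = len(profile)
    best, best_err = None, np.inf
    for r in range(grav_map.shape[0]):
        for c in range(grav_map.shape[1] - n):
            err = np.mean((grav_map[r, c:c + n] - profile) ** 2)
            if err < best_err:
                best, best_err = (r, c), err
    return best

true_r, true_c, length = 120, 40, 30
u, sigma = 0.0, 2.0                                   # measurement bias and noise (mGal)
measured = grav_map[true_r, true_c:true_c + length] + rng.normal(u, sigma, length)
est_r, est_c = match_profile(measured, grav_map)
print(f"true start ({true_r},{true_c}), matched start ({est_r},{est_c})")
```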
Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning.
Wang, Jinhua; Yang, Xi; Cai, Hongmin; Tan, Wanchang; Jin, Cangzheng; Li, Li
2016-06-07
Microcalcification is an effective indicator of early breast cancer. To improve the diagnostic accuracy of microcalcifications, this study evaluates the performance of deep learning-based models on large datasets for its discrimination. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracies of microcalcifications and breast masses, either in isolation or combination, for classifying breast lesions. Performances were compared to benchmark models. Our deep learning model achieved a discriminative accuracy of 87.3% if microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach to detect microcalcifications and masses simultaneously. This may have clinical value for early detection and treatment of breast cancer.
NASA Astrophysics Data System (ADS)
Blake, Samantha L.; Walker, S. Hunter; Muddiman, David C.; Hinks, David; Beck, Keith R.
2011-12-01
Color Index Disperse Yellow 42 (DY42), a high-volume disperse dye for polyester, was used to compare the capabilities of the LTQ-Orbitrap XL and the LTQ-FT-ICR with respect to mass measurement accuracy (MMA), spectral accuracy, and sulfur counting. The results of this research will be used in the construction of a dye database for forensic purposes; the additional spectral information will increase the confidence in the identification of unknown dyes found in fibers at crime scenes. Initial LTQ-Orbitrap XL data showed MMAs greater than 3 ppm and poor spectral accuracy. Modification of several Orbitrap installation parameters (e.g., deflector voltage) resulted in a significant improvement of the data. The LTQ-FT-ICR and LTQ-Orbitrap XL (after installation parameters were modified) exhibited MMA ≤ 3 ppm, good spectral accuracy (χ2 values for the isotopic distribution ≤ 2), and were correctly able to ascertain the number of sulfur atoms in the compound at all resolving powers investigated for AGC targets of 5.00 × 105 and 1.00 × 106.
Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning
Wang, Jinhua; Yang, Xi; Cai, Hongmin; Tan, Wanchang; Jin, Cangzheng; Li, Li
2016-01-01
Microcalcification is an effective indicator of early breast cancer. To improve the diagnostic accuracy of microcalcifications, this study evaluates the performance of deep learning-based models on large datasets for its discrimination. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracies of microcalcifications and breast masses, either in isolation or combination, for classifying breast lesions. Performances were compared to benchmark models. Our deep learning model achieved a discriminative accuracy of 87.3% if microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach to detect microcalcifications and masses simultaneously. This may have clinical value for early detection and treatment of breast cancer. PMID:27273294
NASA Astrophysics Data System (ADS)
Boschetto, Davide; Di Claudio, Gianluca; Mirzaei, Hadis; Leong, Rupert; Grisan, Enrico
2016-03-01
Celiac disease (CD) is an immune-mediated enteropathy triggered by exposure to gluten and similar proteins, affecting genetically susceptible persons and increasing their risk of different complications. Small bowel mucosal damage due to CD involves various degrees of endoscopically relevant lesions, which are not easily recognized: their overall sensitivity and positive predictive values are poor even when zoom-endoscopy is used. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to qualitatively evaluate mucosal alterations such as a decrease in goblet cell density, presence of villous atrophy or crypt hypertrophy. We present a method for automatically classifying CLE images into three different classes: normal regions, villous atrophy and crypt hypertrophy. This classification is performed after a feature selection process, in which four features are extracted from each image, through the application of homomorphic filtering and border identification through Canny and Sobel operators. Three different classifiers have been tested on a dataset of 67 different images labeled by experts in three classes (normal, VA and CH): a linear approach, a Naive-Bayes quadratic approach and a standard quadratic analysis, all validated with ten-fold cross validation. Linear classification achieves 82.09% accuracy (class accuracies: 90.32% for normal villi, 82.35% for VA and 68.42% for CH, sensitivity: 0.68, specificity: 1.00), Naive Bayes analysis returns 83.58% accuracy (90.32% for normal villi, 70.59% for VA and 84.21% for CH, sensitivity: 0.84, specificity: 0.92), while the quadratic analysis achieves a final accuracy of 94.03% (96.77% accuracy for normal villi, 94.12% for VA and 89.47% for CH, sensitivity: 0.89, specificity: 0.98).
Zhang, Li; Tang, Min; Chen, Sipan; Lei, Xiaoyan; Zhang, Xiaoling; Huan, Yi
2017-12-01
This meta-analysis was undertaken to review the diagnostic accuracy of PI-RADS V2 for prostate cancer (PCa) detection with multiparametric MR (mp-MR). A comprehensive literature search of electronic databases was performed by two observers independently. Inclusion criteria were original research using the PI-RADS V2 system in reporting prostate MRI. The methodological quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Data necessary to complete 2 × 2 contingency tables were obtained from the included studies. Thirteen studies (2,049 patients) were analysed. This is an initial meta-analysis of PI-RADs V2 and the overall diagnostic accuracy in diagnosing PCa was as follows: pooled sensitivity, 0.85 (0.78-0.91); pooled specificity, 0.71 (0.60-0.80); pooled positive likelihood ratio (LR+), 2.92 (2.09-4.09); pooled negative likelihood ratio (LR-), 0.21 (0.14-0.31); pooled diagnostic odds ratio (DOR), 14.08 (7.93-25.01), respectively. Positive predictive values ranged from 0.54 to 0.97 and negative predictive values ranged from 0.26 to 0.92. Currently available evidence indicates that PI-RADS V2 appears to have good diagnostic accuracy in patients with PCa lesions with high sensitivity and moderate specificity. However, no recommendation regarding the best threshold can be provided because of heterogeneity. • PI-RADS V2 shows good diagnostic accuracy for PCa detection. • Initially pooled specificity of PI-RADS v2 remains moderate. • PCa detection is increased by experienced radiologists. • There is currently a high heterogeneity in prostate diagnostics with MRI.
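For orientation, the quantities reported above can be computed from per-study 2 × 2 counts; the sketch below does so by naively summing counts across a few hypothetical studies. A proper meta-analysis of diagnostic accuracy would instead fit a hierarchical model to respect between-study heterogeneity, so this is illustrative only.

```python
import numpy as np

# Hypothetical per-study 2x2 counts: (TP, FP, FN, TN) against the reference standard.
studies = np.array([
    [45, 12,  8, 60],
    [30,  9,  5, 41],
    [88, 25, 14, 70],
    [52, 18,  9, 83],
])

TP, FP, FN, TN = studies.sum(axis=0)        # naive pooling by summing counts
sens = TP / (TP + FN)
spec = TN / (TN + FP)
lr_pos = sens / (1 - spec)                  # positive likelihood ratio
lr_neg = (1 - sens) / spec                  # negative likelihood ratio
dor = lr_pos / lr_neg                       # diagnostic odds ratio

print(f"pooled sensitivity {sens:.2f}, specificity {spec:.2f}")
print(f"LR+ {lr_pos:.2f}, LR- {lr_neg:.2f}, DOR {dor:.1f}")
```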
Improving Genomic Prediction in Cassava Field Experiments Using Spatial Analysis.
Elias, Ani A; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc
2018-01-04
Cassava ( Manihot esculenta Crantz) is an important staple food in sub-Saharan Africa. Breeding experiments were conducted at the International Institute of Tropical Agriculture in cassava to select elite parents. Taking into account the heterogeneity in the field while evaluating these trials can increase the accuracy in estimation of breeding values. We used an exploratory approach using the parametric spatial kernels Power, Spherical, and Gaussian to determine the best kernel for a given scenario. The spatial kernel was fit simultaneously with a genomic kernel in a genomic selection model. Predictability of these models was tested through a 10-fold cross-validation method repeated five times. The best model was chosen as the one with the lowest prediction root mean squared error compared to that of the base model having no spatial kernel. Results from our real and simulated data studies indicated that predictability can be increased by accounting for spatial variation irrespective of the heritability of the trait. In real data scenarios we observed that the accuracy can be increased by a median value of 3.4%. Through simulations, we showed that a 21% increase in accuracy can be achieved. We also found that Range (row) directional spatial kernels, mostly Gaussian, explained the spatial variance in 71% of the scenarios when spatial correlation was significant. Copyright © 2018 Elias et al.
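A Gaussian spatial kernel of the kind explored here can be built directly from plot row and column coordinates and then fitted alongside a genomic kernel in a two-kernel mixed model. The sketch below constructs such a kernel for a hypothetical field layout; the range parameter and layout are placeholders, and the REML/genomic-selection fitting step itself is not shown.

```python
import numpy as np

def gaussian_spatial_kernel(rows, cols, range_param=3.0):
    """K[i, j] = exp(-(d_ij / range)^2), where d_ij is the Euclidean distance
    between plots i and j in (row, column) coordinates."""
    coords = np.column_stack([rows, cols]).astype(float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.exp(-((d / range_param) ** 2))

# Hypothetical field layout: 10 ranges (rows) x 20 columns of plots.
rows, cols = np.meshgrid(np.arange(10), np.arange(20), indexing="ij")
K_spatial = gaussian_spatial_kernel(rows.ravel(), cols.ravel(), range_param=3.0)
print(K_spatial.shape)        # (200, 200): one entry per pair of plots
```

In a genomic selection model the resulting matrix would serve as the covariance of a spatial random effect fitted jointly with the genomic relationship matrix, which is the role the spatial kernels play in the study above.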
Isma’eel, Hussain A.; Sakr, George E.; Almedawar, Mohamad M.; Fathallah, Jihan; Garabedian, Torkom; Eddine, Savo Bou Zein
2015-01-01
Background High dietary salt intake is directly linked to hypertension and cardiovascular diseases (CVDs). Predicting behaviors regarding salt intake habits is vital to guide interventions and increase their effectiveness. We aim to compare the accuracy of an artificial neural network (ANN) based tool that predicts behavior from key knowledge questions along with clinical data in a high cardiovascular risk cohort relative to the least squares models (LSM) method. Methods We collected knowledge, attitude and behavior data on 115 patients. A behavior score was calculated to classify patients' behavior towards reducing salt intake. Accuracy comparison between ANN and regression analysis was calculated using the bootstrap technique with 200 iterations. Results Starting from a 69-item questionnaire, a reduced model was developed and included eight knowledge items found to result in the highest accuracy of 62% (CI 58-67%). The best prediction accuracy in the full and reduced models was attained by ANN at 66% and 62%, respectively, compared to full and reduced LSM at 40% and 34%, respectively. The average relative increase in accuracy of ANN over LSM in the full and reduced models is 82% and 102%, respectively. Conclusions Using ANN modeling, we can predict salt reduction behaviors with 66% accuracy. The statistical model has been implemented in an online calculator and can be used in clinics to estimate the patient's behavior. This will support future research to further prove the clinical utility of this tool to guide therapeutic salt reduction interventions in high cardiovascular risk individuals. PMID:26090333
Macera, Annalisa; Lario, Chiara; Petracchini, Massimo; Gallo, Teresa; Regge, Daniele; Floriani, Irene; Ribero, Dario; Capussotti, Lorenzo; Cirillo, Stefano
2013-03-01
To compare the diagnostic accuracy and sensitivity of Gd-EOB-DTPA MRI and diffusion-weighted (DWI) imaging alone and in combination for detecting colorectal liver metastases in patients who had undergone preoperative chemotherapy. Thirty-two consecutive patients with a total of 166 liver lesions were retrospectively enrolled. Of the lesions, 144 (86.8 %) were metastatic at pathology. Three image sets (1, Gd-EOB-DTPA; 2, DWI; 3, combined Gd-EOB-DTPA and DWI) were independently reviewed by two observers. Statistical analysis was performed on a per-lesion basis. Evaluation of image set 1 correctly identified 127/166 lesions (accuracy 76.5 %; 95 % CI 69.3-82.7) and 106/144 metastases (sensitivity 73.6 %, 95 % CI 65.6-80.6). Evaluation of image set 2 correctly identified 108/166 (accuracy 65.1 %, 95 % CI 57.3-72.3) and 87/144 metastases (sensitivity of 60.4 %, 95 % CI 51.9-68.5). Evaluation of image set 3 correctly identified 148/166 (accuracy 89.2 %, 95 % CI 83.4-93.4) and 131/144 metastases (sensitivity 91 %, 95 % CI 85.1-95.1). Differences were statistically significant (P < 0.001). Notably, similar results were obtained analysing only small lesions (<1 cm). The combination of DWI with Gd-EOB-DTPA-enhanced MRI imaging significantly increases the diagnostic accuracy and sensitivity in patients with colorectal liver metastases treated with preoperative chemotherapy, and it is particularly effective in the detection of small lesions.
Game theoretic approach for cooperative feature extraction in camera networks
NASA Astrophysics Data System (ADS)
Redondi, Alessandro E. C.; Baroffio, Luca; Cesana, Matteo; Tagliasacchi, Marco
2016-07-01
Visual sensor networks (VSNs) consist of several camera nodes with wireless communication capabilities that can perform visual analysis tasks such as object identification, recognition, and tracking. Often, VSN deployments result in many camera nodes with overlapping fields of view. In the past, such redundancy has been exploited in two different ways: (1) to improve the accuracy/quality of the visual analysis task by exploiting multiview information or (2) to reduce the energy consumed for performing the visual task, by applying temporal scheduling techniques among the cameras. We propose a game theoretic framework based on the Nash bargaining solution to bridge the gap between the two aforementioned approaches. The key tenet of the proposed framework is for cameras to reduce the consumed energy in the analysis process by exploiting the redundancy in the reciprocal fields of view. Experimental results in both simulated and real-life scenarios confirm that the proposed scheme is able to increase the network lifetime, with a negligible loss in terms of visual analysis accuracy.
Liu, Hao; Chen, Weikai; Liu, Tao; Meng, Bin; Yang, Huilin
2017-01-01
To investigate the accuracy of pedicle screw placement based on preoperative computed tomography in comparison with intraoperative data set acquisition for spinal navigation system. The PubMed (MEDLINE), EMBASE, and Web of Science were systematically searched for the literature published up to September 2015. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-analysis guidelines. Statistical analysis was performed using the Review Manager 5.3. The dichotomous data for the pedicle violation rate was summarized using relative risk (RR) and 95% confidence intervals (CIs) with the fixed-effects model. The level of significance was set at p < 0.05. For this meta-analysis, seven studies used a total of 579 patients and 2981 screws. The results revealed that the accuracy of intraoperative data set acquisition method is significantly higher than preoperative one using 2 mm grading criteria (RR: 1.82, 95% CI: 1.09, 3.04, I 2 = 0%, p = 0.02). However, there was no significant difference between two kinds of methods at the 0 mm grading criteria (RR: 1.13, 95% CI: 0.88, 1.46, I 2 = 17%, p = 0.34). Using the 2-mm grading criteria, there was a higher accuracy of pedicle screw insertion in O-arm-assisted navigation than CT-based navigation method (RR: 1.96, 95% CI: 1.05, 3.64, I 2 = 0%, p = 0.03). The accuracy between CT-based navigation and two-dimensional-based navigation showed no significant difference (RR: 1.02, 95% CI: 0.35-3.03, I 2 = 0%, p = 0.97). The intraoperative data set acquisition method may decrease the incidence of perforated screws over 2 mm but not increase the number of screws fully contained within the pedicle compared to preoperative CT-based navigation system. A significantly higher accuracy of intraoperative (O-arm) than preoperative CT-based navigation was revealed using 2 mm grading criteria.
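As an illustration of the pooling step, the sketch below combines study-level log relative risks with inverse-variance fixed-effect weights. The screw-placement counts are hypothetical, and Review Manager's default Mantel-Haenszel weighting would give slightly different numbers.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study counts: (events, total) for the intraoperative- and
# preoperative-navigation groups (event = screw accurately placed).
intra = np.array([[190, 200], [145, 150], [260, 270]])
preop = np.array([[170, 200], [132, 150], [240, 270]])

log_rr = np.log((intra[:, 0] / intra[:, 1]) / (preop[:, 0] / preop[:, 1]))
var = (1 / intra[:, 0] - 1 / intra[:, 1]) + (1 / preop[:, 0] - 1 / preop[:, 1])
w = 1 / var                                            # inverse-variance weights

pooled = np.sum(w * log_rr) / np.sum(w)                # fixed-effect pooled log RR
se = np.sqrt(1 / np.sum(w))
ci = np.exp(pooled + np.array([-1.96, 1.96]) * se)
p = 2 * (1 - stats.norm.cdf(abs(pooled / se)))
print(f"pooled RR {np.exp(pooled):.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), p = {p:.3f}")
```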
Data accuracy assessment using enterprise architecture
NASA Astrophysics Data System (ADS)
Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias
2011-02-01
Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.
DiCanio, Christian; Nam, Hosung; Whalen, Douglas H.; Timothy Bunnell, H.; Amith, Jonathan D.; García, Rey Castillo
2013-01-01
While efforts to document endangered languages have steadily increased, the phonetic analysis of endangered language data remains a challenge. The transcription of large documentation corpora is, by itself, a tremendous feat. Yet, the process of segmentation remains a bottleneck for research with data of this kind. This paper examines whether a speech processing tool, forced alignment, can facilitate the segmentation task for small data sets, even when the target language differs from the training language. The authors also examined whether a phone set with contextualization outperforms a more general one. The accuracy of two forced aligners trained on English (hmalign and p2fa) was assessed using corpus data from Yoloxóchitl Mixtec. Overall, agreement performance was relatively good, with accuracy at 70.9% within 30 ms for hmalign and 65.7% within 30 ms for p2fa. Segmental and tonal categories influenced accuracy as well. For instance, additional stop allophones in hmalign's phone set aided alignment accuracy. Agreement differences between aligners also corresponded closely with the types of data on which the aligners were trained. Overall, using existing alignment systems was found to have potential for making phonetic analysis of small corpora more efficient, with more allophonic phone sets providing better agreement than general ones. PMID:23967953
DiCanio, Christian; Nam, Hosung; Whalen, Douglas H; Bunnell, H Timothy; Amith, Jonathan D; García, Rey Castillo
2013-09-01
While efforts to document endangered languages have steadily increased, the phonetic analysis of endangered language data remains a challenge. The transcription of large documentation corpora is, by itself, a tremendous feat. Yet, the process of segmentation remains a bottleneck for research with data of this kind. This paper examines whether a speech processing tool, forced alignment, can facilitate the segmentation task for small data sets, even when the target language differs from the training language. The authors also examined whether a phone set with contextualization outperforms a more general one. The accuracy of two forced aligners trained on English (hmalign and p2fa) was assessed using corpus data from Yoloxóchitl Mixtec. Overall, agreement performance was relatively good, with accuracy at 70.9% within 30 ms for hmalign and 65.7% within 30 ms for p2fa. Segmental and tonal categories influenced accuracy as well. For instance, additional stop allophones in hmalign's phone set aided alignment accuracy. Agreement differences between aligners also corresponded closely with the types of data on which the aligners were trained. Overall, using existing alignment systems was found to have potential for making phonetic analysis of small corpora more efficient, with more allophonic phone sets providing better agreement than general ones.
Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G
2017-07-01
Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement, demonstrating their superiority to conventional disc electrodes, in particular in the accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested that they may decrease the truncation error, resulting in more accurate Laplacian estimates compared with the currently used constant inter-ring distances configurations. This study assesses the statistical significance of the Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. A full factorial analysis of variance design was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation, computed using a finite element method model for each combination of levels of the three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed, and the obtained results suggest that all three factors have statistically significant effects in the model, confirming the potential of using inter-ring distances as a means of improving the accuracy of Laplacian estimation.
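For readers unfamiliar with the design described above, the following sketch shows how a full factorial ANOVA with one categorical and two numerical factors can be set up with statsmodels. The factor levels and the Relative Error values are invented for illustration and are not the finite element results of this study.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical table of FEM results: one row per combination of factor levels.
df = pd.DataFrame({
    "distances":   ["constant", "increasing"] * 6,        # categorical factor
    "diameter_mm": [10, 10, 20, 20, 30, 30] * 2,           # numerical factor
    "n_rings":     [2, 2, 2, 3, 3, 3] * 2,                 # numerical factor
    "relative_error": [0.12, 0.08, 0.10, 0.07, 0.09, 0.05,
                       0.11, 0.07, 0.09, 0.06, 0.08, 0.05],
})

# Full factorial model: main effects plus all interactions.
model = smf.ols("relative_error ~ C(distances) * diameter_mm * n_rings", data=df).fit()
print(sm.stats.anova_lm(model))   # ANOVA table (sequential sums of squares)
```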
Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis
Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.
2015-01-01
Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505
Biglands, John D; Ibraheem, Montasir; Magee, Derek R; Radjenovic, Aleksandra; Plein, Sven; Greenwood, John P
2018-05-01
This study sought to compare the diagnostic accuracy of visual and quantitative analyses of myocardial perfusion cardiovascular magnetic resonance against a reference standard of quantitative coronary angiography. Visual analysis of perfusion cardiovascular magnetic resonance studies for assessing myocardial perfusion has been shown to have high diagnostic accuracy for coronary artery disease. However, only a few small studies have assessed the diagnostic accuracy of quantitative myocardial perfusion. This retrospective study included 128 patients randomly selected from the CE-MARC (Clinical Evaluation of Magnetic Resonance Imaging in Coronary Heart Disease) study population such that the distribution of risk factors and disease status was proportionate to the full population. Visual analysis results of cardiovascular magnetic resonance perfusion images, by consensus of 2 expert readers, were taken from the original study reports. Quantitative myocardial blood flow estimates were obtained using Fermi-constrained deconvolution. The reference standard for myocardial ischemia was a quantitative coronary x-ray angiogram stenosis severity of ≥70% diameter in any coronary artery of >2 mm diameter, or ≥50% in the left main stem. Diagnostic performance was calculated using receiver-operating characteristic curve analysis. The area under the curve for visual analysis was 0.88 (95% confidence interval: 0.81 to 0.95) with a sensitivity of 81.0% (95% confidence interval: 69.1% to 92.8%) and specificity of 86.0% (95% confidence interval: 78.7% to 93.4%). For quantitative stress myocardial blood flow the area under the curve was 0.89 (95% confidence interval: 0.83 to 0.96) with a sensitivity of 87.5% (95% confidence interval: 77.3% to 97.7%) and specificity of 84.5% (95% confidence interval: 76.8% to 92.3%). There was no statistically significant difference between the diagnostic performance of quantitative and visual analyses (p = 0.72). Incorporating rest myocardial blood flow values to generate a myocardial perfusion reserve did not significantly increase the quantitative analysis area under the curve (p = 0.79). Quantitative perfusion has a high diagnostic accuracy for detecting coronary artery disease but is not superior to visual analysis. The incorporation of rest perfusion imaging does not improve diagnostic accuracy in quantitative perfusion analysis. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
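The comparison of quantitative and visual readings above rests on ROC analysis. A minimal sketch of that step follows, with simulated stand-in scores rather than CE-MARC data, and a simple bootstrap of the AUC difference in place of whichever paired AUC test the authors used.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical reference standard: 1 = significant stenosis on QCA, 0 = none.
truth = rng.integers(0, 2, size=128)
# Hypothetical per-patient scores: lower stress MBF and higher visual score suggest ischaemia.
stress_mbf = np.where(truth == 1, rng.normal(1.5, 0.5, 128), rng.normal(2.5, 0.5, 128))
visual_score = np.where(truth == 1, rng.normal(2.5, 1.0, 128), rng.normal(1.0, 1.0, 128))

auc_quant = roc_auc_score(truth, -stress_mbf)   # negate: lower flow = more abnormal
auc_visual = roc_auc_score(truth, visual_score)

# Bootstrap the AUC difference to gauge whether the two analyses differ.
diffs = []
for _ in range(2000):
    idx = rng.integers(0, len(truth), len(truth))
    if truth[idx].min() == truth[idx].max():
        continue   # need both classes in the resample
    diffs.append(roc_auc_score(truth[idx], -stress_mbf[idx]) -
                 roc_auc_score(truth[idx], visual_score[idx]))
print(auc_quant, auc_visual, np.percentile(diffs, [2.5, 97.5]))
```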
NASA Astrophysics Data System (ADS)
Siahpolo, Navid; Gerami, Mohsen; Vahdani, Reza
2016-09-01
Evaluating the capability of elastic Load Patterns (LPs), including seismic code patterns and modified LPs such as the Method of Modal Combination (MMC) and Upper Bound Pushover Analysis (UBPA), in estimating the inelastic demands of non-deteriorating steel moment frames is the main objective of this study. The Nonlinear Static Procedure (NSP) is implemented and its results are compared with Nonlinear Time History Analysis (NTHA). The focus is on the effects of near-fault pulse-like ground motions. The primary demands of interest are the maximum floor displacement, the maximum story drift angle over the height, the maximum global ductility, the maximum inter-story ductility and the capacity curves. Five types of LPs are selected and the inelastic demands are calculated under four levels of inter-story target ductility (μt) using OpenSees software. The results show that the increase in μt coincides with the migration of the peak demands over the height from the top to the bottom stories. Therefore, all LPs estimate the story lateral displacement accurately at the lower stories, and the results are almost independent of the number of stories. The inter-story drift angle (IDR) obtained from the MMC method has the best accuracy among the LPs, although its accuracy decreases with increasing μt, so that with an increasing number of stories the IDR is smaller or greater than the values resulting from NTHA, depending on the position of the captured results. In addition, increasing μt decreases the accuracy of all LPs in determining the critical story position. In this case, the MMC method shows the best agreement with the distribution of inter-story ductility over the height.
Tarzwell, Robert; Newberg, Andrew; Henderson, Theodore A.
2015-01-01
Background Traumatic brain injury (TBI) and posttraumatic stress disorder (PTSD) are highly heterogeneous and often present with overlapping symptomology, providing challenges in reliable classification and treatment. Single photon emission computed tomography (SPECT) may be advantageous in the diagnostic separation of these disorders when comorbid or clinically indistinct. Methods Subjects were selected from a multisite database, where rest and on-task SPECT scans were obtained on a large group of neuropsychiatric patients. Two groups were analyzed: Group 1 with TBI (n=104), PTSD (n=104) or both (n=73) closely matched for demographics and comorbidity, compared to each other and healthy controls (N=116); Group 2 with TBI (n=7,505), PTSD (n=1,077) or both (n=1,017) compared to n=11,147 without either. ROIs and visual readings (VRs) were analyzed using a binary logistic regression model with predicted probabilities inputted into a Receiver Operating Characteristic analysis to identify sensitivity, specificity, and accuracy. One-way ANOVA identified the most diagnostically significant regions of increased perfusion in PTSD compared to TBI. Analysis included a 10-fold cross validation of the protocol in the larger community sample (Group 2). Results For Group 1, baseline and on-task ROIs and VRs showed a high level of accuracy in differentiating PTSD, TBI and PTSD+TBI conditions. This carefully matched group separated with 100% sensitivity, specificity and accuracy for the ROI analysis and at 89% or above for VRs. Group 2 had lower sensitivity, specificity and accuracy, but still in a clinically relevant range. Compared to subjects with TBI, PTSD showed increases in the limbic regions, cingulum, basal ganglia, insula, thalamus, prefrontal cortex and temporal lobes. Conclusions This study demonstrates the ability to separate PTSD and TBI from healthy controls, from each other, and detect their co-occurrence, even in highly comorbid samples, using SPECT. This modality may offer a clinical option for aiding diagnosis and treatment of these conditions. PMID:26132293
Amen, Daniel G; Raji, Cyrus A; Willeumier, Kristen; Taylor, Derek; Tarzwell, Robert; Newberg, Andrew; Henderson, Theodore A
2015-01-01
Traumatic brain injury (TBI) and posttraumatic stress disorder (PTSD) are highly heterogeneous and often present with overlapping symptomology, providing challenges in reliable classification and treatment. Single photon emission computed tomography (SPECT) may be advantageous in the diagnostic separation of these disorders when comorbid or clinically indistinct. Subjects were selected from a multisite database, where rest and on-task SPECT scans were obtained on a large group of neuropsychiatric patients. Two groups were analyzed: Group 1 with TBI (n=104), PTSD (n=104) or both (n=73) closely matched for demographics and comorbidity, compared to each other and healthy controls (N=116); Group 2 with TBI (n=7,505), PTSD (n=1,077) or both (n=1,017) compared to n=11,147 without either. ROIs and visual readings (VRs) were analyzed using a binary logistic regression model with predicted probabilities inputted into a Receiver Operating Characteristic analysis to identify sensitivity, specificity, and accuracy. One-way ANOVA identified the most diagnostically significant regions of increased perfusion in PTSD compared to TBI. Analysis included a 10-fold cross validation of the protocol in the larger community sample (Group 2). For Group 1, baseline and on-task ROIs and VRs showed a high level of accuracy in differentiating PTSD, TBI and PTSD+TBI conditions. This carefully matched group separated with 100% sensitivity, specificity and accuracy for the ROI analysis and at 89% or above for VRs. Group 2 had lower sensitivity, specificity and accuracy, but still in a clinically relevant range. Compared to subjects with TBI, PTSD showed increases in the limbic regions, cingulum, basal ganglia, insula, thalamus, prefrontal cortex and temporal lobes. This study demonstrates the ability to separate PTSD and TBI from healthy controls, from each other, and detect their co-occurrence, even in highly comorbid samples, using SPECT. This modality may offer a clinical option for aiding diagnosis and treatment of these conditions.
ECG Rhythm Analysis with Expert and Learner-Generated Schemas in Novice Learners
ERIC Educational Resources Information Center
Blissett, Sarah; Cavalcanti, Rodrigo; Sibbald, Matthew
2015-01-01
Although instruction using expert-generated schemas is associated with higher diagnostic performance, implementation is resource intensive. Learner-generated schemas are an alternative, but may be limited by increases in cognitive load. We compared expert- and learner-generated schemas for learning ECG rhythm interpretation on diagnostic accuracy,…
Deep Learning Method for Denial of Service Attack Detection Based on Restricted Boltzmann Machine.
Imamverdiyev, Yadigar; Abdullayeva, Fargana
2018-06-01
In this article, the application of a deep learning method based on the Gaussian-Bernoulli type restricted Boltzmann machine (RBM) to the detection of denial of service (DoS) attacks is considered. To increase DoS attack detection accuracy, seven additional layers are added between the visible and the hidden layers of the RBM. Accurate results in DoS attack detection are obtained by optimization of the hyperparameters of the proposed deep RBM model. The form of the RBM that allows application to continuous data is used; in this type of RBM, the probability distribution of the visible layer is replaced by a Gaussian distribution. A comparative analysis of the accuracy of the proposed method against the Bernoulli-Bernoulli RBM, the Gaussian-Bernoulli RBM, and deep belief network type deep learning methods on DoS attack detection is provided. The detection accuracy of the methods is verified on the NSL-KDD data set. Higher accuracy is obtained from the proposed multilayer deep Gaussian-Bernoulli type RBM.
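The core building block described above is the Gaussian-Bernoulli RBM. The sketch below is a single-layer NumPy implementation trained with one step of contrastive divergence (CD-1) on standardised feature vectors; the deeper seven-layer architecture, hyperparameter optimisation and NSL-KDD preprocessing of the paper are not reproduced, and the feature matrix here is a random stand-in.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianBernoulliRBM:
    """Minimal Gaussian (visible) - Bernoulli (hidden) RBM trained with CD-1.
    Visible units are assumed to be standardised (zero mean, unit variance)."""
    def __init__(self, n_visible, n_hidden, lr=0.01):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_mean(self, h):
        # With unit-variance Gaussian visibles the reconstruction mean is linear.
        return h @ self.W.T + self.b_v

    def cd1_step(self, v0):
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_mean(h0_sample)            # mean-field reconstruction
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W   += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

# Usage: standardised NSL-KDD-like feature vectors (here random stand-ins).
X = rng.normal(size=(256, 41))                        # 41 features, as in NSL-KDD
rbm = GaussianBernoulliRBM(n_visible=41, n_hidden=64)
for epoch in range(10):
    rbm.cd1_step(X)
hidden_features = rbm.hidden_probs(X)                 # features for a downstream classifier
print(hidden_features.shape)
```

Stacking several such layers, as the abstract describes, would amount to repeating this training on the hidden activations of the previous layer.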
NASA Astrophysics Data System (ADS)
Georganos, Stefanos; Grippa, Tais; Vanhuysse, Sabine; Lennert, Moritz; Shimoni, Michal; Wolff, Eléonore
2017-10-01
This study evaluates the impact of three Feature Selection (FS) algorithms in an Object-Based Image Analysis (OBIA) framework for Very-High-Resolution (VHR) Land Use-Land Cover (LULC) classification. The three selected FS algorithms, Correlation-Based Feature Selection (CFS), Mean Decrease in Accuracy (MDA) and Random Forest (RF) based Recursive Feature Elimination (RFE), were tested on Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers. The results demonstrate that the accuracy of the SVM and KNN classifiers is the most sensitive to FS. The RF appeared to be more robust to high dimensionality, although a significant increase in accuracy was found by using the RFE method. In terms of classification accuracy, SVM performed best using FS, followed by RF and KNN. Finally, only a small number of features is needed to achieve the highest performance with each classifier. This study emphasizes the benefits of rigorous FS for maximizing performance, as well as for minimizing model complexity and easing interpretation.
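As a concrete illustration of RF-based Recursive Feature Elimination feeding SVM, KNN and RF classifiers, the following scikit-learn sketch uses synthetic object features in place of the VHR image segments used in the study; feature counts and parameters are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical object-based features (e.g., per-segment spectral/texture statistics).
X, y = make_classification(n_samples=500, n_features=80, n_informative=15, random_state=0)

# RF-based Recursive Feature Elimination keeps the 15 highest-ranked features.
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=15)
X_sel = selector.fit_transform(X, y)

for name, clf in [("SVM", make_pipeline(StandardScaler(), SVC())),
                  ("KNN", make_pipeline(StandardScaler(), KNeighborsClassifier())),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    full = cross_val_score(clf, X, y, cv=5).mean()        # all features
    red = cross_val_score(clf, X_sel, y, cv=5).mean()     # RFE-selected features
    print(f"{name}: all features {full:.3f} -> selected features {red:.3f}")
```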
Validation of luminescent source reconstruction using spectrally resolved bioluminescence images
NASA Astrophysics Data System (ADS)
Virostko, John M.; Powers, Alvin C.; Jansen, E. D.
2008-02-01
This study examines the accuracy of the Living Image® Software 3D Analysis Package (Xenogen, Alameda, CA) in reconstruction of light source depth and intensity. Constant intensity light sources were placed in an optically homogeneous medium (chicken breast). Spectrally filtered images were taken at 560, 580, 600, 620, 640, and 660 nanometers. The Living Image® Software 3D Analysis Package was employed to reconstruct source depth and intensity using these spectrally filtered images. For sources shallower than the mean free path of light there was proportionally higher inaccuracy in reconstruction. For sources deeper than the mean free path, the average error in depth and intensity reconstruction was less than 4% and 12%, respectively. The ability to distinguish multiple sources decreased with increasing source depth and typically required a spatial separation of twice the depth. The constant intensity light sources were also implanted in mice to examine the effect of optical inhomogeneity. The reconstruction accuracy suffered in inhomogeneous tissue with accuracy influenced by the choice of optical properties used in reconstruction.
Sensitivity analysis of pulse pileup model parameter in photon counting detectors
NASA Astrophysics Data System (ADS)
Shunhavanich, Picha; Pelc, Norbert J.
2017-03-01
Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models is dependent on the assumptions used, including the estimated pulse shape, whose parameter values could differ from the actual physical ones. As the incident flux increases and the corrections become more significant, the accuracy of the needed parameter values becomes more crucial. In this work, the sensitivity to model parameter accuracy is analyzed for the pileup model of Taguchi et al. The spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis material thicknesses are estimated by minimizing the negative log-likelihood with Poisson or multivariate Gaussian distributions. From the simulation results, we find that the accuracy of the deadtime, the height of the pulse's negative tail, and the timing to the end of the pulse are more important than most other parameters, and that they matter more with increasing count rate. This result can help facilitate further work on parameter calibration.
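The material-decomposition step referred to above, minimising a Poisson negative log-likelihood, can be illustrated with a deliberately simplified forward model. The two-bin Beer-Lambert setup and attenuation values below are invented and ignore pileup entirely, so this shows only the estimation machinery, not the pileup model of Taguchi et al.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-bin, two-material forward model (Beer-Lambert, no pileup).
mu = np.array([[0.60, 0.25],    # attenuation of material 1 in bins 1, 2  [1/cm]
               [0.20, 0.15]])   # attenuation of material 2 in bins 1, 2  [1/cm]
I0 = np.array([5e4, 4e4])       # incident counts per energy bin

def expected_counts(t):
    """Expected counts per bin for basis material thicknesses t (cm)."""
    return I0 * np.exp(-(mu.T @ t))

true_t = np.array([2.0, 5.0])
y = np.random.default_rng(1).poisson(expected_counts(true_t))   # noisy measurement

def neg_log_likelihood(t):
    lam = expected_counts(t)
    return np.sum(lam - y * np.log(lam))     # Poisson NLL up to a constant

est = minimize(neg_log_likelihood, x0=[1.0, 1.0], bounds=[(0, None), (0, None)])
print("estimated thicknesses:", est.x)
```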
van Wijck, Kim; Bessems, Babs AFM; van Eijk, Hans MH; Buurman, Wim A; Dejong, Cornelis HC; Lenaerts, Kaatje
2012-01-01
Background Increased intestinal permeability is an important measure of disease activity and prognosis. Currently, many permeability tests are available and no consensus has been reached as to which test is most suitable. The aim of this study was to compare urinary probe excretion and the diagnostic accuracy of a polyethylene glycol (PEG) assay and a dual sugar assay in a double-blinded crossover study. Methods Gastrointestinal permeability was measured in nine volunteers using PEG 400, PEG 1500, and PEG 3350 or lactulose-rhamnose. On 4 separate days, permeability was analyzed after oral intake of placebo or indomethacin, a drug known to increase intestinal permeability. Plasma intestinal fatty acid binding protein and calprotectin levels were determined to verify compromised intestinal integrity after indomethacin consumption. Urinary samples were collected at baseline, hourly up to 5 hours after probe intake, and between 5 and 24 hours. Urinary excretion of PEG and sugars was determined using high-pressure liquid chromatography-evaporative light scattering detection and liquid chromatography-mass spectrometry, respectively. Results Intake of indomethacin increased plasma intestinal fatty acid-binding protein and calprotectin levels, reflecting loss of intestinal integrity and inflammation. In this state of indomethacin-induced gastrointestinal compromise, urinary excretion of the three PEG probes and lactulose increased compared with placebo. Urinary PEG 400 excretion, the PEG 3350/PEG 400 ratio, and the lactulose/rhamnose ratio could accurately detect indomethacin-induced increases in gastrointestinal permeability, especially within 2 hours of probe intake. Conclusion Hourly urinary excretion and diagnostic accuracy of PEG and sugar probes show high concordance for detection of indomethacin-induced increases in gastrointestinal permeability. This comparative study improves our knowledge of permeability analysis in man by providing a clear overview of both tests and demonstrates equivalent performance in the current setting. PMID:22888267
Geolocation and Pointing Accuracy Analysis for the WindSat Sensor
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.
2006-01-01
Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.
Iannaccone, Mario; Gili, Sebastiano; De Filippo, Ovidio; D'Amico, Salvatore; Gagliardi, Marco; Bertaina, Maurizio; Mazzilli, Silvia; Rettegno, Sara; Bongiovanni, Federica; Gatti, Paolo; Ugo, Fabrizio; Boccuzzi, Giacomo G; Colangelo, Salvatore; Prato, Silvia; Moretti, Claudio; D'Amico, Maurizio; Noussan, Patrizia; Garbo, Roberto; Hildick-Smith, David; Gaita, Fiorenzo; D'Ascenzo, Fabrizio
2018-01-01
Non-invasive ischaemia tests and biomarkers are widely adopted to rule out acute coronary syndrome in the emergency department. Their diagnostic accuracy has yet to be precisely defined. Medline, Cochrane Library CENTRAL, EMBASE and Biomed Central were systematically screened (start date 1 September 2016, end date 1 December 2016). Prospective studies (observational or randomised controlled trials) comparing functional/imaging or biochemical tests for patients presenting with chest pain to the emergency department were included. Overall, 77 studies were included, for a total of 49,541 patients (mean age 59.9 years). Fast and six-hour highly sensitive troponin T protocols did not show significant differences in their ability to detect acute coronary syndromes, as they reported a sensitivity and specificity of 0.89 (95% confidence interval 0.79-0.94) and 0.84 (0.74-0.9) vs 0.89 (0.78-0.94) and 0.83 (0.70-0.92), respectively. The addition of copeptin to troponin increased sensitivity and reduced specificity, without improving diagnostic accuracy. The diagnostic value of non-invasive tests for patients without troponin increase was tested. Coronary computed tomography showed the highest level of diagnostic accuracy (sensitivity 0.93 (0.81-0.98) and specificity 0.90 (0.93-0.94)), along with myocardial perfusion scintigraphy (sensitivity 0.85 (0.77-0.91) and specificity 0.92 (0.83-0.96)). Stress echography was inferior to coronary computed tomography but non-inferior to myocardial perfusion scintigraphy, while exercise testing showed the lowest level of diagnostic accuracy. Fast and six-hour highly sensitive troponin T protocols provide an overall similar level of diagnostic accuracy to detect acute coronary syndrome. Among the non-invasive ischaemia tests for patients without troponin increase, coronary computed tomography and myocardial perfusion scintigraphy showed the highest sensitivity and specificity.
Accuracy analysis of point cloud modeling for evaluating concrete specimens
NASA Astrophysics Data System (ADS)
D'Amico, Nicolas; Yu, Tzuyang
2017-04-01
Photogrammetric methods such as structure from motion (SFM) have the capability to acquire accurate information about geometric features, surface cracks, and mechanical properties of specimens and structures in civil engineering. Conventional approaches to verifying the accuracy of photogrammetric models usually require the use of other optical techniques such as LiDAR. In this paper, the geometric accuracy of photogrammetric modeling is investigated by studying the effects of the number of photos, the radius of curvature, and the point cloud density (PCD) on estimated lengths, areas, volumes, and different stress states of concrete cylinders and panels. Four plain concrete cylinders and two plain mortar panels were used for the study. A commercially available mobile phone camera was used to collect all photographs. Agisoft PhotoScan software was applied in photogrammetric modeling of all concrete specimens. From our results, it was found that increasing the number of photos does not necessarily improve the geometric accuracy of point cloud models (PCM). It was also found that the effect of radius of curvature is not significant when compared with those of the number of photos and PCD. A PCD threshold of 15.7194 pts/cm³ is proposed to construct reliable and accurate PCM for condition assessment. At this PCD threshold, all errors for estimating lengths, areas, and volumes were less than 5%. Finally, from the study of the mechanical behavior of a plain concrete cylinder, we found that the increase in stress level inside the concrete cylinder can be captured by the increase in radial strain in its PCM.
Simulation approach for the evaluation of tracking accuracy in radiotherapy: a preliminary study.
Tanaka, Rie; Ichikawa, Katsuhiro; Mori, Shinichiro; Sanada, Sigeru
2013-01-01
Real-time tumor tracking in external radiotherapy can be achieved by diagnostic (kV) X-ray imaging with a dynamic flat-panel detector (FPD). It is important to keep the patient dose as low as possible while maintaining tracking accuracy. A simulation approach would be helpful to optimize the imaging conditions. This study was performed to develop a computer simulation platform based on a noise property of the imaging system for the evaluation of tracking accuracy at any noise level. Flat-field images were obtained using a direct-type dynamic FPD, and noise power spectrum (NPS) analysis was performed. The relationship between incident quantum number and pixel value was addressed, and a conversion function was created. The pixel values were converted into a map of quantum number using the conversion function, and the map was then input into the random number generator to simulate image noise. Simulation images were provided at different noise levels by changing the incident quantum numbers. Subsequently, an implanted marker was tracked automatically and the maximum tracking errors were calculated at different noise levels. The results indicated that the maximum tracking error increased with decreasing incident quantum number in flat-field images with an implanted marker. In addition, the range of errors increased with decreasing incident quantum number. The present method could be used to determine the relationship between image noise and tracking accuracy. The results indicated that the simulation approach would aid in determining exposure dose conditions according to the necessary tracking accuracy.
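A minimal sketch of the noise-simulation idea described above follows: pixel values are converted to incident quanta through an assumed linear conversion function, quantum noise is re-sampled at a reduced incident quantum number, and the result is mapped back to pixel values. The gain and offset are hypothetical, and pure Poisson statistics are a simplification of the NPS-based noise property used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear conversion between incident quantum number and pixel value
# (hypothetical gain/offset; in the paper the function is derived from NPS analysis).
gain, offset = 0.5, 100.0

def pixels_to_quanta(pixel_values):
    return np.clip((pixel_values - offset) / gain, 0.0, None)

def quanta_to_pixels(quanta):
    return gain * quanta + offset

def simulate_noisy_image(reference_image, dose_fraction):
    """Resample quantum noise after scaling the incident quantum number."""
    q_mean = pixels_to_quanta(reference_image) * dose_fraction
    q_noisy = rng.poisson(q_mean)            # Poisson-distributed quantum noise
    return quanta_to_pixels(q_noisy)

flat_field = np.full((256, 256), 600.0)      # stand-in flat-field frame
flat_field[120:136, 120:136] = 400.0         # dark square mimicking an implanted marker
for f in (1.0, 0.5, 0.25):
    img = simulate_noisy_image(flat_field, dose_fraction=f)
    # Relative noise in a background region grows as the incident quantum number drops.
    print(f, img[:100, :100].std() / img[:100, :100].mean())
```

A marker-tracking routine could then be run on such images at each noise level to relate tracking error to incident quantum number, as the abstract describes.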
Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl
2014-01-01
Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated both in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using the 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach showed higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
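A toy numerical sketch of the two calibration schemes compared above is given below. The raw currents, reference glucose values and zero background current are illustrative assumptions and do not reproduce the SCGM1 algorithm itself.

```python
import numpy as np

# Hypothetical raw sensor currents (nA) and paired reference glucose values (mg/dL).
raw = np.array([12.0, 30.0])      # two calibration samples
ref = np.array([70.0, 180.0])

# Two-point calibration: slope and intercept fixed by two paired samples.
slope_2pt = (ref[1] - ref[0]) / (raw[1] - raw[0])
intercept_2pt = ref[0] - slope_2pt * raw[0]

# One-point calibration: assume the background (zero-glucose) current is known,
# here taken as zero, so a single paired sample fixes the sensitivity.
background_current = 0.0
slope_1pt = ref[0] / (raw[0] - background_current)

def to_glucose_2pt(current):
    return slope_2pt * current + intercept_2pt

def to_glucose_1pt(current):
    return slope_1pt * (current - background_current)

new_current = 18.0
print(to_glucose_2pt(new_current), to_glucose_1pt(new_current))
```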
Increasing Deception Detection Accuracy with Strategic Questioning
ERIC Educational Resources Information Center
Levine, Timothy R.; Shaw, Allison; Shulman, Hillary C.
2010-01-01
One explanation for the finding of slightly above-chance accuracy in detecting deception experiments is limited variance in sender transparency. The current study sought to increase accuracy by increasing variance in sender transparency with strategic interrogative questioning. Participants (total N = 128) observed cheaters and noncheaters who…
Dall'Ara, E; Barber, D; Viceconti, M
2014-09-22
The accurate measurement of local strain is necessary to study bone mechanics and to validate micro computed tomography (µCT) based finite element (FE) models at the tissue scale. Digital volume correlation (DVC) has been used to provide a volumetric estimation of local strain in trabecular bone samples with reasonable accuracy. However, nothing has been reported so far for µCT based analysis of cortical bone. The goal of this study was to evaluate the accuracy and precision of a deformable registration method for prediction of local zero-strains in bovine cortical and trabecular bone samples. The accuracy and precision were analyzed by comparing virtually displaced scans, repeated scans without any repositioning of the sample in the scanner, and repeated scans with repositioning of the samples. The analysis showed that both precision and accuracy errors decrease with increasing size of the region analyzed, following power laws. The intrinsic noise of the images was found to be the main source of error among those investigated. The results, once extrapolated for larger regions of interest that are typically used in the literature, were in most cases better than those previously reported. For a nodal spacing equal to 50 voxels (498 µm), the accuracy and precision ranges were 425-692 µε and 202-394 µε, respectively. In conclusion, it was shown that the proposed method can be used to study the local deformation of cortical and trabecular bone loaded beyond yield, if a sufficiently high nodal spacing is used. Copyright © 2014 Elsevier Ltd. All rights reserved.
Braun, Tobias; Grüneberg, Christian; Thiel, Christian
2018-04-01
Routine screening for frailty could be used for the timely identification of older people with increased vulnerability and corresponding medical needs. The aim of this study was the translation and cross-cultural adaptation of the PRISMA-7 questionnaire, the FRAIL scale and the Groningen Frailty Indicator (GFI) into the German language, as well as a preliminary analysis of the diagnostic test accuracy of these instruments used to screen for frailty. A diagnostic cross-sectional study was performed. The instrument translation into German followed a standardized process. Prefinal versions were clinically tested on older adults who gave structured in-depth feedback on the scales in order to compile a final revision of the German language scale versions. For the analysis of diagnostic test accuracy (criterion validity), PRISMA-7, FRAIL scale and GFI were considered the index tests. Two reference tests were applied to assess frailty, either based on Fried's model of a Physical Frailty Phenotype or on the model of deficit accumulation, expressed in a Frailty Index. Prefinal versions of the German translations of each instrument were produced and completed by 52 older participants (mean age: 73 ± 6 years). Some minor issues concerning comprehensibility and semantics of the scales were identified and resolved. Using the Physical Frailty Phenotype (frailty prevalence: 4%) criteria as a reference standard, the accuracy of the instruments was excellent (area under the curve AUC >0.90). Taking the Frailty Index (frailty prevalence: 23%) as the reference standard, the accuracy was good (AUC between 0.73 and 0.88). German language versions of PRISMA-7, FRAIL scale and GFI have been established and preliminary results indicate sufficient diagnostic test accuracy that needs to be further established.
Iwazawa, J; Ohue, S; Hashimoto, N; Mitani, T
2014-02-01
To compare the accuracy of computer software analysis using three different target-definition protocols to detect tumour feeder vessels for transarterial chemoembolization of hepatocellular carcinoma. C-arm computed tomography (CT) data were analysed for 81 tumours from 57 patients who had undergone chemoembolization using software-assisted detection of tumour feeders. Small, medium, and large-sized targets were manually defined for each tumour. The tumour feeder was verified when the target tumour was enhanced on selective C-arm CT of the investigated vessel during chemoembolization. The sensitivity, specificity, and accuracy of the three protocols were evaluated and compared. One hundred and eight feeder vessels supplying 81 lesions were detected. The sensitivity of the small, medium, and large target protocols was 79.8%, 91.7%, and 96.3%, respectively; specificity was 95%, 88%, and 50%, respectively; and accuracy was 87.5%, 89.9%, and 74%, respectively. The sensitivity was significantly higher for the medium (p = 0.003) and large (p < 0.001) target protocols than for the small target protocol. The specificity and accuracy were higher for the small (p < 0.001 and p < 0.001, respectively) and medium (p < 0.001 and p < 0.001, respectively) target protocols than for the large target protocol. The overall accuracy of software-assisted automated feeder analysis in transarterial chemoembolization for hepatocellular carcinoma is affected by the target definition size. A large target definition increases sensitivity and decreases specificity in detecting tumour feeders. A target size equivalent to the tumour size most accurately predicts tumour feeders. Copyright © 2013 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Modi, Payal; Glavis-Bloom, Justin; Nasrin, Sabiha; Guy, Allysia; Chowa, Erika P; Dvor, Nathan; Dworkis, Daniel A; Oh, Michael; Silvestri, David M; Strasberg, Stephen; Rege, Soham; Noble, Vicki E; Alam, Nur H; Levine, Adam C
2016-01-01
Although dehydration from diarrhea is a leading cause of morbidity and mortality in children under five, existing methods of assessing dehydration status in children have limited accuracy. To assess the accuracy of point-of-care ultrasound measurement of the aorta-to-IVC ratio as a predictor of dehydration in children. A prospective cohort study of children under five years with acute diarrhea was conducted in the rehydration unit of the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b). Ultrasound measurements of aorta-to-IVC ratio and dehydrated weight were obtained on patient arrival. Percent weight change was monitored during rehydration to classify children as having "some dehydration" with weight change 3-9% or "severe dehydration" with weight change > 9%. Logistic regression analysis and Receiver-Operator Characteristic (ROC) curves were used to evaluate the accuracy of aorta-to-IVC ratio as a predictor of dehydration severity. 850 children were enrolled, of which 771 were included in the final analysis. Aorta to IVC ratio was a significant predictor of the percent dehydration in children with acute diarrhea, with each 1-point increase in the aorta to IVC ratio predicting a 1.1% increase in the percent dehydration of the child. However, the area under the ROC curve (0.60), sensitivity (67%), and specificity (49%), for predicting severe dehydration were all poor. Point-of-care ultrasound of the aorta-to-IVC ratio was statistically associated with volume status, but was not accurate enough to be used as an independent screening tool for dehydration in children under five years presenting with acute diarrhea in a resource-limited setting.
Modi, Payal; Glavis-Bloom, Justin; Nasrin, Sabiha; Guy, Allysia; Rege, Soham; Noble, Vicki E.; Alam, Nur H.; Levine, Adam C.
2016-01-01
Introduction Although dehydration from diarrhea is a leading cause of morbidity and mortality in children under five, existing methods of assessing dehydration status in children have limited accuracy. Objective To assess the accuracy of point-of-care ultrasound measurement of the aorta-to-IVC ratio as a predictor of dehydration in children. Methods A prospective cohort study of children under five years with acute diarrhea was conducted in the rehydration unit of the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b). Ultrasound measurements of aorta-to-IVC ratio and dehydrated weight were obtained on patient arrival. Percent weight change was monitored during rehydration to classify children as having “some dehydration” with weight change 3–9% or “severe dehydration” with weight change > 9%. Logistic regression analysis and Receiver-Operator Characteristic (ROC) curves were used to evaluate the accuracy of aorta-to-IVC ratio as a predictor of dehydration severity. Results 850 children were enrolled, of which 771 were included in the final analysis. Aorta to IVC ratio was a significant predictor of the percent dehydration in children with acute diarrhea, with each 1-point increase in the aorta to IVC ratio predicting a 1.1% increase in the percent dehydration of the child. However, the area under the ROC curve (0.60), sensitivity (67%), and specificity (49%), for predicting severe dehydration were all poor. Conclusions Point-of-care ultrasound of the aorta-to-IVC ratio was statistically associated with volume status, but was not accurate enough to be used as an independent screening tool for dehydration in children under five years presenting with acute diarrhea in a resource-limited setting. PMID:26766306
Shaw, Patricia; Zhang, Vivien; Metallinos-Katsaras, Elizabeth
2009-02-01
The objective of this study was to examine the quantity and accuracy of dietary supplement (DS) information through magazines with high adolescent readership. Eight (8) magazines (3 teen and 5 adult with high teen readership) were selected. A content analysis for DS was conducted on advertisements and editorials (i.e., articles, advice columns, and bulletins). Noted claims/cautions regarding DS were evaluated for accuracy using Medlineplus.gov and Naturaldatabase.com. Claims for dietary supplements with three or more types of ingredients and those in advertisements were not evaluated. Advertisements were evaluated with respect to size, referenced research, testimonials, and Dietary Supplement Health and Education Act of 1994 (DSHEA) warning visibility. Eighty-eight (88) issues from eight magazines yielded 238 DS references. Fifty (50) issues from five magazines contained no DS reference. Among teen magazines, seven DS references were found: five in the editorials and two in advertisements. In adult magazines, 231 DS references were found: 139 in editorials and 92 in advertisements. Of the 88 claims evaluated, 15% were accurate, 23% were inconclusive, 3% were inaccurate, 5% were partially accurate, and 55% were unsubstantiated (i.e., not listed in reference databases). Of the 94 DS evaluated in advertisements, 43% were full page or more, 79% did not have a DSHEA warning visible, 46% referred to research, and 32% used testimonials. Teen magazines contain few references to DS, none accurate. Adult magazines that have a high teen readership contain a substantial amount of DS information with questionable accuracy, raising concerns that this information may increase the chances of inappropriate DS use by adolescents, thereby increasing the potential for unexpected effects or possible harm.
Grande, Antonio Jose; Reid, Hamish; Thomas, Emma; Foster, Charlie; Darton, Thomas C
2016-08-01
Dengue fever is a ubiquitous arboviral infection in tropical and sub-tropical regions, whose incidence has increased over recent decades. In the absence of a rapid point of care test, the clinical diagnosis of dengue is complex. The World Health Organisation has outlined diagnostic criteria for making the diagnosis of dengue infection, which include the use of the tourniquet test (TT). To assess the quality of the evidence supporting the use of the TT and perform a diagnostic accuracy meta-analysis comparing the TT to antibody response measured by ELISA. A comprehensive literature search was conducted in the following databases to April, 2016: MEDLINE (PubMed), EMBASE, Cochrane Central Register of Controlled Trials, BIOSIS, Web of Science, SCOPUS. Studies comparing the diagnostic accuracy of the tourniquet test with ELISA for the diagnosis of dengue were included. Two independent authors extracted data using a standardized form. A total of 16 studies with 28,739 participants were included in the meta-analysis. Pooled sensitivity for dengue diagnosis by TT was 58% (95% Confidence Interval (CI), 43%-71%) and the specificity was 71% (95% CI, 60%-80%). In the subgroup analysis, sensitivity for non-severe dengue diagnosis was 55% (95% CI, 52%-59%) and the specificity was 63% (95% CI, 60%-66%), whilst sensitivity for dengue hemorrhagic fever diagnosis was 62% (95% CI, 53%-71%) and the specificity was 60% (95% CI, 48%-70%). Receiver-operator characteristics demonstrated a test accuracy (AUC) of 0.70 (95% CI, 0.66-0.74). The tourniquet test is widely used in resource-poor settings despite currently available evidence demonstrating only a marginal benefit in making a diagnosis of dengue infection alone. The protocol for this systematic review was registered at CRD42015020323.
Double-Windows-Based Motion Recognition in Multi-Floor Buildings Assisted by a Built-In Barometer.
Liu, Maolin; Li, Huaiyu; Wang, Yuan; Li, Fei; Chen, Xiuwan
2018-04-01
Accelerometers, gyroscopes and magnetometers in smartphones are often used to recognize human motions. Since it is difficult to distinguish between vertical motions and horizontal motions in the data provided by these built-in sensors, the vertical motion recognition accuracy is relatively low. The emergence of a built-in barometer in smartphones improves the accuracy of motion recognition in the vertical direction. However, there is a lack of quantitative analysis and modelling of the barometer signals, which is the basis of the barometer's application to motion recognition, and a problem of imbalanced data also exists. This work focuses on using the barometers inside smartphones for vertical motion recognition in multi-floor buildings through modelling and feature extraction of pressure signals. A novel double-windows pressure feature extraction method, which adopts two sliding time windows of different lengths, is proposed to balance recognition accuracy and response time. Then, a random forest classifier correlation rule is further designed to weaken the impact of imbalanced data on recognition accuracy. The results demonstrate that the recognition accuracy can reach 95.05% when the pressure features and the improved random forest classifier are adopted. Specifically, the recognition accuracy of the stair and elevator motions is significantly improved with enhanced response time. The proposed approach proves effective and accurate, providing a robust strategy for increasing the recognition accuracy of vertical motions.
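The double-window idea can be sketched as follows: a short and a long sliding window are run over the pressure trace, simple pressure-change features are extracted from each, and a random forest classifies the motion. The window lengths, the synthetic trace and the feature set are illustrative assumptions, and the paper's correlation rule for handling imbalanced data is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def double_window_features(pressure, short_win=20, long_win=100):
    """Pressure-change features from a short and a long sliding window (in samples).
    Window lengths are illustrative, not the values used in the paper."""
    feats = []
    for t in range(long_win, len(pressure)):
        short = pressure[t - short_win:t]
        long_ = pressure[t - long_win:t]
        feats.append([
            short[-1] - short[0],    # short-term pressure change (fast response)
            long_[-1] - long_[0],    # long-term pressure change (robust to noise)
            np.std(long_),           # variability over the long window
        ])
    return np.array(feats)

rng = np.random.default_rng(0)
# Hypothetical barometer trace (hPa): standing still, then ascending stairs (pressure drop).
pressure = np.concatenate([1013 + rng.normal(0, 0.02, 600),
                           1013 - np.linspace(0, 0.5, 600) + rng.normal(0, 0.02, 600)])
labels = np.array([0] * 600 + [1] * 600)[100:]      # 0 = static, 1 = ascending

X = double_window_features(pressure)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.score(X, labels))
```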
Andrews, Kimberly R; Adams, Jennifer R; Cassirer, E Frances; Plowright, Raina K; Gardner, Colby; Dwire, Maggie; Hohenlohe, Paul A; Waits, Lisette P
2018-06-05
The development of high-throughput sequencing technologies is dramatically increasing the use of single nucleotide polymorphisms (SNPs) across the field of genetics, but most parentage studies of wild populations still rely on microsatellites. We developed a bioinformatic pipeline for identifying SNP panels that are informative for parentage analysis from restriction site-associated DNA sequencing (RADseq) data. This pipeline includes options for analysis with or without a reference genome, and provides methods to maximize genotyping accuracy and select sets of unlinked loci that have high statistical power. We test this pipeline on small populations of Mexican gray wolf and bighorn sheep, for which parentage analyses are expected to be challenging due to low genetic diversity and the presence of many closely related individuals. We compare the results of parentage analysis across SNP panels generated with or without the use of a reference genome, and between SNPs and microsatellites. For Mexican gray wolf, we conducted parentage analyses for 30 pups from a single cohort where samples were available from 64% of possible mothers and 53% of possible fathers, and the accuracy of parentage assignments could be estimated because true identities of parents were known a priori based on field data. For bighorn sheep, we conducted maternity analyses for 39 lambs from five cohorts where 77% of possible mothers were sampled, but true identities of parents were unknown. Analyses with and without a reference genome produced SNP panels with >95% parentage assignment accuracy for Mexican gray wolf, outperforming microsatellites at 78% accuracy. Maternity assignments were completely consistent across all SNP panels for the bighorn sheep, and were 74.4% consistent with assignments from microsatellites. Accuracy and consistency of parentage analysis were not reduced when using as few as 284 SNPs for Mexican gray wolf and 142 SNPs for bighorn sheep, indicating our pipeline can be used to develop SNP genotyping assays for parentage analysis with relatively small numbers of loci. This article is protected by copyright. All rights reserved.
Stenz, Ulrich; Hartmann, Jens; Paffenholz, Jens-André; Neumann, Ingo
2017-08-16
Terrestrial laser scanning (TLS) is an efficient solution to collect large-scale data. The efficiency can be increased by combining TLS with additional sensors in a TLS-based multi-sensor-system (MSS). The uncertainty of scanned points is not homogeneous and depends on many different influencing factors. These include the sensor properties, referencing, scan geometry (e.g., distance and angle of incidence), environmental conditions (e.g., atmospheric conditions) and the scanned object (e.g., material, color and reflectance, etc.). The paper presents methods, infrastructure and results for the validation of the suitability of TLS and TLS-based MSS. The main aspects are the backward modelling of the uncertainty on the basis of reference data (e.g., point clouds) with superordinate accuracy, and the provision of a suitable environment/infrastructure (e.g., the calibration process of the targets for the registration of laser scanner and laser tracker data in a common coordinate system with high accuracy). In this context, superordinate accuracy means that the accuracy of the acquired reference data is better by a factor of 10 than that of the data of the validated TLS and TLS-based MSS. These aspects play an important role in engineering geodesy, where the aimed-for accuracy lies in the range of a few mm or less.
NASA Technical Reports Server (NTRS)
Fagan, Matthew E.; Defries, Ruth S.; Sesnie, Steven E.; Arroyo-Mora, J. Pablo; Soto, Carlomagno; Singh, Aditya; Townsend, Philip A.; Chazdon, Robin L.
2015-01-01
An efficient means to map tree plantations is needed to detect tropical land use change and evaluate reforestation projects. To analyze recent tree plantation expansion in northeastern Costa Rica, we examined the potential of combining moderate-resolution hyperspectral imagery (2005 HyMap mosaic) with multitemporal, multispectral data (Landsat) to accurately classify (1) general forest types and (2) tree plantations by species composition. Following a linear discriminant analysis to reduce data dimensionality, we compared four Random Forest classification models: hyperspectral data (HD) alone; HD plus interannual spectral metrics; HD plus a multitemporal forest regrowth classification; and all three models combined. The fourth, combined model achieved overall accuracy of 88.5%. Adding multitemporal data significantly improved classification accuracy (p < 0.0001) of all forest types, although the effect on tree plantation accuracy was modest. The hyperspectral data alone classified six species of tree plantations with 75% to 93% producer's accuracy; adding multitemporal spectral data increased accuracy only for two species with dense canopies. Non-native tree species had higher classification accuracy overall and made up the majority of tree plantations in this landscape. Our results indicate that combining occasionally acquired hyperspectral data with widely available multitemporal satellite imagery enhances mapping and monitoring of reforestation in tropical landscapes.
NASA Astrophysics Data System (ADS)
Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue
2015-04-01
Unmanned Aerial Vehicle (UAV) has been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA generates some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and the training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger and a similar rule was obtained according to the pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.
Statistical analysis to assess automated level of suspicion scoring methods in breast ultrasound
NASA Astrophysics Data System (ADS)
Galperin, Michael
2003-05-01
A well-defined rule-based system has been developed for scoring the Level of Suspicion (LOS) from 0 to 5 based on a qualitative lexicon describing the ultrasound appearance of a breast lesion. The purpose of the research is to assess and select one of the automated quantitative LOS scoring methods developed during preliminary studies on reducing benign biopsies. The study used a Computer Aided Imaging System (CAIS) to improve the uniformity and accuracy of applying the LOS scheme by automatically detecting, analyzing and comparing breast masses. The overall goal is to reduce biopsies on masses with lower levels of suspicion, rather than to increase the accuracy of diagnosis of cancers (which will require biopsy anyway). In complex cyst and fibroadenoma cases, experienced radiologists were up to 50% less certain in true negatives than CAIS. Full correlation analysis was applied to determine which of the proposed LOS quantification methods serves CAIS accuracy best. This paper presents current results of applying statistical analysis to automated LOS scoring quantification for breast masses with known biopsy results. It was found that the First Order Ranking method yielded the most accurate results. The CAIS system (Image Companion, Data Companion software) is developed by Almen Laboratories and was used to achieve the results.
Islam, Md Rabiul; Tanaka, Toshihisa; Molla, Md Khademul Islam
2018-05-08
When designing a multiclass motor imagery-based brain-computer interface (MI-BCI), the so-called tangent space mapping (TSM) method, which utilizes the geometric structure of covariance matrices, is an effective technique. This paper aims to introduce a method using TSM for finding accurate operational frequency bands related to the brain activities associated with MI tasks. A multichannel electroencephalogram (EEG) signal is decomposed into multiple subbands, and tangent features are then estimated on each subband. An effective mutual information analysis-based algorithm is implemented to select subbands containing features capable of improving motor imagery classification accuracy. The features obtained from the selected subbands are combined to form the feature space. A principal component analysis-based approach is employed to reduce the feature dimension, and the classification is then accomplished by a support vector machine (SVM). Offline analysis demonstrates that the proposed multiband tangent space mapping with subband selection (MTSMS) approach outperforms state-of-the-art methods. It achieves the highest average classification accuracy for all datasets (BCI competition dataset 2a, IIIa, IIIb, and dataset JK-HH1). The increased classification accuracy of MI tasks with the proposed MTSMS approach can yield effective implementation of BCI. The mutual information-based subband selection method is implemented to tune the operational frequency bands to represent actual motor imagery tasks.
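A minimal sketch of the tangent space mapping step is given below: trial covariance matrices are log-mapped at a reference point and vectorised into features. For simplicity the reference is the arithmetic mean rather than the Riemannian mean typically used, the EEG trials are random stand-ins, and the subband decomposition, mutual-information subband selection, PCA and SVM stages of the paper are omitted.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def tangent_space_features(covs, ref):
    """Map SPD covariance matrices to the tangent space at a reference matrix and
    vectorise the upper triangle (the usual TSM feature construction)."""
    ref_inv_sqrt = np.linalg.inv(np.real(sqrtm(ref)))
    n = ref.shape[0]
    iu = np.triu_indices(n)
    w = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))    # weight off-diagonal terms
    feats = []
    for C in covs:
        S = np.real(logm(ref_inv_sqrt @ C @ ref_inv_sqrt))   # log-map to tangent space
        feats.append(w * S[iu])
    return np.array(feats)

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 40, 8, 500
# Hypothetical band-pass-filtered EEG trials: (trials, channels, samples).
trials = rng.normal(size=(n_trials, n_channels, n_samples))
covs = np.array([t @ t.T / n_samples for t in trials])
ref = covs.mean(axis=0)             # arithmetic mean as a simple reference point
X = tangent_space_features(covs, ref)
print(X.shape)                       # (40, 36) tangent features per trial
```

In a full pipeline, one such feature block would be computed per subband and the selected blocks concatenated before dimensionality reduction and classification.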
Leaché, Adam D.; Banbury, Barbara L.; Felsenstein, Joseph; de Oca, Adrián nieto-Montes; Stamatakis, Alexandros
2015-01-01
Single nucleotide polymorphisms (SNPs) are useful markers for phylogenetic studies owing in part to their ubiquity throughout the genome and ease of collection. Restriction site associated DNA sequencing (RADseq) methods are becoming increasingly popular for SNP data collection, but an assessment of the best practices for using these data in phylogenetics is lacking. We use computer simulations, and new double digest RADseq (ddRADseq) data for the lizard family Phrynosomatidae, to investigate the accuracy of RAD loci for phylogenetic inference. We compare the two primary ways RAD loci are used during phylogenetic analysis, namely the analysis of full sequences (i.e., SNPs together with invariant sites) or the analysis of SNPs on their own after excluding invariant sites. We find that using full sequences rather than just SNPs is preferable from the perspectives of branch length and topological accuracy, but not of computational time. We introduce two new acquisition bias corrections for dealing with alignments composed exclusively of SNPs, a conditional likelihood method and a reconstituted DNA approach. The conditional likelihood method conditions on the presence of variable characters only (the number of invariant sites that are unsampled but known to exist is not considered), while the reconstituted DNA approach requires the user to specify the exact number of unsampled invariant sites prior to the analysis. Under simulation, branch length biases increase with the amount of missing data for both acquisition bias correction methods, but branch length accuracy is much improved in the reconstituted DNA approach compared to the conditional likelihood approach. Phylogenetic analyses of the empirical data using concatenation or a coalescent-based species tree approach provide strong support for many of the accepted relationships among phrynosomatid lizards, suggesting that RAD loci contain useful phylogenetic signal across a range of divergence times despite the presence of missing data. Phylogenetic analysis of RAD loci requires careful attention to model assumptions, especially if downstream analyses depend on branch lengths. PMID:26227865
Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.
2015-01-01
Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208
NASA Astrophysics Data System (ADS)
Liebold, F.; Maas, H.-G.
2018-05-01
This paper deals with the determination of crack widths of concrete beams during load tests from monocular image sequences. The procedure starts in a reference image of the probe with suitable surface texture under zero load, where a large number of points is defined by an interest operator. Then a triangulated irregular network is established to connect the points. Image sequences are recorded during load tests with the load increasing continuously or stepwise, or at intermittently changing load. The vertices of the triangles are tracked through the consecutive images of the sequence with sub-pixel accuracy by least squares matching. All triangles are then analyzed for changes by principal strain calculation. For each triangle showing significant strain, a crack width is computed by a thorough geometric analysis of the relative movement of the vertices.
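The principal-strain step of the procedure, computing strain for each tracked triangle from its vertex coordinates in the reference and deformed images, can be illustrated with a short sketch: the deformation gradient is recovered from the two edge vectors and the Green-Lagrange strain tensor is diagonalized. The vertex coordinates are hypothetical, and the subsequent crack-width geometry is not reproduced here.

```python
import numpy as np

def principal_strains(ref_tri, def_tri):
    """Principal (Green-Lagrange) strains of one triangle tracked between a
    reference image and a deformed image (vertices given as 3x2 arrays)."""
    # Edge vectors spanned from the first vertex, in both configurations
    E = np.column_stack([ref_tri[1] - ref_tri[0], ref_tri[2] - ref_tri[0]])
    D = np.column_stack([def_tri[1] - def_tri[0], def_tri[2] - def_tri[0]])
    F = D @ np.linalg.inv(E)             # deformation gradient
    G = 0.5 * (F.T @ F - np.eye(2))      # Green-Lagrange strain tensor
    return np.linalg.eigvalsh(G)         # principal strains, ascending

# Hypothetical vertex coordinates (pixels) before and after loading
ref = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
deformed = np.array([[0.0, 0.0], [10.4, 0.0], [0.0, 10.05]])
print(principal_strains(ref, deformed))  # a large positive strain flags a possible crack
```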
Devaney, John; Barrett, Brian; Barrett, Frank; Redmond, John; O'Halloran, John
2015-01-01
Quantification of spatial and temporal changes in forest cover is an essential component of forest monitoring programs. Due to its cloud free capability, Synthetic Aperture Radar (SAR) is an ideal source of information on forest dynamics in countries with near-constant cloud-cover. However, few studies have investigated the use of SAR for forest cover estimation in landscapes with highly sparse and fragmented forest cover. In this study, the potential use of L-band SAR for forest cover estimation in two regions (Longford and Sligo) in Ireland is investigated and compared to forest cover estimates derived from three national (Forestry2010, Prime2, National Forest Inventory), one pan-European (Forest Map 2006) and one global forest cover (Global Forest Change) product. Two machine-learning approaches (Random Forests and Extremely Randomised Trees) are evaluated. Both Random Forests and Extremely Randomised Trees classification accuracies were high (98.1–98.5%), with differences between the two classifiers being minimal (<0.5%). Increasing levels of post classification filtering led to a decrease in estimated forest area and an increase in overall accuracy of SAR-derived forest cover maps. All forest cover products were evaluated using an independent validation dataset. For the Longford region, the highest overall accuracy was recorded with the Forestry2010 dataset (97.42%) whereas in Sligo, highest overall accuracy was obtained for the Prime2 dataset (97.43%), although accuracies of SAR-derived forest maps were comparable. Our findings indicate that spaceborne radar could aid inventories in regions with low levels of forest cover in fragmented landscapes. The reduced accuracies observed for the global and pan-continental forest cover maps in comparison to national and SAR-derived forest maps indicate that caution should be exercised when applying these datasets for national reporting. PMID:26262681
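As a rough illustration of the classifier comparison described above, the sketch below trains Random Forests and Extremely Randomised Trees on synthetic per-pixel features with scikit-learn; the features and labels are placeholders, not the study's SAR data.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-pixel SAR features (e.g. backscatter, texture) and
# forest / non-forest labels derived from reference data.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=2000) > 0).astype(int)

classifiers = [
    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("Extremely Randomised Trees", ExtraTreesClassifier(n_estimators=200, random_state=0)),
]
for name, clf in classifiers:
    acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold cross-validated accuracy
    print(f"{name}: {acc:.3f}")
```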
Torres-Dowdall, J.; Farmer, A.H.; Bucher, E.H.; Rye, R.O.; Landis, G.
2009-01-01
Stable isotope analyses have revolutionized the study of migratory connectivity. However, as with all tools, their limitations must be understood in order to derive the maximum benefit of a particular application. The goal of this study was to evaluate the efficacy of stable isotopes of C, N, H, O and S for assigning known-origin feathers to the molting sites of migrant shorebird species wintering and breeding in Argentina. Specific objectives were to: 1) compare the efficacy of the technique for studying shorebird species with different migration patterns, life histories and habitat-use patterns; 2) evaluate the grouping of species with similar migration and habitat use patterns in a single analysis to potentially improve prediction accuracy; and 3) evaluate the potential gains in prediction accuracy that might be achieved from using multiple stable isotopes. The efficacy of stable isotope ratios to determine origin was found to vary with species. While one species (White-rumped Sandpiper, Calidris fuscicollis) had high levels of accuracy assigning samples to known origin (91% of samples correctly assigned), another (Collared Plover, Charadrius collaris) showed low levels of accuracy (52% of samples correctly assigned). Intra-individual variability may account for this difference in efficacy. The prediction model for three species with similar migration and habitat-use patterns performed poorly compared with the model for just one of the species (71% versus 91% of samples correctly assigned). Thus, combining multiple sympatric species may not improve model prediction accuracy. Increasing the number of stable isotopes in the analyses increased the accuracy of assigning shorebirds to their molting origin, but the best combination - involving a subset of all the isotopes analyzed - varied among species.
Fuller, Douglas O; Parenti, Michael S; Gad, Adel M; Beier, John C
2012-01-01
Irrigation along the Nile River has resulted in dramatic changes in the biophysical environment of Upper Egypt. In this study we used a combination of MODIS 250 m NDVI data and Landsat imagery to identify areas that changed from 2001-2008 as a result of irrigation and water-level fluctuations in the Nile River and nearby water bodies. We used two different methods of time series analysis, principal component analysis (PCA) and harmonic decomposition (HD), applied to the MODIS 250 m NDVI images to derive simple three-class land cover maps, and then assessed their accuracy using a set of reference polygons derived from 30 m Landsat 5 and 7 imagery. We also compared our MODIS 250 m maps against a new MODIS global land cover product (MOD12Q1 collection 5) to assess whether regionally specific mapping approaches are superior to a standard global product. Results showed that the accuracy of the PCA-based product was greater than the accuracy of either the HD or MOD12Q1 products for the years 2001, 2003, and 2008. However, the accuracy of the PCA product was only slightly better than that of MOD12Q1 for 2001 and 2003. Overall, the results suggest that our PCA-based approach produces a high level of user and producer accuracies, although the MOD12Q1 product also showed consistently high accuracy. Overlay of the 2001-2008 PCA-based maps showed a net increase of 12 129 ha of irrigated vegetation, with the largest increase found from 2006-2008 around the Districts of Edfu and Kom Ombo. This result was unexpected in light of ambitious government plans to develop 336 000 ha of irrigated agriculture around the Toshka Lakes.
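A minimal sketch of the PCA-based time-series approach is shown below: the NDVI stack is treated as a pixels-by-dates matrix, compressed with principal components, and grouped into three classes. The data and the unsupervised grouping are placeholders for illustration only; the study's actual classification rules and reference polygons are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical NDVI stack: n_pixels rows, one column per composite date.
rng = np.random.default_rng(2)
ndvi = rng.uniform(0.0, 0.9, size=(10000, 184))  # e.g. 8 years of 16-day composites

# Principal components compress the temporal dimension; the leading
# components capture the dominant phenological signals.
pcs = PCA(n_components=3).fit_transform(ndvi)

# A simple unsupervised grouping of the PC scores into three land cover
# classes (e.g. irrigated vegetation, desert, water) stands in for the
# three-class map legend used in the study.
classes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
print(np.bincount(classes))  # pixel count per class
```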
[Enzymatic analysis of the quality of foodstuffs].
Kolesnov, A Iu
1997-01-01
Enzymatic analysis is a distinct branch of enzymology and analytical chemistry and has become one of the most important methodologies used in food analysis. It allows the quick, reliable determination of many food ingredients. Often these ingredients cannot be determined by conventional methods or, if methods are available, only with limited accuracy. Today, methods of enzymatic analysis are increasingly used in the investigation of foodstuffs. Enzymatic measurement techniques are used in industrial, scientific, and food inspection laboratories for quality analysis. This article describes the requirements of an optimal analytical method: specificity, sample preparation, assay performance, precision, sensitivity, time requirement, analysis cost, and safety of reagents.
Alaska national hydrography dataset positional accuracy assessment study
Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy
2013-01-01
Initial visual assessments show a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive, because it requires collecting independent, well-defined test points; however, quantitative analysis of relative positional error is feasible.
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy
Tessler, Morgan P.
2016-01-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings. PMID:27221370
Schueler, Sabine; Walther, Stefan; Schuetz, Georg M; Schlattmann, Peter; Dewey, Marc
2013-06-01
To evaluate the methodological quality of diagnostic accuracy studies on coronary computed tomography (CT) angiography using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) tool. Each QUADAS item was individually defined to adapt it to the special requirements of studies on coronary CT angiography. Two independent investigators analysed 118 studies using 12 QUADAS items. Meta-regression and pooled analyses were performed to identify possible effects of methodological quality items on estimates of diagnostic accuracy. The overall methodological quality of coronary CT studies was merely moderate. They fulfilled a median of 7.5 out of 12 items. Only 9 of the 118 studies fulfilled more than 75 % of possible QUADAS items. One QUADAS item ("Uninterpretable Results") showed a significant influence (P = 0.02) on estimates of diagnostic accuracy with "no fulfilment" increasing specificity from 86 to 90 %. Furthermore, pooled analysis revealed that each QUADAS item that is not fulfilled has the potential to change estimates of diagnostic accuracy. The methodological quality of studies investigating the diagnostic accuracy of non-invasive coronary CT is only moderate and was found to affect the sensitivity and specificity. An improvement is highly desirable because good methodology is crucial for adequately assessing imaging technologies. • Good methodological quality is a basic requirement in diagnostic accuracy studies. • Most coronary CT angiography studies have only been of moderate design quality. • Weak methodological quality will affect the sensitivity and specificity. • No improvement in methodological quality was observed over time. • Authors should consider the QUADAS checklist when undertaking accuracy studies.
Submillisecond Optical Knife-Edge Testing
NASA Technical Reports Server (NTRS)
Thurlow, P.
1983-01-01
Fast computer-controlled sampling of the optical knife-edge response (KER) signal increases the accuracy of optical-system aberration measurement. Submicrosecond-response detectors in the optical focal plane convert optical signals to electrical signals, which are digitized, sampled, and fed into a computer for storage and subsequent analysis. The optical data are virtually free of the effects of index-of-refraction gradients.
USDA-ARS?s Scientific Manuscript database
Single-step Genomic Best Linear Unbiased Predictor (ssGBLUP) has become increasingly popular for whole-genome prediction (WGP) modeling as it utilizes any available pedigree and phenotypes on both genotyped and non-genotyped individuals. The WGP accuracy of ssGBLUP has been demonstrated to be greate...
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
NASA Astrophysics Data System (ADS)
Bratic, G.; Brovelli, M. A.; Molinari, M. E.
2018-04-01
The availability of thematic maps has significantly increased over the last few years. Validation of these maps is a key factor in assessing their suitability for different applications. The evaluation of the accuracy of classified data is carried out through a comparison with a reference dataset and the generation of a confusion matrix from which many quality indexes can be derived. In this work, an ad hoc free and open source Python tool was implemented to automatically compute all the matrix confusion-derived accuracy indexes proposed by literature. The tool was integrated into GRASS GIS environment and successfully applied to evaluate the quality of three high-resolution global datasets (GlobeLand30, Global Urban Footprint, Global Human Settlement Layer Built-Up Grid) in the Lombardy Region area (Italy). In addition to the most commonly used accuracy measures, e.g. overall accuracy and Kappa, the tool allowed to compute and investigate less known indexes such as the Ground Truth and the Classification Success Index. The promising tool will be further extended with spatial autocorrelation analysis functions and made available to researcher and user community.
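The confusion-matrix-derived indexes mentioned above (overall accuracy, producer's and user's accuracy, Cohen's kappa) can be computed with a few lines of NumPy, as sketched below; the matrix values are hypothetical, and this is not the GRASS GIS tool itself.

```python
import numpy as np

def accuracy_indexes(cm):
    """Common accuracy measures from a confusion matrix
    (rows = reference classes, columns = classified classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    overall = diag.sum() / n
    producer = diag / cm.sum(axis=1)      # complement of omission error
    user = diag / cm.sum(axis=0)          # complement of commission error
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
    kappa = (overall - expected) / (1.0 - expected)
    return overall, producer, user, kappa

cm = [[850, 30, 20],
      [ 40, 700, 60],
      [ 10,  50, 240]]
overall, producer, user, kappa = accuracy_indexes(cm)
print(f"overall={overall:.3f}, kappa={kappa:.3f}")
print("producer's accuracy:", np.round(producer, 3))
print("user's accuracy:", np.round(user, 3))
```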
NASA Astrophysics Data System (ADS)
Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas
2016-12-01
This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (Air-PLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly-developed technique for Custom baseline removal (BLR). We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts per million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new technique of Custom BLR produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and varying analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus using feature selection to select only those channels that contribute to best prediction accuracy for multivariate analyses. Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. The extra steps needed to optimize baseline removal for each predicted variable and empower multivariate techniques with the best possible input data for optimal prediction accuracy are shown to be well worth the slight increase in necessary computations and complexity.
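As one example of the baseline-removal families compared above, the sketch below implements the widely used asymmetric least squares (ALS) smoother on a synthetic spectrum; the parameter values (`lam`, `p`) are illustrative defaults rather than the settings used in the study.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares (ALS) baseline estimate of a spectrum y.
    lam controls smoothness; p << 0.5 pushes the baseline below the peaks."""
    L = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(L, L - 2))
    w = np.ones(L)
    z = y.copy()
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, L, L)
        z = spsolve(W + lam * D @ D.T, w * y)   # weighted, penalized smoother
        w = p * (y > z) + (1.0 - p) * (y < z)   # asymmetric reweighting
    return z

# Hypothetical LIBS-like spectrum: two emission lines on a curved continuum
x = np.linspace(0.0, 1.0, 500)
spectrum = 2.0 * x**2 + np.exp(-(x - 0.3)**2 / 1e-4) + np.exp(-(x - 0.7)**2 / 2e-4)
corrected = spectrum - als_baseline(spectrum)
```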
Evaluation and attribution of OCO-2 XCO2 uncertainties
NASA Astrophysics Data System (ADS)
Worden, John R.; Doran, Gary; Kulawik, Susan; Eldering, Annmarie; Crisp, David; Frankenberg, Christian; O'Dell, Chris; Bowman, Kevin
2017-07-01
Evaluating and attributing uncertainties in total column atmospheric CO2 measurements (XCO2) from the OCO-2 instrument is critical for testing hypotheses related to the underlying processes controlling XCO2 and for developing quality flags needed to choose those measurements that are usable for carbon cycle science. Here we test the reported uncertainties of version 7 OCO-2 XCO2 measurements by examining variations of the XCO2 measurements and their calculated uncertainties within small regions (˜ 100 km × 10.5 km) in which natural CO2 variability is expected to be small relative to variations imparted by noise or interferences. Over 39 000 of these small neighborhoods, comprising approximately 190 observations each, are used for this analysis. We find that a typical ocean measurement has a precision and accuracy of 0.35 and 0.24 ppm, respectively, for calculated precisions larger than ˜ 0.25 ppm. These values are approximately consistent with the calculated errors of 0.33 and 0.14 ppm for the noise and interference error, assuming that the accuracy is bounded by the calculated interference error. The actual precision for ocean data becomes worse as the signal-to-noise increases or the calculated precision decreases below 0.25 ppm, for reasons that are not well understood. A typical land measurement, both nadir and glint, is found to have a precision and accuracy of approximately 0.75 and 0.65 ppm, respectively, as compared to the calculated precision and accuracy of approximately 0.36 and 0.2 ppm. The differences in accuracy between ocean and land suggest that the accuracy of XCO2 data is likely related to interferences such as aerosols or surface albedo, as they vary less over ocean than land. The accuracy as derived here is also likely a lower bound, as it does not account for possible systematic biases between the regions used in this analysis.
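A simplified sketch of the small-neighborhood analysis is given below: soundings are grouped by neighborhood, the within-neighborhood spread serves as a precision estimate, and the spread of the neighborhood means gives a rough bound related to accuracy. The data are simulated placeholders, and the estimator is far cruder than the paper's full error attribution.

```python
import numpy as np
import pandas as pd

# Hypothetical soundings: XCO2 (ppm) tagged with a small-area neighborhood id.
rng = np.random.default_rng(3)
n_neigh, n_per = 200, 190
df = pd.DataFrame({
    "neighborhood": np.repeat(np.arange(n_neigh), n_per),
    "xco2": np.concatenate([
        400.0 + rng.normal(0, 0.35, n_per) + rng.normal(0, 0.24)  # noise + per-region bias
        for _ in range(n_neigh)
    ]),
})

# Within each neighborhood the true XCO2 is assumed nearly constant, so the
# within-neighborhood spread estimates single-sounding precision, and the
# spread of neighborhood means loosely bounds the accuracy term.
precision = df.groupby("neighborhood")["xco2"].std().median()
mean_spread = df.groupby("neighborhood")["xco2"].mean().std()
print(f"median single-sounding precision ~ {precision:.2f} ppm")
print(f"spread of neighborhood means ~ {mean_spread:.2f} ppm")
```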
Algorithms and Object-Oriented Software for Distributed Physics-Based Modeling
NASA Technical Reports Server (NTRS)
Kenton, Marc A.
2001-01-01
The project seeks to develop methods to more efficiently simulate aerospace vehicles. The goals are to reduce model development time, increase accuracy (e.g., by allowing the integration of multidisciplinary models), facilitate collaboration by geographically distributed groups of engineers, support uncertainty analysis and optimization, reduce hardware costs, and increase execution speeds. These problems are the subject of considerable contemporary research (e.g., Biedron et al. 1999; Heath and Dick, 2000).
Elly E. Holcombe; Duane G. Moore; Richard L. Fredriksen
1986-01-01
A modification of the macro-Kjeldahl method that provides increased sensitivity was developed for determining very low levels of nitrogen in forest streams and in rain-water. The method is suitable as a routine laboratory procedure. The analytical range of the method is 0.02 to 1.5 mg/L with high recovery and excellent precision and accuracy. The range can be increased to...
Chemically intuited, large-scale screening of MOFs by machine learning techniques
NASA Astrophysics Data System (ADS)
Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.
2017-10-01
A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising to apply not only to gas storage in MOFs but in many other material science projects.
Lifting degeneracy in holographic characterization of colloidal particles using multi-color imaging.
Ruffner, David B; Cheong, Fook Chiong; Blusewicz, Jaroslaw M; Philips, Laura A
2018-05-14
Micrometer-sized particles can be accurately characterized using holographic video microscopy and Lorenz-Mie fitting. In this work, we explore some of the limitations in holographic microscopy and introduce methods for increasing the accuracy of this technique with the use of multiple wavelengths of laser illumination. Holograms of large, high-index particles have near-degenerate solutions that can confuse standard fitting algorithms. Using a model based on diffraction from a phase disk, we explain the source of these degeneracies. We introduce multiple-color holography as an effective approach to distinguish between degenerate solutions and provide improved accuracy for the holographic analysis of sub-visible colloidal particles.
NASA Technical Reports Server (NTRS)
Carpenter, Paul
2003-01-01
Electron-probe microanalysis standards and issues related to measurement and accuracy of microanalysis will be discussed. Critical evaluation of standards based on homogeneity and comparison with wet-chemical analysis will be made. Measurement problems such as spectrometer dead-time will be discussed. Analytical accuracy issues will be evaluated for systems by alpha-factor analysis and comparison with experimental k-ratio databases.
NASA Astrophysics Data System (ADS)
Lee, Jae-Seung; Im, In-Chul; Kang, Su-Man; Goo, Eun-Hoe; Baek, Seong-Min
2013-11-01
The purpose of this study is to present a new method of quality assurance (QA) in order to ensure effective evaluation of the accuracy of respiratory-gated radiotherapy (RGR). This would help in quantitatively analyzing the patient's respiratory cycle and respiration-induced tumor motion and in performing a subsequent comparative analysis of dose distributions, using the gamma-index method, as reproduced in our in-house developed respiration-simulating phantom. Therefore, we designed a respiration-simulating phantom capable of reproducing the patient's respiratory cycle and respiration-induced tumor motion and evaluated the accuracy of RGR by estimating its pass rates. We applied the gamma index passing criteria of accepted error ranges of 3% and 3 mm for the dose distribution calculated by using the treatment planning system (TPS) and the actual dose distribution of RGR. The pass rate clearly increased inversely to the gating width chosen. When respiration-induced tumor motion was 12 mm or less, pass rates of 85% and above were achieved for the 30-70% respiratory phase, and pass rates of 90% and above were achieved for the 40-60% respiratory phase. However, a respiratory cycle with a very small fluctuation range of pass rates failed to prove reliable in evaluating the accuracy of RGR. Therefore, accurate and reliable outcomes of radiotherapy will be obtainable only by establishing a novel QA system using the respiration-simulating phantom, the gamma-index analysis, and a quantitative analysis of diaphragmatic motion, enabling an indirect measurement of tumor motion.
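The 3%/3 mm gamma-index comparison applied above can be sketched with a brute-force NumPy implementation on a 2-D dose plane, as below. The dose distributions are synthetic Gaussians standing in for the TPS-calculated and delivered doses; a clinical gamma tool would additionally use interpolation and low-dose thresholds.

```python
import numpy as np

def gamma_index(ref_dose, eval_dose, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Brute-force global gamma index on a 2-D dose grid (same grid for both
    distributions). dose_tol is a fraction of the reference maximum dose."""
    ny, nx = ref_dose.shape
    yy, xx = np.meshgrid(np.arange(ny) * spacing_mm,
                         np.arange(nx) * spacing_mm, indexing="ij")
    dd = dose_tol * ref_dose.max()
    gamma = np.empty_like(ref_dose, dtype=float)
    for i in range(ny):
        for j in range(nx):
            dist2 = (yy - yy[i, j])**2 + (xx - xx[i, j])**2
            dose2 = (eval_dose - ref_dose[i, j])**2
            gamma[i, j] = np.sqrt(np.min(dist2 / dta_mm**2 + dose2 / dd**2))
    return gamma

# Hypothetical planned vs delivered dose planes (1 mm grid, ~1 mm shift)
y, x = np.mgrid[0:40, 0:40]
planned = np.exp(-((x - 20)**2 + (y - 20)**2) / 60.0)
delivered = np.exp(-((x - 21)**2 + (y - 20)**2) / 60.0)
g = gamma_index(planned, delivered, spacing_mm=1.0)
print(f"pass rate (gamma <= 1): {100.0 * np.mean(g <= 1.0):.1f}%")
```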
NASA Astrophysics Data System (ADS)
Zomlot, Z.; Verbeiren, B.; Huysmans, M.; Batelaan, O.
2017-11-01
Land use/land cover (LULC) change is a consequence of human-induced global environmental change. It is also considered one of the major factors affecting groundwater recharge. Uncertainties and inconsistencies in LULC maps are among the difficulties that LULC time-series analyses face, and they have a significant effect on hydrological impact analysis. Therefore, an accuracy assessment approach for LULC time series is needed for more reliable hydrological analysis and prediction. The objective of this paper is to assess the impact of land use uncertainty and to improve the accuracy of a time series of CORINE (coordination of information on the environment) land cover maps by using a new approach of identifying spatial-temporal LULC change trajectories as a pre-processing tool. This ensures consistency of model input when dealing with land-use dynamics and as such improves the accuracy of land use maps and consequently groundwater recharge estimation. As a case study, the impact of consistent land use changes from 1990 until 2013 on groundwater recharge for the Flanders-Brussels region is assessed. The change trajectory analysis successfully assigned a rational trajectory to 99% of all pixels. The methodology is shown to be powerful in correcting interpretation inconsistencies and overestimation errors in CORINE land cover maps. The overall kappa (cell-by-cell map comparison) improved from 0.6 to 0.8 and from 0.2 to 0.7 for the forest and pasture land use classes, respectively. The study shows that the inconsistencies in the land use maps introduce uncertainty in groundwater recharge estimation in a range of 10-30%. The analysis showed that during the period of 1990-2013 the LULC changes were mainly driven by urban expansion. The results show that the resolution at which the spatial analysis is performed is important; the recharge differences using original and corrected CORINE land cover maps increase considerably with increasing spatial resolution. This study indicates that improving the consistency of land use map time series is of critical importance for assessing land use change and its environmental impact.
Chithrabhanu, Savithri Moothiringode
2017-01-01
Introduction Fine Needle Aspiration Cytology (FNAC) has a leading role in the assessment of breast lesions. Masood’s Scoring Index (MSI) and its modification (Modified Masood’s scoring index; MMSI) have been proposed to aid in sub-grouping breast lesions and to help in subsequent management. Aim To assess and compare the diagnostic accuracy of MSI and MMSI by subsequent correlation with histopathology. Materials and Methods The study was cross-sectional in nature and was conducted in a tertiary care setting. The study included 207 cases presenting as a palpable breast lump, which had undergone FNAC and subsequent excision biopsy for histopathology. Statistical Analysis The cases were grouped into four categories as suggested by Masood et al. (MSI) and Nandini et al. (MMSI), and concordance analysis with reference to histopathological diagnosis was done. Results In comparison to MSI, MMSI showed better concordance with histopathological diagnosis and superior diagnostic accuracy in the non-proliferative breast disease category (p-value = 0.046) as well as in the proliferative breast disease without atypia category. The overall diagnostic accuracy of the cytological scoring was 97.5%, with 94.5% sensitivity and 100% specificity. Conclusion Though both MSI and MMSI were found effective in subcategorizing breast lesions, MMSI was found to have better concordance with histopathology. Inclusion of cellular pattern and background material may further help in increasing the accuracy. PMID:28571141
Accurate label-free 3-part leukocyte recognition with single cell lens-free imaging flow cytometry.
Li, Yuqian; Cornelis, Bruno; Dusa, Alexandra; Vanmeerbeeck, Geert; Vercruysse, Dries; Sohn, Erik; Blaszkiewicz, Kamil; Prodanov, Dimiter; Schelkens, Peter; Lagae, Liesbet
2018-05-01
Three-part white blood cell differentials, which are key to routine blood workups, are typically performed in centralized laboratories on conventional hematology analyzers operated by highly trained staff. With the trend toward miniaturized blood analysis tools for point-of-need testing, intended to accelerate turnaround times and move routine blood testing away from centralized facilities, our group has developed a highly miniaturized holographic imaging system for generating lens-free images of white blood cells in suspension. Analysis and classification of its output data constitute the final crucial step in ensuring appropriate accuracy of the system. In this work, we use reference holographic images of single white blood cells in suspension in order to establish an accurate ground truth and increase classification accuracy. We also automate the entire workflow for analyzing the output and demonstrate clear improvement in the accuracy of the 3-part classification. High-dimensional optical and morphological features are extracted from reconstructed digital holograms of single cells using the ground-truth images, and advanced machine learning algorithms are investigated and implemented to obtain 99% classification accuracy. Representative features of the three white blood cell subtypes are selected and give comparable results, with a focus on rapid cell recognition and decreased computational cost.
NASA Astrophysics Data System (ADS)
Nascetti, A.; Di Rita, M.; Ravanelli, R.; Amicuzi, M.; Esposito, S.; Crespi, M.
2017-05-01
The high-performance cloud-computing platform Google Earth Engine has been developed for global-scale analysis based on Earth observation data. In particular, in this work, the geometric accuracy of the two most widely used nearly-global free DSMs (SRTM and ASTER) has been evaluated on the territories of four American States (Colorado, Michigan, Nevada, Utah) and one Italian Region (Trentino Alto-Adige, Northern Italy), exploiting the potential of this platform. These are large areas characterized by different terrain morphologies, land covers and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The DSMs' accuracy has been evaluated through the computation of standard statistical parameters, both at the global scale (considering the whole State/Region) and as a function of the terrain morphology using several slope classes. The geometric accuracy, in terms of standard deviation and NMAD, ranges for SRTM from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER the values range from 5-6 to 30 meters. In general, the performed analysis shows better accuracy for SRTM in the flat areas, whereas the ASTER GDEM is more reliable in the steep areas, where the slopes increase. These preliminary results highlight the potential of GEE to perform DSM assessment on a global scale.
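The "standard statistical parameters" used in such DSM assessments typically include the standard deviation and the normalized median absolute deviation (NMAD) of the elevation differences; a small sketch with hypothetical elevation tiles is shown below. This is a generic illustration, not the Google Earth Engine workflow of the study.

```python
import numpy as np

def vertical_accuracy(dsm, reference):
    """Standard deviation and NMAD of the elevation differences between a
    global DSM and a reference DSM sampled on the same grid (NaNs ignored)."""
    dh = (dsm - reference).ravel()
    dh = dh[np.isfinite(dh)]
    std = dh.std()
    nmad = 1.4826 * np.median(np.abs(dh - np.median(dh)))  # robust spread estimate
    return std, nmad

# Hypothetical 100 x 100 elevation tiles (metres)
rng = np.random.default_rng(4)
reference = rng.uniform(200, 1500, size=(100, 100))
srtm_like = reference + rng.normal(0, 4, size=(100, 100)) + rng.standard_t(3, size=(100, 100))
print(vertical_accuracy(srtm_like, reference))
```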
NASA Technical Reports Server (NTRS)
Ansar, Adnan; Cheng, Yang
2005-01-01
A pinpoint landing capability will be a critical component for many planned NASA missions to Mars and beyond. Implicit in the requirement is the ability to accurately localize the spacecraft with respect to the terrain during descent. In this paper, we present evidence that a vision-based solution using craters as landmarks is both practical and able to meet the requirements of next-generation missions. Our emphasis in this paper is on the feasibility of such a system in terms of (a) localization accuracy and (b) applicability to Martian terrain. We show that an accuracy of well under 100 meters can be expected under suitable conditions. We also present a sensitivity analysis that makes an explicit connection between input data and the robustness of our pose estimate. In addition, we present an analysis of the susceptibility of our technique to inherently ambiguous configurations of craters. We show that the probability of failure due to such ambiguity becomes increasingly small.
NASA Astrophysics Data System (ADS)
Bangs, Corey F.; Kruse, Fred A.; Olsen, Chris R.
2013-05-01
Hyperspectral data were assessed to determine the effect of integrating spectral data and extracted texture feature data on classification accuracy. Four separate spectral ranges (hundreds of spectral bands total) were used from the Visible and Near Infrared (VNIR) and Shortwave Infrared (SWIR) portions of the electromagnetic spectrum. Haralick texture features (contrast, entropy, and correlation) were extracted from the average gray-level image for each of the four spectral ranges studied. A maximum likelihood classifier was trained using a set of ground truth regions of interest (ROIs) and applied separately to the spectral data, texture data, and a fused dataset containing both. Classification accuracy was measured by comparison of results to a separate verification set of test ROIs. Analysis indicates that the spectral range (source of the gray-level image) used to extract the texture feature data has a significant effect on the classification accuracy. This result applies to texture-only classifications as well as the classification of integrated spectral data and texture feature data sets. Overall classification improvement for the integrated data sets was near 1%. Individual improvement for integrated spectral and texture classification of the "Urban" class showed an approximately 9% accuracy increase over spectral-only classification. Texture-only classification accuracy was highest for the "Dirt Path" class at approximately 92% for the spectral range from 947 to 1343 nm. This research demonstrates the effectiveness of texture feature data for more accurate analysis of hyperspectral data and the importance of selecting the correct spectral range to be used for the gray-level image source to extract these features.
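The Haralick features named above (contrast, correlation, entropy) are derived from a grey-level co-occurrence matrix; the sketch below computes them for an average grey-level image with scikit-image. The quantisation, distances, and angles are illustrative choices, not the study's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(gray, levels=32, distance=1):
    """Contrast, correlation, and entropy from a grey-level co-occurrence
    matrix of an 8-bit band-average image, quantised to `levels` grey levels."""
    q = (gray.astype(float) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[distance], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").mean()
    correlation = graycoprops(glcm, "correlation").mean()
    # Entropy computed manually per distance/angle slice, then averaged
    p = glcm.astype(float)
    ent_terms = np.where(p > 0, -p * np.log2(np.where(p > 0, p, 1.0)), 0.0)
    entropy = ent_terms.sum(axis=(0, 1)).mean()
    return contrast, correlation, entropy

# Hypothetical grey-level image from the average of one spectral range
rng = np.random.default_rng(5)
band_average = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(haralick_features(band_average))
```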
NASA Technical Reports Server (NTRS)
Smith, D. R.; Leslie, F. W.
1984-01-01
The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a successive correction type scheme for the analysis of surface meteorological data. The scheme is subjected to a series of experiments to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple pass technique increases the accuracy of the analysis. Furthermore, the tests suggest appropriate values for the analysis parameters in resolving disturbances for the data set used in this investigation.
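PROAM belongs to the family of successive correction schemes; a generic Cressman-type sketch of that family is given below, in which several passes with a shrinking influence radius blend station increments into a first-guess grid. The radii, station data, and nearest-grid-point interpolation are hypothetical simplifications, not the PROAM formulation itself.

```python
import numpy as np

def successive_correction(grid_x, grid_y, obs_x, obs_y, obs_val,
                          first_guess, radii=(500.0, 250.0, 125.0)):
    """Multiple-pass (Cressman-type) successive correction of a first-guess
    field using scattered station observations; later passes use smaller
    influence radii and so restore finer-scale detail."""
    analysis = first_guess.copy()
    for R in radii:
        # Observation increments: observed minus analysis at the nearest grid point
        stn_analysis = np.array([
            analysis[np.argmin(np.abs(grid_y - y)), np.argmin(np.abs(grid_x - x))]
            for x, y in zip(obs_x, obs_y)
        ])
        increments = obs_val - stn_analysis
        for j, gy in enumerate(grid_y):
            for i, gx in enumerate(grid_x):
                r2 = (obs_x - gx)**2 + (obs_y - gy)**2
                w = np.where(r2 < R**2, (R**2 - r2) / (R**2 + r2), 0.0)
                if w.sum() > 0:
                    analysis[j, i] += np.sum(w * increments) / w.sum()
    return analysis

# Hypothetical grid axes (km), flat first guess, and 20 surface stations
rng = np.random.default_rng(6)
gx, gy = np.linspace(0, 1000, 41), np.linspace(0, 1000, 41)
sx, sy = rng.uniform(0, 1000, 20), rng.uniform(0, 1000, 20)
obs = 15.0 + 0.01 * sx + rng.normal(0, 0.2, 20)
field = successive_correction(gx, gy, sx, sy, obs, np.full((41, 41), 15.0))
```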
Using Meta-Analysis to Inform the Design of Subsequent Studies of Diagnostic Test Accuracy
ERIC Educational Resources Information Center
Hinchliffe, Sally R.; Crowther, Michael J.; Phillips, Robert S.; Sutton, Alex J.
2013-01-01
An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial…
Comparing Pixel- and Object-Based Approaches in Effectively Classifying Wetland-Dominated Landscapes
Berhane, Tedros M.; Lane, Charles R.; Wu, Qiusheng; Anenkhonov, Oleg A.; Chepinoga, Victor V.; Autrey, Bradley C.; Liu, Hongxing
2018-01-01
Wetland ecosystems straddle both terrestrial and aquatic habitats, performing many ecological functions directly and indirectly benefitting humans. However, global wetland losses are substantial. Satellite remote sensing and classification informs wise wetland management and monitoring. Both pixel- and object-based classification approaches using parametric and non-parametric algorithms may be effectively used in describing wetland structure and habitat, but which approach should one select? We conducted both pixel- and object-based image analyses (OBIA) using parametric (Iterative Self-Organizing Data Analysis Technique, ISODATA, and maximum likelihood, ML) and non-parametric (random forest, RF) approaches in the Barguzin Valley, a large wetland (~500 km2) in the Lake Baikal, Russia, drainage basin. Four Quickbird multispectral bands plus various spatial and spectral metrics (e.g., texture, Non-Differentiated Vegetation Index, slope, aspect, etc.) were analyzed using field-based regions of interest sampled to characterize an initial 18 ISODATA-based classes. Parsimoniously using a three-layer stack (Quickbird band 3, water ratio index (WRI), and mean texture) in the analyses resulted in the highest accuracy, 87.9% with pixel-based RF, followed by OBIA RF (segmentation scale 5, 84.6% overall accuracy), followed by pixel-based ML (83.9% overall accuracy). Increasing the predictors from three to five by adding Quickbird bands 2 and 4 decreased the pixel-based overall accuracy while increasing the OBIA RF accuracy to 90.4%. However, McNemar’s chi-square test confirmed no statistically significant difference in overall accuracy among the classifiers (pixel-based ML, RF, or object-based RF) for either the three- or five-layer analyses. Although potentially useful in some circumstances, the OBIA approach requires substantial resources and user input (such as segmentation scale selection—which was found to substantially affect overall accuracy). Hence, we conclude that pixel-based RF approaches are likely satisfactory for classifying wetland-dominated landscapes. PMID:29707381
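McNemar's chi-square test used above to compare classifiers on the same validation pixels has a simple closed form; a sketch with simulated predictions follows. The data are placeholders, and the continuity correction is an assumption of this example rather than a detail taken from the study.

```python
import numpy as np
from scipy.stats import chi2

def mcnemar(correct_a, correct_b, continuity=True):
    """McNemar's chi-square test for two classifiers evaluated on the same
    validation samples; inputs are boolean arrays of per-sample correctness."""
    correct_a, correct_b = np.asarray(correct_a), np.asarray(correct_b)
    b = np.sum(correct_a & ~correct_b)   # A right, B wrong
    c = np.sum(~correct_a & correct_b)   # A wrong, B right
    stat = (abs(b - c) - (1 if continuity else 0))**2 / (b + c)
    return stat, chi2.sf(stat, df=1)

# Hypothetical agreement of two classifiers with the reference labels
rng = np.random.default_rng(7)
ref = rng.integers(0, 6, 1500)
pred_rf = np.where(rng.random(1500) < 0.88, ref, (ref + 1) % 6)
pred_ml = np.where(rng.random(1500) < 0.84, ref, (ref + 1) % 6)
stat, p = mcnemar(pred_rf == ref, pred_ml == ref)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")
```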
Relationship between resolution and accuracy of four intraoral scanners in complete-arch impressions
Pascual-Moscardó, Agustín; Camps, Isabel
2018-01-01
Background The scanner does not measure the dental surface continuously. Instead, it generates a point cloud, and these points are then joined to form the scanned object. This approximation will depend on the number of points generated (resolution), which can lead to low accuracy (trueness and precision) when fewer points are obtained. The purpose of this study is to determine the resolution of four intraoral digital imaging systems and to demonstrate the relationship between accuracy and resolution of the intraoral scanner in impressions of a complete dental arch. Material and Methods A master cast of the complete maxillary arch was prepared with different dental preparations. Using four digital impression systems, the cast was scanned inside of a black methacrylate box, obtaining a total of 40 digital impressions from each scanner. The resolution was obtained by dividing the number of points of each digital impression by the total surface area of the cast. Accuracy was evaluated using three-dimensional measurement software, using the “best alignment” method of the casts with a highly faithful reference model obtained from an industrial scanner. Pearson correlation was used for statistical analysis of the data. Results Of the intraoral scanners, Omnicam is the system with the best resolution, with 79.82 points per mm2, followed by True Definition with 54.68 points per mm2, Trios with 41.21 points per mm2, and iTero with 34.20 points per mm2. However, the study found no relationship between resolution and accuracy of the studied digital impression systems (P > 0.05), except for Omnicam and its precision. Conclusions The resolution of the digital impression systems has no relationship with the accuracy they achieve in the impression of a complete dental arch. The study found that the Omnicam scanner is the system that obtains the best resolution, and that as its resolution increases, its precision increases. Key words: Trueness, precision, accuracy, resolution, intraoral scanner, digital impression. PMID:29750097
Salem, Rany M; Wessel, Jennifer; Schork, Nicholas J
2005-03-01
Interest in the assignment and frequency analysis of haplotypes in samples of unrelated individuals has increased immeasurably as a result of the emphasis placed on haplotype analyses by, for example, the International HapMap Project and related initiatives. Although there are many available computer programs for haplotype analysis applicable to samples of unrelated individuals, many of these programs have limitations and/or very specific uses. In this paper, the key features of available haplotype analysis software for use with unrelated individuals, as well as pooled DNA samples from unrelated individuals, are summarised. Programs for haplotype analysis were identified through keyword searches on PUBMED and various internet search engines, a review of citations from retrieved papers and personal communications, up to June 2004. Priority was given to functioning computer programs, rather than theoretical models and methods. The available software was considered in light of a number of factors: the algorithm(s) used, algorithm accuracy, assumptions, the accommodation of genotyping error, implementation of hypothesis testing, handling of missing data, software characteristics and web-based implementations. Review papers comparing specific methods and programs are also summarised. Forty-six haplotyping programs were identified and reviewed. The programs were divided into two groups: those designed for individual genotype data (a total of 43 programs) and those designed for use with pooled DNA samples (a total of three programs). The accuracy of the programs is assessed using various criteria, and the programs are categorised and discussed in light of: algorithm and method, accuracy, assumptions, genotyping error, hypothesis testing, missing data, software characteristics and web implementation. Many available programs have limitations (e.g., some cannot accommodate missing data) and/or are designed with specific tasks in mind (e.g., estimating haplotype frequencies rather than assigning most likely haplotypes to individuals). It is concluded that the selection of an appropriate haplotyping program for analysis purposes should be guided by what is known about the accuracy of estimation, as well as by the limitations and assumptions built into a program.
Endogenous synchronous fluorescence spectroscopy (SFS) of basal cell carcinoma-initial study
NASA Astrophysics Data System (ADS)
Borisova, E.; Zhelyazkova, Al.; Keremedchiev, M.; Penkov, N.; Semyachkina-Glushkovskaya, O.; Avramov, L.
2016-01-01
Human skin is a complex, multilayered and inhomogeneous organ with spatially varying optical properties. Analysis of cutaneous fluorescence spectra can be a very complicated task; therefore researchers apply complex mathematical tools for data evaluation, or try to find specific approaches that would simplify the spectral analysis. Synchronous fluorescence spectroscopy (SFS) improves the spectral resolution, which could be useful for characterizing biological tissue fluorescence and could increase the diagnostic accuracy of tumour detection.
Evaluating the decision accuracy and speed of clinical data visualizations.
Pieczkiewicz, David S; Finkelstein, Stanley M
2010-01-01
Clinicians face an increasing volume of biomedical data. Assessing the efficacy of systems that enable accurate and timely clinical decision making merits corresponding attention. This paper discusses the multiple-reader multiple-case (MRMC) experimental design and linear mixed models as means of assessing and comparing decision accuracy and latency (time) for decision tasks in which clinician readers must interpret visual displays of data. These experimental and statistical techniques, used extensively in radiology imaging studies, offer a number of practical and analytic advantages over more traditional quantitative methods such as percent-correct measurements and ANOVAs, and are recommended for their statistical efficiency and generalizability. An example analysis using readily available, free, and commercial statistical software is provided as an appendix. While these techniques are not appropriate for all evaluation questions, they can provide a valuable addition to the evaluative toolkit of medical informatics research.
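A minimal version of the linear-mixed-model analysis described above can be written with statsmodels: a fixed effect for display type and a random intercept per reader, which is a simplification of a full MRMC model that would also treat cases as a random factor. The data below are simulated placeholders, not the paper's appendix example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical MRMC-style data: every reader interprets every case under
# two display conditions; latency is the decision time in seconds.
rng = np.random.default_rng(8)
readers, cases = 8, 30
rows = []
for r in range(readers):
    reader_effect = rng.normal(0, 0.5)          # reader-specific baseline speed
    for c in range(cases):
        for disp in ("table", "graphic"):
            latency = (10.0 - (1.5 if disp == "graphic" else 0.0)
                       + reader_effect + rng.normal(0, 1.0))
            rows.append({"reader": r, "case": c, "display": disp, "latency": latency})
df = pd.DataFrame(rows)

# Linear mixed model: fixed effect of display, random intercept per reader.
model = smf.mixedlm("latency ~ display", df, groups=df["reader"]).fit()
print(model.summary())
```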
Task experience and children’s working memory performance: A perspective from recall timing
Towse, John N.; Cowan, Nelson; Horton, Neil J.; Whytock, Shealagh
2008-01-01
Working memory is an important theoretical construct among children, and measures of its capacity predict a range of cognitive skills and abilities. Data from 9- and 11-year-old children illustrate how a chronometric analysis of recall can complement and elaborate recall accuracy in advancing our understanding of working memory. A reading span task was completed by 130 children, 75 of whom were tested on two occasions, with sequence length either increasing or decreasing during test administration. Substantial pauses occur during participants’ recall sequences, and they represent consistent performance traits over time whilst also varying with recall circumstances and task history. Recall pauses help to predict reading and number skills, alongside as well as separately from levels of recall accuracy. The task demands of working memory change as a function of task experience, with a combination of accuracy and response timing in novel task situations being the strongest predictor of cognitive attainment. PMID:18473637
Automated computation of autonomous spectral submanifolds for nonlinear modal analysis
NASA Astrophysics Data System (ADS)
Ponsioen, Sten; Pedergnana, Tiemo; Haller, George
2018-04-01
We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.
Reducing trial length in force platform posturographic sleep deprivation measurements
NASA Astrophysics Data System (ADS)
Forsman, P.; Hæggström, E.; Wallin, A.
2007-09-01
Sleepiness correlates with sleep-related accidents, but convenient tests for sleepiness monitoring are scarce. The posturographic test is a method to assess balance, and this paper describes one phase of the development of a posturographic sleepiness monitoring method. We investigated the relationship between trial length and the accuracy of the posturographic time-awake (TA) estimate. Twenty-one healthy adults were kept awake for 32 h and their balance was recorded, 16 times with 30 s trials, as a function of TA. The balance was analysed with regard to fractal dimension, most common sway amplitude and the time interval for open-loop stance control. While a 30 s trial allows the TA of individual subjects to be estimated with better than 5 h accuracy, repeating the analysis using shorter trial lengths showed that 18 s sufficed to achieve the targeted 5 h accuracy. Moreover, it was found that with increasing TA, the posturographic parameters estimated the subjects' TA more accurately.
The counting of native blood cells by digital microscopy
NASA Astrophysics Data System (ADS)
Torbin, S. O.; Doubrovski, V. A.; Zabenkov, I. V.; Tsareva, O. E.
2017-03-01
An algorithm for processing photographic images of blood samples in their native state was developed to determine the concentrations of erythrocytes, leukocytes and platelets without separate preparation of individual cell samples. Special "photo templates" were proposed for identifying red blood cells. The "highlighting" effect of leukocytes, discovered by the authors, was used to increase the accuracy of counting this cell type. Finally, to better distinguish platelets from leukocytes, the areas of their photographic images were used rather than their sizes. It is shown that the accuracy of cell counting for native blood samples can be comparable with the accuracy of similar studies on smears. At the same time, the proposed native blood analysis greatly simplifies the sample preparation procedure compared to smears and makes it possible to move from determining blood cell ratios to determining their concentrations in the sample.
Schoemans, H; Goris, K; Durm, R V; Vanhoof, J; Wolff, D; Greinix, H; Pavletic, S; Lee, S J; Maertens, J; Geest, S D; Dobbels, F; Duarte, R F
2016-08-01
The EBMT Complications and Quality of Life Working Party has developed a computer-based algorithm, the 'eGVHD App', using a user-centered design process. Accuracy was tested using a quasi-experimental crossover design with four expert-reviewed case vignettes in a convenience sample of 28 clinical professionals. Perceived usefulness was evaluated by the technology acceptance model (TAM) and user satisfaction by the Post-Study System Usability Questionnaire (PSSUQ). User experience was positive, with a median of 6 TAM points (interquartile range: 1) and favourable median total and subscale PSSUQ scores. The initial standard practice assessment of the vignettes yielded 65% correct results for diagnosis and 45% for scoring. The 'eGVHD App' significantly increased diagnostic and scoring accuracy to 93% (+28%) and 88% (+43%), respectively (both P<0.05). The same trend was observed in the repeated analysis of case 2: accuracy improved by using the App (+31% for diagnosis and +39% for scoring), whereas performance tended to decrease once the App was taken away. The 'eGVHD App' could dramatically improve the quality of care and research as it increased the performance of the whole user group by about 30% at the first assessment and showed a trend for improvement of individual performance on repeated case evaluation.
Dobolyi, David G; Dodson, Chad S
2013-12-01
Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format.
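Signal-detection quantities of the kind used above can be computed directly from hit and false-alarm rates, and a confidence-based ROC is traced by cumulating responses from high to low confidence; the sketch below uses invented counts purely for illustration and does not reproduce the study's data.

```python
import numpy as np
from scipy.stats import norm

def dprime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity (d') from hit and false-alarm rates."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical lineup outcomes: correct identifications from target-present
# lineups and suspect misidentifications from target-absent lineups.
print(f"d' simultaneous = {dprime(0.62, 0.12):.2f}")
print(f"d' sequential   = {dprime(0.48, 0.07):.2f}")

# Confidence-based ROC points: cumulate responses from the highest
# confidence level downward to pair hit rates with false-alarm rates.
hits_by_conf = np.array([30, 25, 20, 15])   # hits at confidence 4..1 (150 lineups)
fas_by_conf = np.array([4, 8, 14, 22])      # false alarms at confidence 4..1 (150 lineups)
roc_hits = np.cumsum(hits_by_conf) / 150
roc_fas = np.cumsum(fas_by_conf) / 150
print(np.column_stack([roc_fas, roc_hits]))
```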
Monte Carlo treatment of resonance-radiation imprisonment in fluorescent lamps—revisited
NASA Astrophysics Data System (ADS)
Anderson, James B.
2016-12-01
We reported in 1985 a Monte Carlo treatment of the imprisonment of the 253.7 nm resonance radiation from mercury in the mercury-argon discharge of fluorescent lamps. The calculated spectra of the emitted radiation were found to be in good agreement with measured spectra. The addition of the isotope mercury-196 to natural mercury was found, also in agreement with experiments, to increase lamp efficiency. In this paper we report the extension of the earlier work with increased accuracy, analysis of photon exit-time distributions, recycling of energy released in quenching, analysis of dynamic similarity for different lamp sizes, variation of Mrozowski transfer rates, prediction and analysis of the hyperfine ultraviolet spectra, and optimization of tailored mercury isotope mixtures for increased lamp efficiency. The spectra were found to be insensitive to the extent of quenching and recycling. The optimized mixtures were found to increase efficiencies by as much as 5% for several lamp configurations. Optimization without increasing the mercury-196 fraction was found to increase efficiencies by nearly 1% for several configurations.
Ralston, Barbara E.; Davis, Philip A.; Weber, Robert M.; Rundall, Jill M.
2008-01-01
A vegetation database of the riparian vegetation located within the Colorado River ecosystem (CRE), a subsection of the Colorado River between Glen Canyon Dam and the western boundary of Grand Canyon National Park, was constructed using four-band image mosaics acquired in May 2002. A digital line scanner was flown over the Colorado River corridor in Arizona by ISTAR Americas, using a Leica ADS-40 digital camera to acquire a digital surface model and four-band image mosaics (blue, green, red, and near-infrared) for vegetation mapping. The primary objective of this mapping project was to develop a digital inventory map of vegetation to enable patch- and landscape-scale change detection, and to establish randomized sampling points for ground surveys of terrestrial fauna (principally, but not exclusively, birds). The vegetation base map was constructed through a combination of ground surveys to identify vegetation classes, image processing, and automated supervised classification procedures. Analysis of the imagery and subsequent supervised classification involved multiple steps to evaluate band quality, band ratios, and vegetation texture and density. Identification of vegetation classes involved collection of cover data throughout the river corridor and subsequent analysis using two-way indicator species analysis (TWINSPAN). Vegetation was classified into six vegetation classes, following the National Vegetation Classification Standard, based on cover dominance. This analysis indicated that the total area covered by all vegetation within the CRE was 3,346 ha. Considering the six vegetation classes, the sparse shrub (SS) class accounted for the greatest amount of vegetation (627 ha), followed by Pluchea (PLSE) and Tamarix (TARA) at 494 and 366 ha, respectively. The wetland (WTLD) and Prosopis-Acacia (PRGL) classes both had similar areal cover values (227 and 213 ha, respectively). Baccharis-Salix (BAXX) was the least represented at 94 ha. Accuracy assessment of the supervised classification determined that accuracies varied among vegetation classes from 90% to 49%. Low accuracies were caused by similar spectral signatures among vegetation classes. Fuzzy accuracy assessment improved classification accuracies such that Federal mapping standards of 80% accuracy for all classes were met. The scale used to quantify vegetation adequately meets the needs of the stakeholder group. Increasing the scale to meet the U.S. Geological Survey (USGS)-National Park Service (NPS) National Mapping Program's minimum mapping unit of 0.5 ha is unwarranted because this scale would reduce the resolution of some classes (e.g., seep willow/coyote willow would likely be combined with tamarisk). While this would undoubtedly improve classification accuracies, it would not provide the community-level information about vegetation change that would benefit stakeholders. The identification of vegetation classes should follow NPS mapping approaches to complement the national effort and should incorporate the alternative analysis for community identification that is being incorporated into newer NPS mapping efforts. National Vegetation Classification is followed in this report for association- to formation-level categories. Accuracies could be improved by including more environmental variables, such as stage elevation, in the classification process and incorporating object-based classification methods.
Another approach that may address the heterogeneous species issue and classification is to use spectral mixing analysis to estimate the fractional cover of species within each pixel and better quantify the cover of individual species that compose a cover class. Varying flights to capture vegetation at different times of the year might also help separate some vegetation classes, though the cost may be prohibitive. Lastly, photointerpretation instead of automated mapping could be tried. Photointerpretation would likely not improve accuracies in this case, however.
NASA Astrophysics Data System (ADS)
Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Homayouni, S.
2016-06-01
Polarimetric Synthetic Aperture Radar (PolSAR) imagery is a complex multi-dimensional dataset, which is an important source of information for various natural resources and environmental classification and monitoring applications. PolSAR imagery produces valuable information by observing scattering mechanisms from different natural and man-made objects. Land cover mapping using PolSAR data classification is one of the most important applications of SAR remote sensing earth observations, which has gained increasing attention in recent years. However, one of the most challenging aspects of classification is selecting features with maximum discrimination capability. To address this challenge, a statistical approach based on Fisher Linear Discriminant Analysis (FLDA) and the incorporation of physical interpretation of PolSAR data into classification is proposed in this paper. After pre-processing of PolSAR data, including speckle reduction, the H/α classification is used to classify the basic scattering mechanisms. Then, a new method for feature weighting, based on the fusion of FLDA and physical interpretation, is implemented. This method proves to increase the classification accuracy as well as the between-class discrimination in the final Wishart classification. The proposed method was applied to a full polarimetric C-band RADARSAT-2 data set from the Avalon area, Newfoundland and Labrador, Canada. This imagery was acquired in June 2015 and covers various types of wetlands, including bogs, fens, marshes and shallow water. The results were compared with the standard Wishart classification, and an improvement of about 20% was achieved in the overall accuracy. This method provides an opportunity for operational wetland classification in northern latitudes with high accuracy using only polarimetric SAR data.
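The feature-weighting step described above is only summarized in the abstract; below is a minimal sketch, assuming per-feature Fisher discriminant ratios (between-class over within-class variance) are used as weights on a pixels-by-features PolSAR feature matrix, with class labels taken from an initial H/α segmentation. The data, feature set, and weighting rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): per-feature Fisher
# discriminant ratios used as feature weights before a final classification.
import numpy as np

def fisher_scores(X, y):
    """Between-class variance over within-class variance for each feature."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))        # pixels x PolSAR features (illustrative)
y = rng.integers(0, 4, size=1000)     # 4 scattering classes from an H/alpha step
weights = fisher_scores(X, y)
X_weighted = X * weights              # weighted features fed to the final classifier
print(np.round(weights, 3))
```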
Slower Perception Followed by Faster Lexical Decision in Longer Words: A Diffusion Model Analysis
Oganian, Yulia; Froehlich, Eva; Schlickeiser, Ulrike; Hofmann, Markus J.; Heekeren, Hauke R.; Jacobs, Arthur M.
2016-01-01
Effects of stimulus length on reaction times (RTs) in the lexical decision task are the topic of extensive research. While slower RTs are consistently found for longer pseudo-words, a finding coined the word length effect (WLE), some studies found no effects for words, and yet others reported faster RTs for longer words. Moreover, the WLE depends on the orthographic transparency of a language, with larger effects in more transparent orthographies. Here we investigate processes underlying the WLE in lexical decision in German-English bilinguals using a diffusion model (DM) analysis, which we compared to a linear regression approach. In the DM analysis, RT-accuracy distributions are characterized using parameters that reflect latent sub-processes, in particular evidence accumulation and decision-independent perceptual encoding, instead of typical parameters such as mean RT and accuracy. The regression approach showed a decrease in RTs with length for pseudo-words, but no length effect for words. However, DM analysis revealed that the null effect for words resulted from opposing effects of length on perceptual encoding and rate of evidence accumulation. Perceptual encoding times increased with length for words and pseudo-words, whereas the rate of evidence accumulation increased with length for real words but decreased for pseudo-words. A comparison between DM parameters in German and English suggested that orthographic transparency affects perceptual encoding, whereas effects of length on evidence accumulation are likely to reflect contextual information and the increase in available perceptual evidence with length. These opposing effects may account for the inconsistent findings on WLEs. PMID:26779075
Cluster signal-to-noise analysis for evaluation of the information content in an image.
Weerawanich, Warangkana; Shimizu, Mayumi; Takeshita, Yohei; Okamura, Kazutoshi; Yoshida, Shoko; Yoshiura, Kazunori
2018-01-01
(1) To develop an observer-free method of analysing image quality related to the observer performance in the detection task and (2) to analyse observer behaviour patterns in the detection of small mass changes in cone-beam CT images. 13 observers detected holes in a Teflon phantom in cone-beam CT images. Using the same images, we developed a new method, cluster signal-to-noise analysis, to detect the holes by applying various cut-off values using ImageJ and reconstructing cluster signal-to-noise curves. We then evaluated the correlation between cluster signal-to-noise analysis and the observer performance test. We measured the background noise in each image to evaluate the relationship with false positive rates (FPRs) of the observers. Correlations between mean FPRs and intra- and interobserver variations were also evaluated. Moreover, we calculated true positive rates (TPRs) and accuracies from background noise and evaluated their correlations with TPRs from observers. Cluster signal-to-noise curves were derived in cluster signal-to-noise analysis. They yield the detection of signals (true holes) related to noise (false holes). This method correlated highly with the observer performance test (R² = 0.9296). In noisy images, increasing background noise resulted in higher FPRs and larger intra- and interobserver variations. TPRs and accuracies calculated from background noise had high correlation with actual TPRs from observers; R² was 0.9244 and 0.9338, respectively. Cluster signal-to-noise analysis can simulate the detection performance of observers and thus replace the observer performance test in the evaluation of image quality. Erroneous decision-making increased with increasing background noise.
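As a rough illustration of the cut-off sweep behind the cluster signal-to-noise curves, the sketch below thresholds a synthetic image at a series of grey-value cut-offs and, at each cut-off, counts connected clusters that overlap known hole positions (signal) versus clusters elsewhere (noise). The phantom, cut-off range, and ImageJ processing of the study are not reproduced; everything here is an assumed stand-in.

```python
# Minimal sketch of a cluster signal-to-noise sweep on a synthetic image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
image = rng.normal(100, 5, size=(256, 256))
true_mask = np.zeros(image.shape, dtype=bool)
true_mask[64:68, 64:68] = True
image[true_mask] -= 8                       # simulated low-density hole

signal, noise = [], []
for cutoff in np.arange(85.0, 100.0, 1.0):  # sweep of grey-value cut-offs
    labels, n = ndimage.label(image < cutoff)
    hit_ids = set(np.unique(labels[true_mask])) - {0}
    signal.append(len(hit_ids))             # clusters overlapping true holes
    noise.append(n - len(hit_ids))          # false clusters elsewhere
print(list(zip(signal, noise)))             # points of a cluster S/N curve
```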
Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis
NASA Astrophysics Data System (ADS)
Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen
2016-05-01
Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.
Standage, Dominic; You, Hongzhi; Wang, Da-Hui; Dorris, Michael C.
2011-01-01
The speed–accuracy trade-off (SAT) is ubiquitous in decision tasks. While the neural mechanisms underlying decisions are generally well characterized, the application of decision-theoretic methods to the SAT has been difficult to reconcile with experimental data suggesting that decision thresholds are inflexible. Using a network model of a cortical decision circuit, we demonstrate the SAT in a manner consistent with neural and behavioral data and with mathematical models that optimize speed and accuracy with respect to one another. In simulations of a reaction time task, we modulate the gain of the network with a signal encoding the urgency to respond. As the urgency signal builds up, the network progresses through a series of processing stages supporting noise filtering, integration of evidence, amplification of integrated evidence, and choice selection. Analysis of the network's dynamics formally characterizes this progression. Slower buildup of urgency increases accuracy by slowing down the progression. Faster buildup has the opposite effect. Because the network always progresses through the same stages, decision-selective firing rates are stereotyped at decision time. PMID:21415911
Fusing face-verification algorithms and humans.
O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon
2007-10-01
It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans.
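A minimal sketch of score-level fusion with partial least squares regression follows, using scikit-learn's PLSRegression on simulated same/different face-pair scores; the seven algorithm scores and the human ratings are synthetic stand-ins, not FRGC data, and the jackknife generality test is omitted.

```python
# Minimal sketch of fusing similarity scores with PLS regression.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n_pairs = 500
same = rng.integers(0, 2, n_pairs)                              # 1 = same identity
algo_scores = same[:, None] + rng.normal(0, 1.0, (n_pairs, 7))  # 7 algorithms
human_scores = same[:, None] + rng.normal(0, 1.2, (n_pairs, 1)) # 1 human rating
X = np.hstack([algo_scores, human_scores])

pls = PLSRegression(n_components=3)
pls.fit(X, same.astype(float))
fused = pls.predict(X).ravel()
print("fused accuracy:", round(float(((fused > 0.5) == same).mean()), 3))
```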
Shape accuracy optimization for cable-rib tension deployable antenna structure with tensioned cables
NASA Astrophysics Data System (ADS)
Liu, Ruiwei; Guo, Hongwei; Liu, Rongqiang; Wang, Hongxiang; Tang, Dewei; Song, Xiaoke
2017-11-01
Shape accuracy is of substantial importance in deployable structures as the demand for large-scale deployable structures in various fields, especially in aerospace engineering, increases. The main purpose of this paper is to present a shape accuracy optimization method to find the optimal pretensions for the desired shape of a cable-rib tension deployable antenna structure with tensioned cables. First, an analysis model of the deployable structure is established by using the finite element method. In this model, geometrical nonlinearity is considered for the cable element and beam element. Flexible deformations of the deployable structure under the action of the cable network and tensioned cables are subsequently analyzed separately. Moreover, the influence of the pretension of tensioned cables on natural frequencies is studied. Based on the results, a genetic algorithm is used to find a set of reasonable pretensions and thus minimize structural deformation under the first-natural-frequency constraint. Finally, numerical simulations are presented to analyze the deployable structure under two kinds of constraints. Results show that the shape accuracy and natural frequencies of the deployable structure can be effectively improved by pretension optimization.
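The following is a minimal sketch of the pretension search in the spirit described above: a simple genetic algorithm minimizes a deformation measure subject to a first-natural-frequency constraint handled by a penalty. The deformation and frequency functions are toy surrogates standing in for the finite element model, and all bounds and constants are assumptions.

```python
# Minimal sketch: GA pretension optimization with a frequency-constraint penalty.
import numpy as np

rng = np.random.default_rng(3)
n_cables, pop, gens = 6, 40, 60
lo, hi, f_min = 10.0, 200.0, 1.5           # pretension bounds [N], frequency limit [Hz]

def deformation(t):                         # toy surrogate for RMS surface error
    return np.sum((t - 120.0) ** 2, axis=-1) / 1e4

def first_frequency(t):                     # toy surrogate for first natural frequency
    return 0.5 + 0.01 * t.mean(axis=-1)

def fitness(t):
    penalty = 1e3 * np.maximum(f_min - first_frequency(t), 0.0)
    return deformation(t) + penalty

P = rng.uniform(lo, hi, (pop, n_cables))
for _ in range(gens):
    idx = np.argsort(fitness(P))[: pop // 2]   # truncation selection
    parents = P[idx]
    a, b = parents[rng.integers(0, len(parents), (2, pop))]
    w = rng.uniform(size=(pop, n_cables))
    P = np.clip(w * a + (1 - w) * b + rng.normal(0, 2.0, (pop, n_cables)), lo, hi)

best = P[np.argmin(fitness(P))]
print(np.round(best, 1), float(fitness(best[None])[0]))
```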
Bin recycling strategy for improving the histogram precision on GPU
NASA Astrophysics Data System (ADS)
Cárdenas-Montes, Miguel; Rodríguez-Vázquez, Juan José; Vega-Rodríguez, Miguel A.
2016-07-01
A histogram is an easily comprehensible way to present data and analyses. In the current scientific context, with access to large volumes of data, the processing time for building histograms has dramatically increased. For this reason, parallel construction is necessary to alleviate the impact of the processing time on analysis activities. In this scenario, GPU computing is becoming widely used for reducing the processing time of histogram construction to affordable levels. Alongside the growth in processing time, implementations are also stressed with respect to bin-count accuracy. Accuracy aspects arising from the particularities of the implementations are not usually taken into consideration when building histograms with very large data sets. In this work, a bin recycling strategy to create an accuracy-aware implementation for building histograms on GPU is presented. In order to evaluate the approach, this strategy was applied to the computation of the three-point angular correlation function, which is a relevant function in cosmology for the study of the large-scale structure of the Universe. As a consequence of the study, a high-accuracy implementation for histogram construction on GPU is proposed.
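One concrete accuracy pitfall in this setting, assumed here for illustration, is that bin counts accumulated in single precision stop incrementing once they reach 2^24. The NumPy stand-in below mimics a sequential per-event accumulation; it is not the paper's GPU bin-recycling kernel.

```python
# Minimal illustration of single-precision bin-count saturation.
import numpy as np

events = np.ones(17_000_000, dtype=np.float32)        # 17 M entries in one bin
count32 = np.cumsum(events, dtype=np.float32)[-1]      # event-by-event float32 add
count64 = np.cumsum(events, dtype=np.float64)[-1]      # double-precision reference
print(int(count64), float(count32))                    # expected: 17000000 vs 16777216.0
```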
Fischer, S P; Fox, J M; Del Pizzo, W; Friedman, M J; Snyder, S J; Ferkel, R D
1991-01-01
Magnetic resonance images of the knee were made for 1014 patients, and the diagnosis was subsequently confirmed arthroscopically. The accuracy of the diagnoses from the imaging was 89 per cent for the medial meniscus, 88 per cent for the lateral meniscus, 93 per cent for the anterior cruciate ligament, and 99 per cent for the posterior cruciate ligament. The magnetic resonance examinations were done at several centers, and the results varied substantially among centers. The accuracy ranged from 64 to 95 per cent for the medial meniscus, from 83 to 94 per cent for the lateral meniscus, and from 78 to 97 per cent for the anterior cruciate ligament. The results from different magnetic-resonance units were also compared, and the findings suggested increased accuracy for the units that had a stronger magnetic field. Of the menisci for which the magnetic resonance signal was reported to be Grade II (a linear intrameniscal signal not extending to the superior or inferior meniscal surface), 17 per cent were found to be torn at arthroscopy.
Turnlund, Judith R; Keyes, William R
2002-09-01
Stable isotopes are used with increasing frequency to trace the metabolic fate of minerals in human nutrition studies. The precision of the analytical methods used must be sufficient to permit reliable measurement of low enrichments and the accuracy should permit comparisons between studies. Two methods most frequently used today are thermal ionization mass spectrometry (TIMS) and inductively coupled plasma mass spectrometry (ICP-MS). This study was conducted to compare the two methods. Multiple natural samples of copper, zinc, molybdenum, and magnesium were analyzed by both methods to compare their internal and external precision. Samples with a range of isotopic enrichments that were collected from human studies or prepared from standards were analyzed to compare their accuracy. TIMS was more precise and accurate than ICP-MS. However, the cost, ease, and speed of analysis were better for ICP-MS. Therefore, for most purposes, ICP-MS is the method of choice, but when the highest degrees of precision and accuracy are required and when enrichments are very low, TIMS is the method of choice.
Processing of high-precision ceramic balls with a spiral V-groove plate
NASA Astrophysics Data System (ADS)
Feng, Ming; Wu, Yongbo; Yuan, Julong; Ping, Zhao
2017-03-01
As the demand for high-performance bearings gradually increases, ceramic balls with excellent properties, such as high accuracy, high reliability, and high chemical durability, are extensively used for high-performance bearings. In this study, a spiral V-groove plate method is employed in processing high-precision ceramic balls. After the kinematic analysis of the ball-spin angle and enveloped lapping trajectories, an experimental rig is constructed and experiments are conducted to confirm the feasibility of this method. Kinematic analysis results indicate that the method not only allows for the control of the ball-spin angle but also uniformly distributes the enveloped lapping trajectories over the entire ball surface. Experimental results demonstrate that the novel spiral V-groove plate method performs better than the conventional concentric V-groove plate method in terms of roundness, surface roughness, diameter difference, and diameter decrease rate. Ceramic balls with a G3-level accuracy are achieved, and their typical roundness, minimum surface roughness, and diameter difference are 0.05, 0.0045, and 0.105 μm, respectively. These findings confirm that the proposed method can be applied to high-accuracy and high-consistency ceramic ball processing.
Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification
NASA Astrophysics Data System (ADS)
Sharif, I.; Khare, S.
2014-11-01
With the number of channels in the hundreds instead of tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data poses a challenge to current techniques for analyzing the data, and conventional classification methods may not be useful without dimension-reduction pre-processing. Dimension reduction has therefore become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in achieving image classification. Spectral data reduction using wavelet decomposition can be useful because it preserves the distinction among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction yields better class separation and better or comparable classification accuracy. In the context of the dimensionality reduction algorithm, it is found that the classification performance of Daubechies wavelets is better than that of the Haar wavelet, although Daubechies takes more time than Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
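A minimal sketch of the dimension-reduction step follows, assuming the PyWavelets package: each pixel spectrum is replaced by its coarse wavelet approximation coefficients, for both a Haar and a Daubechies ('db4') wavelet. The data, decomposition level, and choice of 'db4' are illustrative assumptions.

```python
# Minimal sketch of wavelet-based spectral dimension reduction per pixel.
import numpy as np
import pywt

rng = np.random.default_rng(4)
pixels = rng.normal(size=(100, 128))             # 100 pixels, 128 spectral bands

def reduce_spectrum(X, wavelet, level=3):
    # Keep only the level-3 approximation coefficients of each pixel spectrum.
    return np.array([pywt.wavedec(p, wavelet, level=level)[0] for p in X])

haar = reduce_spectrum(pixels, "haar")           # step-like basis
db4 = reduce_spectrum(pixels, "db4")             # smoother, captures polynomial trends
print(pixels.shape, haar.shape, db4.shape)       # reduced lengths differ slightly by wavelet
```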
Aboagye-Sarfo, Patrick; Mai, Qun; Sanfilippo, Frank M; Preen, David B; Stewart, Louise M; Fatovich, Daniel M
2015-10-01
To develop multivariate vector-ARMA (VARMA) forecast models for predicting emergency department (ED) demand in Western Australia (WA) and compare them to the benchmark univariate autoregressive moving average (ARMA) and Winters' models. Seven-year monthly WA state-wide public hospital ED presentation data from 2006/07 to 2012/13 were modelled. Graphical and VARMA modelling methods were used for descriptive analysis and model fitting. The VARMA models were compared to the benchmark univariate ARMA and Winters' models to determine their accuracy to predict ED demand. The best models were evaluated by using error correction methods for accuracy. Descriptive analysis of all the dependent variables showed an increasing pattern of ED use with seasonal trends over time. The VARMA models provided a more precise and accurate forecast with smaller confidence intervals and better measures of accuracy in predicting ED demand in WA than the ARMA and Winters' method. VARMA models are a reliable forecasting method to predict ED demand for strategic planning and resource allocation. While the ARMA models are a closely competing alternative, they under-estimated future ED demand. Copyright © 2015 Elsevier Inc. All rights reserved.
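A minimal sketch of the model comparison follows, using statsmodels to fit a bivariate VARMA model and a univariate ARMA benchmark on simulated monthly series with trend and seasonality, then comparing hold-out RMSE. The series, orders, and trend specification are assumptions, not the WA data or the study's models.

```python
# Minimal sketch: VARMA versus ARMA hold-out comparison on simulated monthly counts.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.varmax import VARMAX
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
t = np.arange(84)                                    # 7 years of monthly data
season = 10 * np.sin(2 * np.pi * t / 12)
y = pd.DataFrame({
    "adults": 500 + 0.5 * t + season + rng.normal(0, 5, 84),
    "children": 200 + 0.3 * t + 0.5 * season + rng.normal(0, 4, 84),
})
train, test = y.iloc[:-12], y.iloc[-12:]

varma_fc = VARMAX(train, order=(1, 1), trend="ct").fit(disp=False).forecast(steps=12)
arma_fc = ARIMA(train["adults"], order=(1, 0, 1), trend="ct").fit().forecast(steps=12)

rmse = lambda a, b: float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))
print("VARMA RMSE:", rmse(test["adults"], varma_fc["adults"]))
print("ARMA  RMSE:", rmse(test["adults"], arma_fc))
```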
Upgrade Summer Severe Weather Tool
NASA Technical Reports Server (NTRS)
Watson, Leela
2011-01-01
The goal of this task was to upgrade the existing severe weather database by adding observations from the 2010 warm season, update the verification dataset with results from the 2010 warm season, use statistical logistic regression analysis on the database, and develop a new forecast tool. The AMU analyzed 7 stability parameters that showed the possibility of providing guidance in forecasting severe weather, calculated verification statistics for the Total Threat Score (TTS), and calculated warm season verification statistics for the 2010 season. The AMU also performed statistical logistic regression analysis on the 22-year severe weather database. The results indicated that the logistic regression equation did not show an increase in skill over the previously developed TTS. The equation showed less accuracy than TTS at predicting severe weather, little ability to distinguish between severe and non-severe weather days, and worse standard categorical accuracy measures and skill scores than TTS.
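A minimal sketch of this kind of logistic-regression screening follows: candidate stability parameters predict a severe/non-severe day and skill is estimated by cross-validation. The synthetic predictors, the assumed relationship, and the scoring choice are illustrative; they are not the AMU database or the TTS comparison.

```python
# Minimal sketch: logistic regression on stability parameters for severe-weather days.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_days = 600
X = rng.normal(size=(n_days, 7))                 # 7 candidate stability parameters
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] - 1.0      # assumed (synthetic) relationship
severe = rng.uniform(size=n_days) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, severe, cv=5, scoring="accuracy")
print("cross-validated accuracy:", scores.round(3))
```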
A single camera roentgen stereophotogrammetry method for static displacement analysis.
Gussekloo, S W; Janssen, B A; George Vosselman, M; Bout, R G
2000-06-01
A new method to quantify motion or deformation of bony structures has been developed, since quantification is often difficult due to overlying tissue, and the currently used roentgen stereophotogrammetry method requires significant investment. In our method, a single stationary roentgen source is used, as opposed to the usual two, which, in combination with a fixed radiogram cassette holder, forms a camera with constant interior orientation. By rotating the experimental object, it is possible to achieve a sufficient angle between the various viewing directions, enabling photogrammetric calculations. The photogrammetric procedure was performed on digitised radiograms and involved template matching to increase accuracy. Co-ordinates of spherical markers in the head of a bird (Rhea americana) were calculated with an accuracy of 0.12 mm. When these co-ordinates were used in a deformation analysis, relocations of about 0.5 mm could be accurately determined.
Anzalone, Nicoletta; Castellano, Antonella; Cadioli, Marcello; Conte, Gian Marco; Cuccarini, Valeria; Bizzi, Alberto; Grimaldi, Marco; Costa, Antonella; Grillea, Giovanni; Vitali, Paolo; Aquino, Domenico; Terreni, Maria Rosa; Torri, Valter; Erickson, Bradley J; Caulo, Massimo
2018-06-01
Purpose To evaluate the feasibility of a standardized protocol for acquisition and analysis of dynamic contrast material-enhanced (DCE) and dynamic susceptibility contrast (DSC) magnetic resonance (MR) imaging in a multicenter clinical setting and to verify its accuracy in predicting glioma grade according to the new World Health Organization 2016 classification. Materials and Methods The local research ethics committees of all centers approved the study, and informed consent was obtained from patients. One hundred patients with glioma were prospectively examined at 3.0 T in seven centers that performed the same preoperative MR imaging protocol, including DCE and DSC sequences. Two independent readers identified the perfusion hotspots on maps of volume transfer constant (Ktrans), plasma (vp) and extravascular-extracellular space (ve) volumes, initial area under the concentration curve, and relative cerebral blood volume (rCBV). Differences in parameters between grades and molecular subtypes were assessed by using Kruskal-Wallis and Mann-Whitney U tests. Diagnostic accuracy was evaluated by using receiver operating characteristic curve analysis. Results The whole protocol was tolerated in all patients. Perfusion maps were successfully obtained in 94 patients. An excellent interreader reproducibility of DSC- and DCE-derived measures was found. Among DCE-derived parameters, vp and ve had the highest accuracy (area under the receiver operating characteristic curve [Az] = 0.847 and 0.853) for glioma grading. DSC-derived rCBV had the highest accuracy (Az = 0.894), but the difference was not statistically significant (P > .05). Among lower-grade gliomas, a moderate increase in both vp and rCBV was evident in isocitrate dehydrogenase wild-type tumors, although this was not significant (P > .05). Conclusion A standardized multicenter acquisition and analysis protocol of DCE and DSC MR imaging is feasible and highly reproducible. Both techniques showed a comparable, high diagnostic accuracy for grading gliomas. © RSNA, 2018 Online supplemental material is available for this article.
Koffarnus, Mikhail N; Katz, Jonathan L
2011-02-01
Increased signal-detection accuracy on the 5-choice serial reaction time (5-CSRT) task has been shown with drugs that are useful clinically in treating attention deficit hyperactivity disorder (ADHD), but these increases are often small and/or unreliable. By reducing the reinforcer frequency, it may be possible to increase the sensitivity of this task to pharmacologically induced improvements in accuracy. Rats were trained to respond on the 5-CSRT task on a fixed ratio (FR) 1, FR 3, or FR 10 schedule of reinforcement. Drugs that were and were not expected to enhance performance were then administered before experimental sessions. Significant increases in accuracy of signal detection were not typically obtained under the FR 1 schedule with any drug. However, d-amphetamine, methylphenidate, and nicotine typically increased accuracy under the FR 3 and FR 10 schedules. Increasing the FR requirement in the 5-CSRT task increases the likelihood of a positive result with clinically effective drugs, and may more closely resemble conditions in children with attention deficits.
A comparative analysis of mail and internet surveys
Benjamin D. Poole; David K. Loomis
2010-01-01
The field of survey research is constantly evolving with the introduction of new technologies. Each new mini-revolution brings criticism about the accuracy of the new survey method. The latest development in the survey research field has been increased reliance on Internet surveys. This paper compares data collected through a mixed-mode (mail and Internet) survey of...
The sea state bias in altimeter estimates of sea level from collinear analysis of TOPEX data
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.
1994-01-01
The wind speed and significant wave height (H_1/3) dependencies of the sea state bias in altimeter estimates of sea level, expressed in the form Δh_SSB = bH_1/3, are examined from least squares analysis of 21 cycles of collinear TOPEX data. The bias coefficient b is found to increase in magnitude with increasing wind speed up to about 12 m/s and decrease monotonically in magnitude with increasing H_1/3. A parameterization of b as a quadratic function of wind speed only, as in the formulation used to produce the TOPEX geophysical data records (GDRs), is significantly better than a parameterization purely in terms of H_1/3. However, a four-parameter combined wind speed and wave height formulation for b (quadratic in wind speed plus linear in H_1/3) significantly improves the accuracy of the sea state bias correction. The GDR formulation in terms of wind speed only should therefore be expanded to account for a wave height dependence of b. An attempt to quantify the accuracy of the sea state bias correction Δh_SSB concludes that the uncertainty is a disconcertingly large 1% of H_1/3.
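The four-parameter formulation described above (b quadratic in wind speed plus linear in H_1/3, with Δh_SSB = bH_1/3) can be fitted by ordinary least squares as sketched below on simulated residuals; the coefficients, noise level, and ranges are assumptions, not TOPEX values.

```python
# Minimal sketch: least-squares fit of a combined wind-speed/wave-height bias model.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
U = rng.uniform(2, 18, n)                       # wind speed [m/s]
H13 = rng.uniform(0.5, 8, n)                    # significant wave height [m]
a_true = np.array([-0.05, 0.004, -0.0002, 0.003])
b = a_true[0] + a_true[1] * U + a_true[2] * U**2 + a_true[3] * H13
dh = b * H13 + rng.normal(0, 0.02, n)           # simulated sea level residuals [m]

A = np.column_stack([H13, U * H13, U**2 * H13, H13**2])   # dh = A @ a
a_hat, *_ = np.linalg.lstsq(A, dh, rcond=None)
print(np.round(a_hat, 5))                        # recovered coefficients a0..a3
```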
Accuracy Analysis of a Wireless Indoor Positioning System Using Geodetic Methods
NASA Astrophysics Data System (ADS)
Wagner, Przemysław; Woźniak, Marek; Odziemczyk, Waldemar; Pakuła, Dariusz
2017-12-01
Ubisense RTLS is one of the indoor positioning systems using ultra-wideband signals. AOA and TDOA methods are used as the principles of positioning. The accuracy of positioning depends primarily on the accuracy of the determined angles and distance differences. The paper presents the results of accuracy research, which includes a theoretical accuracy prediction and a practical test. Theoretical accuracy was calculated for two variants of system component geometry, assuming the parameters declared by the system manufacturer. Total station measurements were taken as a reference during the practical test. The results of the analysis are presented in graphical form. A sample implementation (MagMaster) developed by Globema is presented in the final part of the paper.
NASA Astrophysics Data System (ADS)
Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.
2014-11-01
This paper assesses the suitability of 8-band Worldview-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based classification with minimum distance (MD) and maximum likelihood (MLC) classifiers and object-based classification with the Random Forest (RF) algorithm for this task. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle stage, and early stage of avocado crops, bare land, two types of natural forests, and water body. To examine the contribution of the four new spectral bands of the WV2 sensor, all the tested classifications were carried out with and without the four new spectral bands. Classification accuracy assessment results show that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF with vs without: 93.06% vs 83.59%; pixel-based MD: 69.37% vs 67.2%; pixel-based MLC: 64.03% vs 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.
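A minimal sketch of the with/without-band comparison in spirit follows: a random forest is trained on 8-band object features and again on only the 4 classic bands, and the two hold-out accuracies are compared. Features and labels are simulated stand-ins for the WV2 segments, and the numbers produced have no relation to the study's accuracies.

```python
# Minimal sketch: random forest accuracy with 8 bands versus 4 classic bands.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n_objects, n_classes = 800, 7
labels = rng.integers(0, n_classes, n_objects)
bands8 = rng.normal(size=(n_objects, 8)) + labels[:, None] * 0.3   # 8 simulated bands

Xtr, Xte, ytr, yte = train_test_split(bands8, labels, random_state=0)
for name, cols in [("8 bands", slice(0, 8)), ("4 classic bands", slice(0, 4))]:
    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    rf.fit(Xtr[:, cols], ytr)
    print(name, round(accuracy_score(yte, rf.predict(Xte[:, cols])), 3))
```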
Stenz, Ulrich; Neumann, Ingo
2017-01-01
Terrestrial laser scanning (TLS) is an efficient solution to collect large-scale data. The efficiency can be increased by combining TLS with additional sensors in a TLS-based multi-sensor-system (MSS). The uncertainty of scanned points is not homogeneous and depends on many different influencing factors. These include the sensor properties, referencing, scan geometry (e.g., distance and angle of incidence), environmental conditions (e.g., atmospheric conditions) and the scanned object (e.g., material, color, reflectance, etc.). The paper presents methods, infrastructure and results for the validation of the suitability of TLS and TLS-based MSS. Main aspects are the backward modelling of the uncertainty on the basis of reference data (e.g., point clouds) with superordinate accuracy and the provision of a suitable environment/infrastructure (e.g., the calibration process of the targets for the registration of laser scanner and laser tracker data in a common coordinate system with high accuracy). In this context, superordinate accuracy means that the accuracy of the acquired reference data is better by a factor of 10 than the data of the validated TLS and TLS-based MSS. These aspects play an important role in engineering geodesy, where the aimed-for accuracy lies in the range of a few mm or less. PMID:28812998
Accuracy Analysis and Parameters Optimization in Urban Flood Simulation by PEST Model
NASA Astrophysics Data System (ADS)
Keum, H.; Han, K.; Kim, H.; Ha, C.
2017-12-01
The risk of urban flooding has been increasing due to heavy rainfall, flash flooding and rapid urbanization. Rainwater pumping stations and underground reservoirs are used to actively take measures against flooding; however, flood damage in lowlands continues to occur. Inundation in urban areas results in the overflow of sewers. Therefore, for accurate two-dimensional flood analysis it is important to represent the sewer network that is intricately entangled within a city, similar to the actual physical situation, together with accurate terrain that captures the effects of buildings and roads. The purpose of this study is to propose an optimal scenario construction procedure for watershed partitioning and parameterization for urban runoff analysis and pipe network analysis, and to increase the accuracy of flooded-area prediction through a coupled model. The optimal scenario procedure was verified by applying it to an actual drainage basin in Seoul. In this study, optimization was performed using four parameters: Manning's roughness coefficient for conduits, watershed width, Manning's roughness coefficient for impervious areas, and Manning's roughness coefficient for pervious areas. The calibration range of the parameters was determined using the SWMM manual and the ranges used in previous studies, and the parameters were estimated using the automatic calibration tool PEST. The correlation coefficient was high for the scenarios using PEST. The RPE and RMSE also showed high accuracy for the scenarios using PEST. In the case of RPE, the error was in the range of 13.9-28.9% in the scenarios without parameter estimation, but in the scenarios using PEST the error range was reduced to 6.8-25.7%. Based on the results of this study, it can be concluded that more accurate flood analysis is possible when the optimum scenario is selected by determining the appropriate reference conduit for future urban flooding analysis and the results are applied to various rainfall event scenarios and parameter optimization. Keywords: parameter optimization; PEST model; urban area. Acknowledgement: This research was supported by a grant (17AWMP-B079625-04) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
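The accuracy measures used to rank the calibration scenarios can be computed as sketched below, assuming observed and simulated series for an event; RPE is taken here as a relative peak error, which is an interpretation rather than a definition given in the abstract, and the data are simulated.

```python
# Minimal sketch of scenario accuracy metrics (correlation, RMSE, relative peak error).
import numpy as np

def scenario_metrics(obs, sim):
    r = float(np.corrcoef(obs, sim)[0, 1])
    rmse = float(np.sqrt(np.mean((obs - sim) ** 2)))
    rpe = float(abs(obs.max() - sim.max()) / obs.max() * 100)   # assumed definition, %
    return r, rmse, rpe

rng = np.random.default_rng(9)
observed = np.abs(rng.normal(1.0, 0.4, 48))             # e.g. observed depths [m]
simulated = observed * 0.9 + rng.normal(0, 0.05, 48)    # simulated counterpart
print("r = %.3f, RMSE = %.3f, RPE = %.1f%%" % scenario_metrics(observed, simulated))
```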
Baltzer, Pascal Andreas Thomas; Renz, Diane M; Kullnig, Petra E; Gajda, Mieczyslaw; Camara, Oumar; Kaiser, Werner A
2009-04-01
The identification of the most suspect enhancing part of a lesion is regarded as a major diagnostic criterion in dynamic magnetic resonance mammography. Computer-aided diagnosis (CAD) software allows the semi-automatic analysis of the kinetic characteristics of complete enhancing lesions, providing additional information about lesion vasculature. The diagnostic value of this information has not yet been quantified. Consecutive patients from routine diagnostic studies (1.5 T, 0.1 mmol gadopentetate dimeglumine, dynamic gradient-echo sequences at 1-minute intervals) were analyzed prospectively using CAD. Dynamic sequences were processed and reduced to a parametric map. Curve types were classified by initial signal increase (not significant, intermediate, and strong) and the delayed time course of signal intensity (continuous, plateau, and washout). Lesion enhancement was measured using CAD. The most suspect curve, the curve-type distribution percentage, and combined dynamic data were compared. Statistical analysis included logistic regression analysis and receiver-operating characteristic analysis. Fifty-one patients with 46 malignant and 44 benign lesions were enrolled. On receiver-operating characteristic analysis, the most suspect curve showed diagnostic accuracy of 76.7 +/- 5%. In comparison, the curve-type distribution percentage demonstrated accuracy of 80.2 +/- 4.9%. Combined dynamic data had the highest diagnostic accuracy (84.3 +/- 4.2%). These differences did not achieve statistical significance. With appropriate cutoff values, sensitivity and specificity, respectively, were found to be 80.4% and 72.7% for the most suspect curve, 76.1% and 83.6% for the curve-type distribution percentage, and 78.3% and 84.5% for both parameters. The integration of whole-lesion dynamic data tends to improve specificity. However, no statistical significance backs up this finding.
Fu, Yong-Bi
2014-01-01
Genotyping by sequencing (GBS) has recently emerged as a promising genomic approach for assessing genetic diversity on a genome-wide scale. However, concerns remain about the uniquely large imbalance in GBS genotype data. Although genotype imputation has been proposed to infer missing observations, little is known about the reliability of a genetic diversity analysis of GBS data with up to 90% of observations missing. Here we performed an empirical assessment of accuracy in genetic diversity analysis of highly incomplete single nucleotide polymorphism genotypes with imputations. Three large single-nucleotide polymorphism genotype data sets for corn, wheat, and rice were acquired, and missing data with up to 90% of missing observations were randomly generated and then imputed for missing genotypes with three map-independent imputation methods. Estimating heterozygosity and inbreeding coefficient from original, missing, and imputed data revealed variable patterns of bias from the assessed levels of missingness and genotype imputation, but the estimation biases were smaller for missing data without genotype imputation. The estimates of genetic differentiation were rather robust up to 90% of missing observations but became substantially biased when missing genotypes were imputed. The estimates of topology accuracy for four representative samples of the groups of interest generally were reduced with increased levels of missing genotypes. Probabilistic principal component analysis-based imputation performed better in terms of topology accuracy than analyses of missing data without genotype imputation. These findings are not only significant for understanding the reliability of genetic diversity analysis with respect to large missing data and genotype imputation but also are instructive for performing a proper genetic diversity analysis of highly incomplete GBS or other genotype data. PMID:24626289
Dynamic CT myocardial perfusion imaging: performance of 3D semi-automated evaluation software.
Ebersberger, Ullrich; Marcus, Roy P; Schoepf, U Joseph; Lo, Gladys G; Wang, Yining; Blanke, Philipp; Geyer, Lucas L; Gray, J Cranston; McQuiston, Andrew D; Cho, Young Jun; Scheuering, Michael; Canstein, Christian; Nikolaou, Konstantin; Hoffmann, Ellen; Bamberg, Fabian
2014-01-01
To evaluate the performance of three-dimensional semi-automated evaluation software for the assessment of myocardial blood flow (MBF) and blood volume (MBV) at dynamic myocardial perfusion computed tomography (CT). Volume-based software relying on marginal space learning and probabilistic boosting tree-based contour fitting was applied to CT myocardial perfusion imaging data of 37 subjects. In addition, all image data were analysed manually and both approaches were compared with SPECT findings. Study endpoints included time of analysis and conventional measures of diagnostic accuracy. Of 592 analysable segments, 42 showed perfusion defects on SPECT. Average analysis times for the manual and software-based approaches were 49.1 ± 11.2 and 16.5 ± 3.7 min, respectively (P < 0.01). There was strong agreement between the two approaches for both measures of interest (MBF, ICC = 0.91, and MBV, ICC = 0.88, both P < 0.01) and no significant difference in diagnostic accuracy between the manual and software-based approaches for either MBF or MBV (all comparisons P > 0.05). Three-dimensional semi-automated evaluation of dynamic myocardial perfusion CT data provides similar measures and diagnostic accuracy to manual evaluation, albeit with substantially reduced analysis times. This capability may aid the integration of this test into clinical workflows. • Myocardial perfusion CT is attractive for comprehensive coronary heart disease assessment. • Traditional image analysis methods are cumbersome and time-consuming. • Automated 3D perfusion software shortens analysis times. • Automated 3D perfusion software increases standardisation of myocardial perfusion CT. • Automated, standardised analysis fosters myocardial perfusion CT integration into clinical practice.
Efficient alignment-free DNA barcode analytics.
Kuksa, Pavel; Pavlovic, Vladimir
2009-11-10
In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility for accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
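The fixed-length spectrum representation can be illustrated with a small k-mer counting sketch: each barcode sequence becomes a vector of k-mer counts that any distance measure or classifier can consume. The sequences, k = 3, and the cosine similarity are illustrative choices, not the paper's exact features or kernels.

```python
# Minimal sketch: k-mer spectrum vectors for barcode sequences.
from itertools import product
import numpy as np

def kmer_spectrum(seq, k=3):
    index = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:                 # skips ambiguous bases such as N
            v[index[kmer]] += 1
    return v

a = kmer_spectrum("ACGTTGCAACGTACGTTGCA")
b = kmer_spectrum("ACGTTGCAACGTACGTTGCC")
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(len(a), round(cosine, 3))           # 64-dimensional spectra and their similarity
```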
[Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].
Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong
2015-11-01
With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm with and without the fraction images from the spectral mixture analyses was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, more accurately estimated the fractions of LULC types than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density to 81.5% from 74.1%, and decreased the RMSE to 5.18 from 7.26; and 3) the mean forest carbon density for the province was 30.06 t · hm(-2), ranging from 0.00 to 67.35 t · hm(-2). This implies that spectral mixture analysis provides great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
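Constrained linear spectral unmixing of a single pixel can be sketched as below, using the common trick of appending a heavily weighted sum-to-one row before a non-negative least squares solve; the endmember spectra and mixed pixel are illustrative, not MODIS reflectances or the study's endmembers.

```python
# Minimal sketch: fully constrained (non-negative, sum-to-one) linear unmixing.
import numpy as np
from scipy.optimize import nnls

M = np.array([                           # 4 bands x 3 endmember spectra (illustrative)
    [0.05, 0.10, 0.02],
    [0.08, 0.20, 0.03],
    [0.45, 0.35, 0.04],
    [0.30, 0.40, 0.02],
])
true_f = np.array([0.6, 0.3, 0.1])
pixel = M @ true_f + 0.002 * np.random.default_rng(10).normal(size=4)

w = 1e3                                   # weight enforcing the sum-to-one constraint
A = np.vstack([M, w * np.ones((1, 3))])
b = np.append(pixel, w)
fractions, _ = nnls(A, b)
print(np.round(fractions, 3), round(float(fractions.sum()), 3))
```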
Baker, Erich J; Walter, Nicole A R; Salo, Alex; Rivas Perea, Pablo; Moore, Sharon; Gonzales, Steven; Grant, Kathleen A
2017-03-01
The Monkey Alcohol Tissue Research Resource (MATRR) is a repository and analytics platform for detailed data derived from well-documented nonhuman primate (NHP) alcohol self-administration studies. This macaque model has demonstrated categorical drinking norms reflective of human drinking populations, resulting in consumption pattern classifications of very heavy drinking (VHD), heavy drinking (HD), binge drinking (BD), and low drinking (LD) individuals. Here, we expand on previous findings that suggest ethanol drinking patterns during initial drinking to intoxication can reliably predict future drinking category assignment. The classification strategy uses a machine-learning approach to examine an extensive set of daily drinking attributes during 90 sessions of induction across 7 cohorts of 5 to 8 monkeys for a total of 50 animals. A Random Forest classifier is employed to accurately predict categorical drinking after 12 months of self-administration. Predictive outcome accuracy is approximately 78% when classes are aggregated into 2 groups, "LD and BD" and "HD and VHD." A subsequent 2-step classification model distinguishes individual LD and BD categories with 90% accuracy and between HD and VHD categories with 95% accuracy. Average 4-category classification accuracy is 74%, and provides putative distinguishing behavioral characteristics between groupings. We demonstrate that data derived from the induction phase of this ethanol self-administration protocol have significant predictive power for future ethanol consumption patterns. Importantly, numerous predictive factors are longitudinal, measuring the change of drinking patterns through 3 stages of induction. Factors during induction that predict future heavy drinkers include being younger at the time of first intoxication and developing a shorter latency to first ethanol drink. Overall, this analysis identifies predictive characteristics in future very heavy drinkers that optimize intoxication, such as having increasingly fewer bouts with more drinks. This analysis also identifies characteristic avoidance of intoxicating topographies in future low drinkers, such as increasing number of bouts and waiting longer before the first ethanol drink. Copyright © 2017 The Authors Alcoholism: Clinical & Experimental Research published by Wiley Periodicals, Inc. on behalf of Research Society on Alcoholism.
Bernecker, Samantha L; Rosellini, Anthony J; Nock, Matthew K; Chiu, Wai Tat; Gutierrez, Peter M; Hwang, Irving; Joiner, Thomas E; Naifeh, James A; Sampson, Nancy A; Zaslavsky, Alan M; Stein, Murray B; Ursano, Robert J; Kessler, Ronald C
2018-04-03
High rates of mental disorders, suicidality, and interpersonal violence early in the military career have raised interest in implementing preventive interventions with high-risk new enlistees. The Army Study to Assess Risk and Resilience in Servicemembers (STARRS) developed risk-targeting systems for these outcomes based on machine learning methods using administrative data predictors. However, administrative data omit many risk factors, raising the question whether risk targeting could be improved by adding self-report survey data to prediction models. If so, the Army may gain from routinely administering surveys that assess additional risk factors. The STARRS New Soldier Survey was administered to 21,790 Regular Army soldiers who agreed to have survey data linked to administrative records. As reported previously, machine learning models using administrative data as predictors found that small proportions of high-risk soldiers accounted for high proportions of negative outcomes. Other machine learning models using self-report survey data as predictors were developed previously for three of these outcomes: major physical violence and sexual violence perpetration among men and sexual violence victimization among women. Here we examined the extent to which this survey information increases prediction accuracy, over models based solely on administrative data, for those three outcomes. We used discrete-time survival analysis to estimate a series of models predicting first occurrence, assessing how model fit improved and concentration of risk increased when adding the predicted risk score based on survey data to the predicted risk score based on administrative data. The addition of survey data improved prediction significantly for all outcomes. In the most extreme case, the percentage of reported sexual violence victimization among the 5% of female soldiers with highest predicted risk increased from 17.5% using only administrative predictors to 29.4% adding survey predictors, a 67.9% proportional increase in prediction accuracy. Other proportional increases in concentration of risk ranged from 4.8% to 49.5% (median = 26.0%). Data from an ongoing New Soldier Survey could substantially improve accuracy of risk models compared to models based exclusively on administrative predictors. Depending upon the characteristics of interventions used, the increase in targeting accuracy from survey data might offset survey administration costs.
Tickner, James; Ganly, Brianna; Lovric, Bojan; O'Dwyer, Joel
2017-04-01
Mining companies rely on chemical analysis methods to determine concentrations of gold in mineral ore samples. As gold is often mined commercially at concentrations around 1 part-per-million, it is necessary for any analysis method to provide good sensitivity as well as high absolute accuracy. We describe work to improve both the sensitivity and accuracy of the gamma activation analysis (GAA) method for gold. We present analysis results for several suites of ore samples and discuss the design of a GAA facility designed to replace conventional chemical assay in industrial applications. Copyright © 2017. Published by Elsevier Ltd.
Analysis of remote sensing data for evaluating vegetation resources
NASA Technical Reports Server (NTRS)
1971-01-01
Increased utilization studies for current remote sensor and analysis capabilities included: (1) a review of testing procedures for quantifying the accuracy of photointerpretation; (2) field tests of a fully portable spectral data gathering system, both on the ground and from a helicopter; and (3) a comparison of three methods for obtaining ground information necessary for regional agricultural inventories. A version of the LARS point-by-point classification system was upgraded by the addition of routines to analyze spatial data information.
ERIC Educational Resources Information Center
Murayama, Kou; Sakaki, Michiko; Yan, Veronica X.; Smith, Garry M.
2014-01-01
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are…
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have studied the accuracy of parallel mechanisms, but further effort is required to control errors and improve accuracy at the design and manufacturing stage. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources that affect accuracy at the end-effector are separated. Sensitivity analysis is then performed on the uncompensatable error sources: a probabilistic sensitivity model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the end-effector accuracy of the mechanism. The results show that orientation error sources have the larger effect on end-effector accuracy. Based on the sensitivity analysis results, the tolerance design is formulated as a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. Using a genetic algorithm, the tolerance allocated to each component is determined, yielding tolerance ranges for ten kinds of geometric error sources. These results provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy.
Krause, Jean C; Tessler, Morgan P
2016-10-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
An analysis and demonstration of clock synchronization by VLBI
NASA Technical Reports Server (NTRS)
Hurd, W. J.
1972-01-01
A prototype of a semireal-time system for synchronizing the DSN station clocks by radio interferometry was successfully demonstrated. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time synchronization estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 nsec rms were achieved between DSS 11 and DSS 12, both at Goldstone, California. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to position uncertainties of baseline and source and atmospheric effects are reached. These limitations are under ten nsec for transcontinental baselines.
Navigator Accuracy Requirements for Prospective Motion Correction
Maclaren, Julian; Speck, Oliver; Stucht, Daniel; Schulze, Peter; Hennig, Jürgen; Zaitsev, Maxim
2010-01-01
Prospective motion correction in MR imaging is becoming increasingly popular to prevent the image artefacts that result from subject motion. Navigator information is used to update the position of the imaging volume before every spin excitation so that lines of acquired k-space data are consistent. Errors in the navigator information, however, result in residual errors in each k-space line. This paper presents an analysis linking noise in the tracking system to the power of the resulting image artefacts. An expression is formulated for the required navigator accuracy based on the properties of the imaged object and the desired resolution. Analytical results are compared with computer simulations and experimental data. PMID:19918892
Predicting grain yield using canopy hyperspectral reflectance in wheat breeding data.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; de Los Campos, Gustavo; Alvarado, Gregorio; Suchismita, Mondal; Rutkoski, Jessica; González-Pérez, Lorena; Burgueño, Juan
2017-01-01
Modern agriculture uses hyperspectral cameras to obtain hundreds of reflectance data measured at discrete narrow bands to cover the whole visible light spectrum and part of the infrared and ultraviolet light spectra, depending on the camera. This information is used to construct vegetation indices (VI) (e.g., green normalized difference vegetation index or GNDVI, simple ratio or SRa, etc.) which are used for the prediction of primary traits (e.g., biomass). However, these indices only use some bands and are cultivar-specific; therefore they lose considerable information and are not robust for all cultivars. This study proposes models that use all available bands as predictors to increase prediction accuracy; we compared these approaches with eight conventional vegetation indices (VIs) constructed using only some bands. The data set we used comes from CIMMYT's global wheat program and comprises 1170 genotypes evaluated for grain yield (ton/ha) in five environments (Drought, Irrigated, EarlyHeat, Melgas and Reduced Irrigated); the reflectance data were measured in 250 discrete narrow bands ranging between 392 and 851 nm. The proposed models for the simultaneous analysis of all the bands were ordinary least squares (OLS), Bayes B, principal components with Bayes B, functional B-spline, functional Fourier and functional partial least squares. The results of these models were compared with those of OLS models that used each of the eight VIs, individually and combined, as predictors. We found that using all bands simultaneously increased prediction accuracy more than using the VIs alone. The Splines and Fourier models had the best prediction accuracy for each of the nine time-points under study. Combining image data collected at different time-points led to a small increase in prediction accuracy relative to models that use data from a single time-point. Also, in the Drought environment, using only bands with heritabilities larger than 0.5 as predictor variables improved prediction accuracy.
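An illustrative sketch, not the authors' exact pipeline: predicting yield from all reflectance bands with partial least squares versus a single two-band index. The band matrix, yields, and band positions are synthetic placeholders.

```python
# All-bands PLS regression versus a single vegetation index (synthetic data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_lines, n_bands = 300, 250
bands = rng.normal(size=(n_lines, n_bands))                  # reflectance at 250 narrow bands
yield_t = bands[:, ::10].sum(axis=1) * 0.05 + rng.normal(scale=0.5, size=n_lines)

# A GNDVI-like index built from two bands only (band positions chosen arbitrarily here).
nir, green = bands[:, 200], bands[:, 60]
gndvi = ((nir - green) / (np.abs(nir) + np.abs(green) + 1e-9)).reshape(-1, 1)

r2_vi = cross_val_score(LinearRegression(), gndvi, yield_t, cv=5).mean()
r2_all = cross_val_score(PLSRegression(n_components=20), bands, yield_t, cv=5).mean()
print(f"single index R^2: {r2_vi:.2f}   all bands (PLS) R^2: {r2_all:.2f}")
```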
Fan, Yong; Du, Jin Peng; Liu, Ji Jun; Zhang, Jia Nan; Qiao, Huan Huan; Liu, Shi Chang; Hao, Ding Jun
2018-06-01
A miniature spine-mounted robot has recently been introduced to further improve the accuracy of pedicle screw placement in spine surgery. However, the differences in accuracy between the robotic-assisted (RA) technique and the free-hand, fluoroscopy-guided (FH) method for pedicle screw placement are controversial. A meta-analysis was conducted to focus on this problem. Randomized controlled trials (RCTs) and cohort studies comparing RA and FH and published before January 2017 were searched for using the Cochrane Library, Ovid, Web of Science, PubMed, and EMBASE databases. A total of 55 papers were selected. After the full-text assessment, 45 clinical trials were excluded. The final meta-analysis included 10 articles. The accuracy of pedicle screw placement in the RA group was significantly greater than in the FH group ("perfect accuracy" odds ratio, 95% confidence interval: 1.38-2.07, P < .01; "clinically acceptable" odds ratio, 95% confidence interval: 1.17-2.08, P < .01). There are significant differences in accuracy between RA surgery and FH surgery. It was demonstrated that the RA technique is superior to the conventional method in terms of the accuracy of pedicle screw placement.
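A rough sketch of inverse-variance (fixed-effect) pooling of per-study odds ratios, the kind of summary reported above. The 2x2 counts below are invented for illustration and are not the counts from the included trials.

```python
# Fixed-effect pooling of log odds ratios across studies (invented counts).
import numpy as np

# (accurate_RA, total_RA, accurate_FH, total_FH) per hypothetical study
studies = [(190, 200, 170, 200), (95, 100, 88, 100), (480, 500, 450, 500)]

log_or, weights = [], []
for a_ra, n_ra, a_fh, n_fh in studies:
    tbl = np.array([[a_ra, n_ra - a_ra], [a_fh, n_fh - a_fh]], dtype=float) + 0.5  # continuity correction
    log_or.append(np.log(tbl[0, 0] * tbl[1, 1] / (tbl[0, 1] * tbl[1, 0])))
    weights.append(1.0 / (1.0 / tbl).sum())                  # inverse of the log-OR variance

log_or, weights = np.array(log_or), np.array(weights)
pooled = (weights * log_or).sum() / weights.sum()
se = np.sqrt(1.0 / weights.sum())
print(f"pooled OR = {np.exp(pooled):.2f}, "
      f"95% CI {np.exp(pooled - 1.96 * se):.2f}-{np.exp(pooled + 1.96 * se):.2f}")
```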
Mahshid, Minoo; Saboury, Aboulfazl; Fayaz, Ali; Sadr, Seyed Jalil; Lampert, Friedrich; Mir, Maziar
2012-01-01
Background Mechanical torque devices (MTDs) are among the most commonly recommended devices for delivering optimal torque to the screw of dental implants. Recently, high variability has been reported in the accuracy of spring-style mechanical torque devices (S-S MTDs). The joint stability and survival rate of fixed implant-supported prostheses depend on the accuracy of these devices. Currently, there is limited information on the influence of steam sterilization on the accuracy of MTDs. The purpose of this study was to assess the effect of steam sterilization on the accuracy (±10% of the target torque) of spring-style mechanical torque devices for dental implants. Materials and methods Fifteen new S-S MTDs and their appropriate drivers from three different manufacturers (Nobel Biocare, Straumann [ITI], and Biomet 3i [3i]) were selected. Peak torque of devices (5 in each subgroup) was measured before and after autoclaving using a Tohnichi torque gauge. Descriptive statistical analysis was used, and a repeated-measures ANOVA with type of device as a between-subject factor was performed to assess the difference in accuracy among the three groups of spring-style mechanical torque devices after sterilization. A Bonferroni post hoc test was used to assess pairwise comparisons. Results Before steam sterilization, all the tested devices stayed within 10% of their target values. After 100 sterilization cycles, results did not show any significant difference between raw and absolute error values in the Nobel Biocare and ITI devices; however, the results demonstrated an increase in error values in the 3i group (P < 0.05). Raw error values increased with a predictable pattern in 3i devices and showed more than a 10% difference from target torque values (a maximum difference of 14% from target torque was seen in 17% of peak torque measurements). Conclusion Within the limitations of this study, steam sterilization did not affect the accuracy (±10% of the target torque) of the Nobel Biocare and ITI MTDs. Raw error values increased with a predictable pattern in 3i devices and showed more than a 10% difference from target torque values. Before expanding upon the clinical implications, the controlled and combined effect of aging (frequency of use) and steam sterilization needs more investigation.
Li, Der-Chiang; Liu, Chiao-Wen; Hu, Susan C
2011-05-01
Medical data sets are usually small and have very high dimensionality. Too many attributes will make the analysis less efficient and will not necessarily increase accuracy, while too few data will decrease the modeling stability. Consequently, the main objective of this study is to extract the optimal subset of features to increase analytical performance when the data set is small. This paper proposes a fuzzy-based non-linear transformation method to extend classification related information from the original data attribute values for a small data set. Based on the new transformed data set, this study applies principal component analysis (PCA) to extract the optimal subset of features. Finally, we use the transformed data with these optimal features as the input data for a learning tool, a support vector machine (SVM). Six medical data sets: Pima Indians' diabetes, Wisconsin diagnostic breast cancer, Parkinson disease, echocardiogram, BUPA liver disorders dataset, and bladder cancer cases in Taiwan, are employed to illustrate the approach presented in this paper. This research uses the t-test to evaluate the classification accuracy for a single data set; and uses the Friedman test to show the proposed method is better than other methods over the multiple data sets. The experiment results indicate that the proposed method has better classification performance than either PCA or kernel principal component analysis (KPCA) when the data set is small, and suggest creating new purpose-related information to improve the analysis performance. This paper has shown that feature extraction is important as a function of feature selection for efficient data analysis. When the data set is small, using the fuzzy-based transformation method presented in this work to increase the information available produces better results than the PCA and KPCA approaches. Copyright © 2011 Elsevier B.V. All rights reserved.
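A simplified sketch of the downstream pipeline described above (PCA feature extraction feeding an SVM); the fuzzy-based data-expansion step itself is omitted. A synthetic data set stands in for the six medical data sets used in the paper.

```python
# PCA + SVM pipeline for a small, high-dimensional data set (synthetic stand-in).
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=80, n_features=30, n_informative=6, random_state=3)

baseline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
with_pca = make_pipeline(StandardScaler(), PCA(n_components=6), SVC(kernel="rbf"))

print("SVM alone    :", cross_val_score(baseline, X, y, cv=5).mean())
print("PCA(6) + SVM :", cross_val_score(with_pca, X, y, cv=5).mean())
```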
Spatial modeling and classification of corneal shape.
Marsolo, Keith; Twa, Michael; Bullimore, Mark A; Parthasarathy, Srinivasan
2007-03-01
One of the most promising applications of data mining is in biomedical data used in patient diagnosis. Any method of data analysis intended to support the clinical decision-making process should meet several criteria: it should capture clinically relevant features, be computationally feasible, and provide easily interpretable results. In an initial study, we examined the feasibility of using Zernike polynomials to represent biomedical instrument data in conjunction with a decision tree classifier to distinguish between the diseased and non-diseased eyes. Here, we provide a comprehensive follow-up to that work, examining a second representation, pseudo-Zernike polynomials, to determine whether they provide any increase in classification accuracy. We compare the fidelity of both methods using residual root-mean-square (rms) error and evaluate accuracy using several classifiers: neural networks, C4.5 decision trees, Voting Feature Intervals, and Naïve Bayes. We also examine the effect of several meta-learning strategies: boosting, bagging, and Random Forests (RFs). We present results comparing accuracy as it relates to dataset and transformation resolution over a larger, more challenging, multi-class dataset. They show that classification accuracy is similar for both data transformations, but differs by classifier. We find that the Zernike polynomials provide better feature representation than the pseudo-Zernikes and that the decision trees yield the best balance of classification accuracy and interpretability.
Performance Analysis of Ranging Techniques for the KPLO Mission
NASA Astrophysics Data System (ADS)
Park, Sungjoon; Moon, Sangman
2018-03-01
In this study, the performance of ranging techniques for the Korea Pathfinder Lunar Orbiter (KPLO) space communication system is investigated. KPLO is the first lunar mission of Korea, and pseudo-noise (PN) ranging will be used to support the mission along with sequential ranging. We compared the performance of both ranging techniques using the criteria of accuracy, acquisition probability, and measurement time. First, we investigated the end-to-end accuracy error of a ranging technique incorporating all sources of errors such as from ground stations and the spacecraft communication system. This study demonstrates that increasing the clock frequency of the ranging system is not required when the dominant factor of accuracy error is independent of the thermal noise of the ranging technique being used in the system. Based on the understanding of ranging accuracy, the measurement time of PN and sequential ranging are further investigated and compared, while both techniques satisfied the accuracy and acquisition requirements. We demonstrated that PN ranging performed better than sequential ranging in the signal-to-noise ratio (SNR) regime where KPLO will be operating, and we found that the T2B (weighted-voting balanced Tausworthe, voting v = 2) code is the best choice among the PN codes available for the KPLO mission.
Accuracy Analysis of a Low-Cost Platform for Positioning and Navigation
NASA Astrophysics Data System (ADS)
Hofmann, S.; Kuntzsch, C.; Schulze, M. J.; Eggert, D.; Sester, M.
2012-07-01
This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation, intended for research and teaching purposes. The proposed platform includes a LEGO MINDSTORMS NXT 2.0 kit, an Android-based smartphone, and a compact Hokuyo URG-04LX laser scanner. The robot is used in a small indoor environment, where GNSS is not available. Therefore, a landmark map was produced in advance, with the landmark positions provided to the robot. All steps of the procedure to set up the platform are shown. The main focus of this paper is the reachable positioning accuracy, which was analyzed in this type of scenario depending on the accuracy of the reference landmarks and the directional and distance measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner's characteristics.
Samanci, Yavuz; Karagöz, Yeşim; Yaman, Mehmet; Atçı, İbrahim Burak; Emre, Ufuk; Kılıçkesmez, Nuri Özgür; Çelik, Suat Erol
2016-11-01
To determine the accuracy of median nerve T2 evaluation and its relation with Boston Questionnaire (BQ) and nerve conduction studies (NCSs) in pre-operative and post-operative carpal tunnel syndrome (CTS) patients in comparison with healthy volunteers. Twenty-three CTS patients and 24 healthy volunteers underwent NCSs, median nerve T2 evaluation and self-administered BQ. Pre-operative and 1st year post-operative median nerve T2 values and cross-sectional areas (CSAs) were compared both within pre-operative and post-operative CTS groups, and with healthy volunteers. The relationship between MRI findings and BQ and NCSs was analyzed. The ROC curve analysis was used for determining the accuracy. The comparison of pre-operative and post-operative T2 values and CSAs revealed statistically significant improvements in the post-operative patient group (p<0.001 for all parameters). There were positive correlations between T2 values at all levels and BQ values, and positive and negative correlations were also found regarding T2 values and NCS findings in CTS patients. The receiver operating characteristic curve analysis for defined cut-off levels of median nerve T2 values in hands with severe CTS yielded excellent accuracy at all levels. However, this accuracy could not be demonstrated in hands with mild CTS. This study is the first to analyze T2 values in both pre-operative and post-operative CTS patients. The presence of increased T2 values in CTS patients compared to controls and excellent accuracy in hands with severe CTS indicates T2 signal changes related to CTS pathophysiology and possible utilization of T2 signal evaluation in hands with severe CTS. Copyright © 2016 Elsevier B.V. All rights reserved.
On the accuracy potential of focused plenoptic camera range determination in long distance operation
NASA Astrophysics Data System (ADS)
Sardemann, Hannes; Maas, Hans-Gerd
2016-04-01
Plenoptic cameras have attracted increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted their development and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
Fencl, Pavel; Belohlavek, Otakar; Harustiak, Tomas; Zemanova, Milada
2016-11-01
The aim of the analysis was to assess the accuracy of various FDG-PET/CT parameters in staging lymph nodes after neoadjuvant chemotherapy. In this prospective study, 74 patients with adenocarcinoma of the esophageal-gastric junction were examined by FDG-PET/CT in the course of their neoadjuvant chemotherapy given before surgical treatment. Data from the final FDG-PET/CT examinations were compared with the histology from the surgical specimens (gold standard). The accuracy was calculated for four FDG-PET/CT parameters: (1) hypermetabolic nodes, (2) large nodes, (3) large-and-medium large nodes, and (4) hypermetabolic or large nodes. In 74 patients, a total of 1540 lymph nodes were obtained by surgery, and these were grouped into 287 regions according to topographic origin. Five hundred and two nodes were imaged by FDG-PET/CT and were grouped into these same regions for comparison. In the analysis, (1) hypermetabolic nodes, (2) large nodes, (3) large-and-medium large nodes, and (4) hypermetabolic or large nodes identified metastases in particular regions with sensitivities of 11.6%, 2.9%, 21.7%, and 13.0%, respectively; specificity was 98.6%, 94.5%, 74.8%, and 93.6%, respectively. The hypermetabolic-nodes parameter reached the best accuracy, 77.7%. Accuracy decreased to 62.0% when smaller (medium-large) nodes were also counted as indicating metastases. FDG-PET/CT showed low sensitivity and high specificity. The low sensitivity reflected the low detection rate (32.6%) of nodes imaged by FDG-PET/CT relative to nodes found at surgery, and the inability to detect micrometastases. Sensitivity increased when medium-large lymph nodes were also considered positive, but specificity and accuracy decreased.
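A small helper showing how per-region sensitivity, specificity, and accuracy figures of the kind quoted above are obtained from a 2x2 comparison against a surgical gold standard. The counts here are placeholders, not the study data.

```python
# Sensitivity, specificity, and accuracy from 2x2 counts (hypothetical region counts).
def diagnostic_summary(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_summary(tp=8, fp=3, fn=61, tn=215)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}")
```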
[Design and accuracy analysis of upper slicing system of MSCT].
Jiang, Rongjian
2013-05-01
The upper slicing system is one of the main components of the optical system in MSCT. This paper focuses on the design of the upper slicing system and its accuracy analysis in order to improve imaging accuracy. The errors in slice thickness and ray center caused by the bearings, screw, and control system were analyzed and tested. The measured accumulated error is less than 1 μm, and the measured absolute error is less than 10 μm. Improving the accuracy of the upper slicing system contributes to appropriate treatment methods and to the success rate of treatment.
Movement amplitude and tempo change in piano performance
NASA Astrophysics Data System (ADS)
Palmer, Caroline
2004-05-01
Music performance places stringent temporal and cognitive demands on individuals that should yield large speed/accuracy tradeoffs. Skilled piano performance, however, shows consistently high accuracy across a wide variety of rates. Movement amplitude may affect the speed/accuracy tradeoff, so that high accuracy can be obtained even at very fast tempi. How movement amplitude changes with rate (tempo), and its contribution to the speed/accuracy tradeoff, are investigated with motion capture. Cameras recorded pianists, with passive markers on their hands and fingers, as they performed on an electronic (MIDI) keyboard. Pianists performed short melodies at faster and faster tempi until they made errors (altering the speed/accuracy function). Variability of finger movements in the three motion planes indicated most change in the plane perpendicular to the keyboard across tempi. Surprisingly, peak amplitudes of motion before striking the keys increased as tempo increased. Increased movement amplitudes at faster rates may reduce or compensate for speed/accuracy tradeoffs. [Work supported by Canada Research Chairs program, NIMH R01 45764.]
Analysis of spatial distribution of land cover maps accuracy
NASA Astrophysics Data System (ADS)
Khatami, R.; Mountrakis, G.; Stehman, S. V.
2017-12-01
Land cover maps have become one of the most important products of remote sensing science. However, classification errors will exist in any classified map and affect the reliability of subsequent map usage. Moreover, classification accuracy often varies over different regions of a classified map. These variations of accuracy will affect the reliability of subsequent analyses of different regions based on the classified maps. The traditional approach of map accuracy assessment based on an error matrix does not capture the spatial variation in classification accuracy. Here, per-pixel accuracy prediction methods are proposed based on interpolating accuracy values from a test sample to produce wall-to-wall accuracy maps. Different accuracy prediction methods were developed based on four factors: predictive domain (spatial versus spectral), interpolation function (constant, linear, Gaussian, and logistic), incorporation of class information (interpolating each class separately versus grouping them together), and sample size. Incorporation of spectral domain as explanatory feature spaces of classification accuracy interpolation was done for the first time in this research. Performance of the prediction methods was evaluated using 26 test blocks, with 10 km × 10 km dimensions, dispersed throughout the United States. The performance of the predictions was evaluated using the area under the curve (AUC) of the receiver operating characteristic. Relative to existing accuracy prediction methods, our proposed methods resulted in improvements of AUC of 0.15 or greater. Evaluation of the four factors comprising the accuracy prediction methods demonstrated that: i) interpolations should be done separately for each class instead of grouping all classes together; ii) if an all-classes approach is used, the spectral domain will result in substantially greater AUC than the spatial domain; iii) for the smaller sample size and per-class predictions, the spectral and spatial domain yielded similar AUC; iv) for the larger sample size (i.e., very dense spatial sample) and per-class predictions, the spatial domain yielded larger AUC; v) increasing the sample size improved accuracy predictions with a greater benefit accruing to the spatial domain; and vi) the function used for interpolation had the smallest effect on AUC.
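A hedged sketch of one variant of the approach above: Gaussian-kernel interpolation, in the spatial domain, of per-pixel correctness (1 = classified correctly, 0 = not) from a reference sample, evaluated with ROC AUC. Coordinates and correctness labels are synthetic, and the kernel bandwidth is an assumed parameter.

```python
# Spatial interpolation of classification correctness, evaluated with ROC AUC (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
train_xy = rng.uniform(0, 10_000, size=(400, 2))             # sampled reference pixels (metres)
# Correctness is made spatially structured so the interpolation has something to recover.
p_correct = 1 / (1 + np.exp(-(train_xy[:, 0] - 5_000) / 1_500))
train_ok = rng.binomial(1, p_correct)

def predict_accuracy(query_xy, ref_xy, ref_ok, bandwidth=1_000.0):
    """Gaussian-weighted average of sampled correctness values (spatial domain)."""
    d2 = ((query_xy[:, None, :] - ref_xy[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2 / bandwidth**2)
    return (w * ref_ok).sum(axis=1) / w.sum(axis=1)

test_xy = rng.uniform(0, 10_000, size=(200, 2))
test_ok = rng.binomial(1, 1 / (1 + np.exp(-(test_xy[:, 0] - 5_000) / 1_500)))
pred = predict_accuracy(test_xy, train_xy, train_ok)
print("AUC of interpolated accuracy map:", round(roc_auc_score(test_ok, pred), 2))
```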
Image analysis and modeling in medical image computing. Recent developments and advances.
Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T
2012-01-01
Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body. Hence, model-based image computing methods are important tools to improve medical diagnostics and patient treatment in future.
Contrast-enhanced spectral mammography improves diagnostic accuracy in the symptomatic setting.
Tennant, S L; James, J J; Cornford, E J; Chen, Y; Burrell, H C; Hamilton, L J; Girio-Fragkoulakis, C
2016-11-01
To assess the diagnostic accuracy of contrast-enhanced spectral mammography (CESM), and gauge its "added value" in the symptomatic setting. A retrospective multi-reader review of 100 consecutive CESM examinations was performed. Anonymised low-energy (LE) images were reviewed and given a score for malignancy. At least 3 weeks later, the entire examination (LE and recombined images) was reviewed. Histopathology data were obtained for all cases. Differences in performance were assessed using receiver operator characteristic (ROC) analysis. Sensitivity, specificity, and lesion size (versus MRI or histopathology) differences were calculated. Seventy-three percent of cases were malignant at final histology, 27% were benign following standard triple assessment. ROC analysis showed improved overall performance of CESM over LE alone, with area under the curve of 0.93 versus 0.83 (p<0.025). CESM showed increased sensitivity (95% versus 84%, p<0.025) and specificity (81% versus 63%, p<0.025) compared to LE alone, with all five readers showing improved accuracy. Tumour size estimation at CESM was significantly more accurate than LE alone, the latter tending to undersize lesions. In 75% of cases, CESM was deemed a useful or significant aid to diagnosis. CESM provides immediately available, clinically useful information in the symptomatic clinic in patients with suspicious palpable abnormalities. Radiologist sensitivity, specificity, and size accuracy for breast cancer detection and staging are all improved using CESM as the primary mammographic investigation. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mannon, Timothy Patrick, Jr.
Improving well design has and always will be the primary goal in drilling operations in the oil and gas industry. Oil and gas plays are continuing to move into increasingly hostile drilling environments, including near and/or sub-salt proximities. The ability to reduce the risk and uncertainly involved in drilling operations in unconventional geologic settings starts with improving the techniques for mudweight window modeling. To address this issue, an analysis of wellbore stability and well design improvement has been conducted. This study will show a systematic approach to well design by focusing on best practices for mudweight window projection for a field in Mississippi Canyon, Gulf of Mexico. The field includes depleted reservoirs and is in close proximity of salt intrusions. Analysis of offset wells has been conducted in the interest of developing an accurate picture of the subsurface environment by making connections between depth, non-productive time (NPT) events, and mudweights used. Commonly practiced petrophysical methods of pore pressure, fracture pressure, and shear failure gradient prediction have been applied to key offset wells in order to enhance the well design for two proposed wells. For the first time in the literature, the accuracy of the commonly accepted, seismic interval velocity based and the relatively new, seismic frequency based methodologies for pore pressure prediction are qualitatively and quantitatively compared for accuracy. Accuracy standards will be based on the agreement of the seismic outputs to pressure data obtained while drilling and petrophysically based pore pressure outputs for each well. The results will show significantly higher accuracy for the seismic frequency based approach in wells that were in near/sub-salt environments and higher overall accuracy for all of the wells in the study as a whole.
ERIC Educational Resources Information Center
Spaniol, Julia; Madden, David J.; Voss, Andreas
2006-01-01
Two experiments investigated adult age differences in episodic and semantic long-term memory tasks, as a test of the hypothesis of specific age-related decline in context memory. Older adults were slower and exhibited lower episodic accuracy than younger adults. Fits of the diffusion model (R. Ratcliff, 1978) revealed age-related increases in…
ERIC Educational Resources Information Center
Grote, Irene; And Others
1996-01-01
Three preschoolers performed four sorts with stimulus cards--an untaught target sort and three directly taught alternating sorts considered to self-instruct the target performance. Accuracy increased first in the skill sorts and then in the untaught target sorts. All subjects generalized to new target sorts. Correct spontaneous self-instructions…
Neural Networks Based Approach to Enhance Space Hardware Reliability
NASA Technical Reports Server (NTRS)
Zebulum, Ricardo S.; Thakoor, Anilkumar; Lu, Thomas; Franco, Lauro; Lin, Tsung Han; McClure, S. S.
2011-01-01
This paper demonstrates the use of Neural Networks as a device modeling tool to increase the reliability analysis accuracy of circuits targeted for space applications. The paper tackles a number of case studies of relevance to the design of Flight hardware. The results show that the proposed technique generates more accurate models than the ones regularly used to model circuits.
Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy.
Hinchliffe, Sally R; Crowther, Michael J; Phillips, Robert S; Sutton, Alex J
2013-06-01
An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial evidence to underpin reliable clinical decision-making. Very few investigators consider any sample size calculations when designing a new diagnostic accuracy study. However, it is important to consider the number of subjects in a new study in order to achieve a precise measure of accuracy. Sutton et al. have suggested previously that when designing a new therapeutic trial, it could be more beneficial to consider the power of the updated meta-analysis including the new trial rather than of the new trial itself. The methodology involves simulating new studies for a range of sample sizes and estimating the power of the updated meta-analysis with each new study added. Plotting the power values against the range of sample sizes allows the clinician to make an informed decision about the sample size of a new trial. This paper extends this approach from the trial setting and applies it to diagnostic accuracy studies. Several meta-analytic models are considered including bivariate random effects meta-analysis that models the correlation between sensitivity and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
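A toy illustration of the simulation idea described above, not the authors' bivariate machinery: a new diagnostic study of a given size is repeatedly simulated and added to a simple fixed-effect meta-analysis of sensitivity on the logit scale, and the power of the updated pooled estimate to show sensitivity above 0.80 is estimated. The existing study counts and the assumed true sensitivity are invented.

```python
# Power of an updated meta-analysis of sensitivity, by simulation (invented inputs).
import numpy as np

rng = np.random.default_rng(5)
existing = [(45, 50), (80, 92), (60, 71)]          # (true positives, diseased subjects) per study
true_sens = 0.90                                    # assumed sensitivity of the test in the new study

def logit_pool(studies):
    tp = np.array([s[0] for s in studies], float) + 0.5   # continuity correction
    n = np.array([s[1] for s in studies], float) + 1.0
    logit = np.log(tp / (n - tp))
    w = 1.0 / (1.0 / tp + 1.0 / (n - tp))
    est = (w * logit).sum() / w.sum()
    return est, np.sqrt(1.0 / w.sum())

def power_of_update(new_n, n_sim=2_000, threshold=0.80):
    crit = np.log(threshold / (1 - threshold))
    hits = 0
    for _ in range(n_sim):
        new_tp = rng.binomial(new_n, true_sens)
        est, se = logit_pool(existing + [(new_tp, new_n)])
        hits += (est - 1.96 * se) > crit               # lower CI bound above the threshold
    return hits / n_sim

for n in (25, 50, 100, 200):
    print(f"new study n={n:3d}: power of updated meta-analysis ~ {power_of_update(n):.2f}")
```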
Distributed memory parallel Markov random fields using graph partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinemann, C.; Perciano, T.; Ushizima, D.
Markov random fields (MRF) based algorithms have attracted a large amount of interest in image analysis due to their ability to exploit contextual information about data. Image data generated by experimental facilities, though, continues to grow larger and more complex, making it more difficult to analyze in a reasonable amount of time. Applying image processing algorithms to large datasets requires alternative approaches to circumvent performance problems. Aiming to provide scientists with a new tool to recover valuable information from such datasets, we developed a general purpose distributed memory parallel MRF-based image analysis framework (MPI-PMRF). MPI-PMRF overcomes performance and memory limitations by distributing data and computations across processors. The proposed approach was successfully tested with synthetic and experimental datasets. Additionally, the performance of the MPI-PMRF framework is analyzed through a detailed scalability study. We show that a performance increase is obtained while maintaining an accuracy of the segmentation results higher than 98%. The contributions of this paper are: (a) development of a distributed memory MRF framework; (b) measurement of the performance increase of the proposed approach; (c) verification of segmentation accuracy in both synthetic and experimental, real-world datasets.
A Paramagnetic Molecular Voltmeter
Surek, Jack T.; Thomas, David D.
2008-01-01
We have developed a general electron paramagnetic resonance (EPR) method to measure electrostatic potential at spin labels on proteins to millivolt accuracy. Electrostatic potential is fundamental to energy-transducing proteins like myosin, because molecular energy storage and retrieval is primarily electrostatic. Quantitative analysis of protein electrostatics demands a site-specific spectroscopic method sensitive to millivolt changes. Previous electrostatic potential studies on macromolecules fell short in sensitivity, accuracy and/or specificity. Our approach uses fast-relaxing charged and neutral paramagnetic relaxation agents (PRAs) to increase nitroxide spin label relaxation rate solely through collisional spin exchange. These PRAs were calibrated in experiments on small nitroxides of known structure and charge to account for differences in their relaxation efficiency. Nitroxide longitudinal (R1) and transverse (R2) relaxation rates were separated by applying lineshape analysis to progressive saturation spectra. The ratio of measured R1 increases for each pair of charged and neutral PRAs measures the shift in local PRA concentration due to electrostatic potential. Voltage at the spin label is then calculated using the Boltzmann equation. Measured voltages for two small charged nitroxides agree with Debye-Hückel calculations. Voltage for spin-labeled myosin fragment S1 also agrees with calculation based on the pK shift of the reacted cysteine. PMID:17964835
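A minimal worked example of the final step described above: converting the calibrated ratio of R1 increases produced by a charged versus a neutral relaxation agent into a local electrostatic potential with the Boltzmann relation. The R1 ratio and the agent charge below are illustrative values, not measurements from the study.

```python
# Boltzmann-relation estimate of local electrostatic potential from an R1-increase ratio.
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
e   = 1.602176634e-19   # elementary charge, C
T   = 298.0             # temperature, K
z   = -1                # charge of the charged relaxation agent (assumed here)

# Ratio of the spin-label R1 increase caused by the charged agent to that caused by the
# calibrated neutral agent; under the collisional model this equals exp(-z*e*V / (k_B*T)).
ratio = 1.45

V = -(k_B * T) / (z * e) * math.log(ratio)
print(f"estimated local potential: {V * 1e3:.1f} mV")
```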
Investigations of fluid-strain interaction using Plate Boundary Observatory borehole data
NASA Astrophysics Data System (ADS)
Boyd, Jeffrey Michael
Software has a great impact on the energy efficiency of any computing system--it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores ways in which software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how detecting the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the Log-likelihood Ratio Test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We found that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition. We discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise." We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform. We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is proportional to the inverse of grouping size, so it decreases as grouping size goes up. For features that depend on the grouping size, such as FFT, energy increases with the logarithm of grouping size, so energy consumption increases slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption and that the energy consumed for the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
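A rough sketch of transition detection with a log-likelihood ratio test: compare the likelihood of a window of feature samples under a single Gaussian versus two Gaussians split at the window midpoint. The window size, feature values, and activity statistics below are placeholders, not the thesis' tuned parameters.

```python
# Log-likelihood ratio test for an activity transition within a window (synthetic features).
import numpy as np

def gauss_loglik(x):
    mu, sigma = x.mean(), x.std() + 1e-6
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2))

def llr_transition(window):
    half = len(window) // 2
    # Two-segment model minus single-segment model; large values suggest a transition.
    return gauss_loglik(window[:half]) + gauss_loglik(window[half:]) - gauss_loglik(window)

rng = np.random.default_rng(6)
walking = rng.normal(1.0, 0.3, size=128)    # feature stream while walking (synthetic)
sitting = rng.normal(0.1, 0.05, size=128)   # feature stream while sitting (synthetic)

print("no transition  LLR:", round(llr_transition(walking[:128]), 1))
print("with transition LLR:", round(llr_transition(np.concatenate([walking[:64], sitting[:64]])), 1))
```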
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
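A condensed sketch of the resampling-simulation idea above: start from a census of impact occurrences along a trail, resample it at increasing point intervals, and compare the frequency-of-occurrence estimate with the census value. The census is synthetic, not the Great Smoky Mountains data.

```python
# Resampling a synthetic trail-impact census at increasing sampling intervals.
import numpy as np

rng = np.random.default_rng(7)
trail_m = 5_000
census = np.zeros(trail_m, dtype=int)                       # 1 m census of one impact type
census[rng.choice(trail_m, size=400, replace=False)] = 1    # metres where the impact occurs
true_freq = census.mean()

for interval in (20, 50, 100, 200, 500):
    starts = rng.integers(0, interval, size=200)            # random start point per simulation
    est = [census[s::interval].mean() for s in starts]
    print(f"interval {interval:4d} m: mean estimate {np.mean(est):.3f} "
          f"(census {true_freq:.3f}), sd {np.std(est):.3f}")
```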
Vos, J J; Kalmar, A F; Struys, M M R F; Porte, R J; Wietasch, J K G; Scheeren, T W L; Hendriks, H G D
2012-10-01
The Masimo Radical 7 (Masimo Corp., Irvine, CA, USA) pulse co-oximeter(®) calculates haemoglobin concentration (SpHb) non-invasively using transcutaneous spectrophotometry. We compared SpHb with invasive satellite-lab haemoglobin monitoring (Hb(satlab)) during major hepatic resections both under steady-state conditions and in a dynamic phase with fluid administration of crystalloid and colloid solutions. Thirty patients undergoing major hepatic resection were included and randomized to receive a fluid bolus of 15 ml kg(-1) colloid (n=15) or crystalloid (n=15) solution over 30 min. SpHb was continuously measured on the index finger, and venous blood samples were analysed in both the steady-state phase (from induction until completion of parenchymal transection) and the dynamic phase (during fluid bolus). Correlation was significant between SpHb and Hb(satlab) (R(2)=0.50, n=543). The modified Bland-Altman analysis for repeated measurements showed a bias (precision) of -0.27 (1.06) and -0.02 (1.07) g dl(-1) for the steady-state and dynamic phases, respectively. SpHb accuracy increased when Hb(satlab) was <10 g dl(-1), with a bias (precision) of 0.41 (0.47) vs -0.26 (1.12) g dl(-1) for values >10 g dl(-1), but accuracy decreased after colloid administration (R(2)=0.25). SpHb correlated moderately with Hb(satlab) with a slight underestimation in both phases in patients undergoing major hepatic resection. Accuracy increased for lower Hb(satlab) values but decreased in the presence of colloid solution. Further improvements are necessary to improve device accuracy under these conditions, so that SpHb might become a sensitive screening device for clinically significant anaemia.
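A sketch of the Bland-Altman summary used above: bias is the mean of the paired differences (SpHb minus laboratory haemoglobin) and "precision" the standard deviation of those differences. This simple version omits the repeated-measurements adjustment used in the study, and the values below are illustrative rather than study data.

```python
# Bland-Altman bias, precision, and limits of agreement (illustrative values, g/dl).
import numpy as np

sphb   = np.array([9.8, 10.4, 11.2, 8.9, 12.1, 10.0])   # non-invasive readings
hb_lab = np.array([10.1, 10.3, 11.6, 9.0, 12.6, 10.4])  # satellite-lab values

diff = sphb - hb_lab
bias, precision = diff.mean(), diff.std(ddof=1)
low, high = bias - 1.96 * precision, bias + 1.96 * precision
print(f"bias {bias:.2f} g/dl, precision {precision:.2f} g/dl, "
      f"95% limits of agreement {low:.2f} to {high:.2f} g/dl")
```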
Kelly, Brendan S; Rainford, Louise A; Darcy, Sarah P; Kavanagh, Eoin C; Toomey, Rachel J
2016-07-01
Purpose To investigate the development of chest radiograph interpretation skill through medical training by measuring both diagnostic accuracy and eye movements during visual search. Materials and Methods An institutional exemption from full ethical review was granted for the study. Five consultant radiologists were deemed the reference expert group, and four radiology registrars, five senior house officers (SHOs), and six interns formed four clinician groups. Participants were shown 30 chest radiographs, 14 of which had a pneumothorax, and were asked to give their level of confidence as to whether a pneumothorax was present. Receiver operating characteristic (ROC) curve analysis was carried out on diagnostic decisions. Eye movements were recorded with a Tobii TX300 (Tobii Technology, Stockholm, Sweden) eye tracker. Four eye-tracking metrics were analyzed. Variables were compared to identify any differences between groups. All data were compared by using the Friedman nonparametric method. Results The average area under the ROC curve for the groups increased with experience (0.947 for consultants, 0.792 for registrars, 0.693 for SHOs, and 0.659 for interns; P = .009). A significant difference in diagnostic accuracy was found between consultants and registrars (P = .046). All four eye-tracking metrics decreased with experience, and there were significant differences between registrars and SHOs. Total reading time decreased with experience; it was significantly lower for registrars compared with SHOs (P = .046) and for SHOs compared with interns (P = .025). Conclusion Chest radiograph interpretation skill increased with experience, both in terms of diagnostic accuracy and visual search. The observed level of experience at which there was a significant difference was higher for diagnostic accuracy than for eye-tracking metrics. (©) RSNA, 2016 Online supplemental material is available for this article.
NASA Astrophysics Data System (ADS)
Chen, Daqiang; Shen, Xiahong; Tong, Bing; Zhu, Xiaoxiao; Feng, Tao
With increasing competition in the logistics industry and growing pressure to lower logistics costs, the construction of a logistics information matching platform for highway transportation plays an important role, and the accuracy of the platform design is key to whether its operation succeeds. Based on survey results on the information access needs of logistics service providers, customers and regulatory authorities, and on an in-depth analysis of information demand for a logistics information matching platform for highway transportation in Zhejiang province, a survey-based analysis of the framework for such a platform is provided.
Neural networks for structural design - An integrated system implementation
NASA Technical Reports Server (NTRS)
Berke, Laszlo; Hafez, Wassim; Pao, Yoh-Han
1992-01-01
The development of powerful automated procedures to aid the creative designer is becoming increasingly critical for complex design tasks. In the work described here Artificial Neural Nets are applied to acquire structural analysis and optimization domain expertise. Based on initial instructions from the user an automated procedure generates random instances of structural analysis and/or optimization 'experiences' that cover a desired domain. It extracts training patterns from the created instances, constructs and trains an appropriate network architecture and checks the accuracy of net predictions. The final product is a trained neural net that can estimate analysis and/or optimization results instantaneously.
The detection methods of dynamic objects
NASA Astrophysics Data System (ADS)
Knyazev, N. L.; Denisova, L. A.
2018-01-01
The article deals with the application of cluster analysis methods to the task of aircraft detection, based on partitioning a selection of navigation parameters into groups (clusters). A modified cluster analysis method is suggested for the search and detection of objects, followed by iterative merging of clusters and a subsequent count of their number to increase the accuracy of aircraft detection. The operation of the method and the features of its implementation are considered. In conclusion, the efficiency of the proposed cluster analysis method for finding targets is demonstrated.
Capacitance changes in frog skin caused by theophylline and antidiuretic hormone.
Cuthbert, A W; Painter, E
1969-09-01
1. Impedance loci for frog skins have been calculated by computer analysis from voltage transients developed across the tissues.2. Attention has been paid to simultaneous changes in conductance and capacitance of skins treated either with antidiuretic hormone (ADH) or with theophylline. These drugs always caused an increase in conductance and usually the skin capacitance also increased. However, changes in conductance were not correlated with capacitance changes.3. Changes in capacitance caused by the drugs may represent pore formation in the barrier to water flow, since both drugs increase hydro-osmotic flow in epithelia. If this interpretation is correct, then 0.14% of the membrane area forms water-permeable pores in response to a maximal dose of ADH. This value is somewhat less than the value obtained previously (0.3%) by graphical analysis.4. A theoretical account is given of the relative accuracy of the computer method and the graphical method for voltage transient analysis.
O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor
2012-08-01
Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.
NASA Astrophysics Data System (ADS)
Wang, Liping; Jiang, Yao; Li, Tiemin
2014-09-01
Parallel kinematic machines have drawn considerable attention and have been widely used in some special fields. However, high precision is still one of the challenges when they are used for advanced machine tools. One of the main reasons is that the kinematic chains of parallel kinematic machines are composed of elongated links that can easily suffer deformations, especially at high speeds and under heavy loads. A 3-RRR parallel kinematic machine is taken as a study object for investigating its accuracy with the consideration of the deformations of its links during the motion process. Based on the dynamic model constructed by the Newton-Euler method, all the inertia loads and constraint forces of the links are computed and their deformations are derived. Then the kinematic errors of the machine are derived with the consideration of the deformations of the links. Through further derivation, the accuracy of the machine is given in a simple explicit expression, which will be helpful to increase the calculating speed. The accuracy of this machine when following a selected circle path is simulated. The influences of magnitude of the maximum acceleration and external loads on the running accuracy of the machine are investigated. The results show that the external loads will deteriorate the accuracy of the machine tremendously when their direction coincides with the direction of the worst stiffness of the machine. The proposed method provides a solution for predicting the running accuracy of the parallel kinematic machines and can also be used in their design optimization as well as selection of suitable running parameters.
Evaluation of scanning 2D barcoded vaccines to improve data accuracy of vaccines administered.
Daily, Ashley; Kennedy, Erin D; Fierro, Leslie A; Reed, Jenica Huddleston; Greene, Michael; Williams, Warren W; Evanson, Heather V; Cox, Regina; Koeppl, Patrick; Gerlach, Ken
2016-11-11
Accurately recording vaccine lot number, expiration date, and product identifiers in patient records is an important step in improving supply chain management and patient safety in the event of a recall. These data are being encoded in two-dimensional (2D) barcodes on most vaccine vials and syringes. Using electronic vaccine administration records, we evaluated the accuracy of lot number and expiration date entered using 2D barcode scanning compared to traditional manual or drop-down list entry methods. We analyzed 128,573 electronic records of vaccines administered at 32 facilities. We compared the accuracy of records entered using 2D barcode scanning with that of records entered using traditional methods, using chi-square tests and multilevel logistic regression. When 2D barcodes were scanned, lot number data accuracy was 1.8 percentage points higher (94.3-96.1%, P<0.001) and expiration date data accuracy was 11 percentage points higher (84.8-95.8%, P<0.001) compared with traditional methods. In multivariate analysis, lot number was more likely to be accurate (aOR=1.75; 99% CI, 1.57-1.96), as was expiration date (aOR=2.39; 99% CI, 2.12-2.68). When controlling for scanning and other factors, manufacturer, month of administration, and vaccine type were associated with variation in accuracy for both lot number and expiration date. Two-dimensional barcode scanning shows promise for improving data accuracy of vaccine lot number and expiration date records. Adapting systems to further integrate with 2D barcoding could help increase adoption of 2D barcode scanning technology. Published by Elsevier Ltd.
Forecast models for suicide: Time-series analysis with data from Italy.
Preti, Antonio; Lentini, Gianluca
2016-01-01
The prediction of suicidal behavior is a complex task. To fine-tune targeted preventative interventions, predictive analytics (i.e. forecasting future risk of suicide) is more important than exploratory data analysis (pattern recognition, e.g. detection of seasonality in suicide time series). This study sets out to investigate the accuracy of forecasting models of suicide for men and women. A total of 101,499 male suicides and 39,681 female suicides, which occurred in Italy from 1969 to 2003, were investigated. In order to apply the forecasting model and test its accuracy, the time series were split into a training set (1969 to 1996; 336 months) and a test set (1997 to 2003; 84 months). The main outcome was the accuracy of forecasting models of the monthly number of suicides. The following measures of accuracy were used: mean absolute error, root mean squared error, mean absolute percentage error, and mean absolute scaled error. In both male and female suicides a change in the trend pattern was observed, with an increase from 1969 onwards to a maximum around 1990 and a decrease thereafter. The variances attributable to the seasonal and trend components were, respectively, 24% and 64% in male suicides, and 28% and 41% in female ones. Both annual and seasonal historical trends of monthly data contributed to forecasting future trends of suicide with a margin of error around 10%. The finding is clearer in male than in female time series of suicide. The main conclusion of the study is that models taking seasonality into account seem able to capture deviations from the mean when they occur as a zenith, but fail to reproduce them when they occur as a nadir. Preventative efforts should concentrate on the factors that influence the occurrence of increases above the main trend in both seasonal and cyclic patterns of suicides.
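The four accuracy measures are standard and can be computed as below; the monthly counts and the seasonal-naive forecast in this sketch are simulated placeholders, not the study's series or models.

```python
import numpy as np

def forecast_accuracy(y_true, y_pred, y_train, m=12):
    """MAE, RMSE, MAPE and MASE for a monthly series (seasonal period m).
    MASE scales the test-set error by the in-sample seasonal-naive error."""
    y_true = np.asarray(y_true, float)
    e = y_true - np.asarray(y_pred, float)
    mae = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e ** 2))
    mape = np.mean(np.abs(e / y_true)) * 100.0
    y_train = np.asarray(y_train, float)
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))  # seasonal-naive benchmark error
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "MASE": mae / scale}

# Simulated monthly counts: 336 training months, 84 test months
rng = np.random.default_rng(0)
train = 300 + 30 * np.sin(np.arange(336) * 2 * np.pi / 12) + rng.normal(0, 10, 336)
test = 280 + 30 * np.sin(np.arange(84) * 2 * np.pi / 12) + rng.normal(0, 10, 84)
naive = np.tile(train[-12:], 7)   # repeat the last observed seasonal cycle
print(forecast_accuracy(test, naive, train))
```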
NASA Technical Reports Server (NTRS)
Evans, F. A.
1978-01-01
Space shuttle orbiter/IUS alignment transfer was evaluated. Although the orbiter alignment accuracy was originally believed to be the major contributor to the overall alignment transfer error, it was shown that orbiter alignment accuracy is not a factor affecting IUS alignment accuracy if certain procedures are followed. Results of the alignment transfer accuracy analysis are reported.
Abstract for poster presentation:
Site-specific accuracy assessments evaluate fine-scale accuracy of land-use/land-cover (LULC) datasets but provide little insight into the accuracy of area estimates of LULC classes derived from sampling units of varying size. Additiona...
A scoring algorithm for predicting the presence of adult asthma: a prospective derivation study.
Tomita, Katsuyuki; Sano, Hiroyuki; Chiba, Yasutaka; Sato, Ryuji; Sano, Akiko; Nishiyama, Osamu; Iwanaga, Takashi; Higashimoto, Yuji; Haraguchi, Ryuta; Tohda, Yuji
2013-03-01
To predict the presence of asthma in adult patients with respiratory symptoms, we developed a scoring algorithm using clinical parameters. We prospectively analysed 566 adult outpatients who visited Kinki University Hospital for the first time with complaints of nonspecific respiratory symptoms. Asthma was comprehensively diagnosed by specialists using symptoms, signs, and objective tools including bronchodilator reversibility and/or the assessment of bronchial hyperresponsiveness (BHR). Multiple logistic regression analysis was performed to categorise patients and determine the accuracy of diagnosing asthma. A scoring algorithm using the symptom-sign score was developed, based on diurnal variation of symptoms (1 point), recurrent episodes (2 points), medical history of allergic diseases (1 point), and wheeze sound (2 points). A score of >3 had 35% sensitivity and 97% specificity for discriminating between patients with and without asthma and assigned a high probability of having asthma (accuracy 90%). A score of 1 or 2 points assigned intermediate probability (accuracy 68%). After adding data on a forced expiratory volume in 1 second/forced vital capacity (FEV1/FVC) ratio <0.7, the post-test probability of having asthma increased to 93%. A score of 0 points assigned low probability (accuracy 31%). After adding data on positive bronchodilator reversibility, the post-test probability of having asthma increased to 88%. This pragmatic diagnostic algorithm is useful for predicting the presence of adult asthma and for determining the appropriate time for consultation with a pulmonologist.
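The symptom-sign score itself can be written out directly; in the sketch below, grouping a score of exactly 3 points with the high-probability band is an assumption, since the abstract's bands (>3, 1-2, 0) do not state where 3 falls.

```python
def symptom_sign_score(diurnal_variation, recurrent_episodes, allergic_history, wheeze):
    """Symptom-sign score from the abstract: diurnal variation (1 pt), recurrent
    episodes (2 pts), history of allergic diseases (1 pt), wheeze sound (2 pts)."""
    score = 1 * diurnal_variation + 2 * recurrent_episodes + 1 * allergic_history + 2 * wheeze
    if score >= 3:      # assumed: 3 grouped with the abstract's '>3' high-probability band
        band = "high probability"
    elif score >= 1:    # 1-2 points
        band = "intermediate probability"
    else:               # 0 points
        band = "low probability"
    return score, band

print(symptom_sign_score(diurnal_variation=True, recurrent_episodes=True,
                         allergic_history=False, wheeze=True))  # (5, 'high probability')
```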
A kinematic analysis of visually-guided movement in Williams syndrome.
Hocking, Darren R; Rinehart, Nicole J; McGinley, Jennifer L; Moss, Simon A; Bradshaw, John L
2011-02-15
Previous studies have reported that people with the neurodevelopmental disorder Williams syndrome (WS) exhibit difficulties with visuomotor control. In the current study, we examined the extent to which visuomotor deficits were associated with movement planning or feedback-based on-line control. We used a variant of Fitts' reciprocal aiming task on a computerized touchscreen in adults with WS, IQ-matched individuals with Down syndrome (DS), and typically developing controls. By manipulating task difficulty both as a function of target size and amplitude, we were able to vary the requirements for accuracy to examine processes associated with dorsal visual stream and cerebellar functioning. Although a greater increase in movement time as a function of task difficulty was observed in both clinical groups (WS and DS), a greater magnitude in the late kinematic components of movement (specifically, time after peak velocity) was revealed in the WS group under increased demands for accuracy. In contrast, the DS group showed a greater speed-accuracy trade-off with significantly reduced and more variable endpoint accuracy, which may be associated with cerebellar deficits. In addition, the WS group spent more time stationary in the target when task-related features reflected a higher level of difficulty, suggestive of specific deficits in movement planning. Our results indicate that the visuomotor coordination deficits in WS may reflect known impairments of the dorsal stream, but may also indicate a role for the cerebellum in dynamic feed-forward motor control. Copyright © 2010 Elsevier B.V. All rights reserved.
Howie, Bryan N.; Donnelly, Peter; Marchini, Jonathan
2009-01-01
Genotype imputation methods are now being widely used in the analysis of genome-wide association studies. Most imputation analyses to date have used the HapMap as a reference dataset, but new reference panels (such as controls genotyped on multiple SNP chips and densely typed samples from the 1,000 Genomes Project) will soon allow a broader range of SNPs to be imputed with higher accuracy, thereby increasing power. We describe a genotype imputation method (IMPUTE version 2) that is designed to address the challenges presented by these new datasets. The main innovation of our approach is a flexible modelling framework that increases accuracy and combines information across multiple reference panels while remaining computationally feasible. We find that IMPUTE v2 attains higher accuracy than other methods when the HapMap provides the sole reference panel, but that the size of the panel constrains the improvements that can be made. We also find that imputation accuracy can be greatly enhanced by expanding the reference panel to contain thousands of chromosomes and that IMPUTE v2 outperforms other methods in this setting at both rare and common SNPs, with overall error rates that are 15%–20% lower than those of the closest competing method. One particularly challenging aspect of next-generation association studies is to integrate information across multiple reference panels genotyped on different sets of SNPs; we show that our approach to this problem has practical advantages over other suggested solutions. PMID:19543373
The urine dipstick test useful to rule out infections. A meta-analysis of the accuracy
Devillé, Walter LJM; Yzermans, Joris C; van Duijn, Nico P; Bezemer, P Dick; van der Windt, Daniëlle AWM; Bouter, Lex M
2004-01-01
Background: Many studies have evaluated the accuracy of dipstick tests as rapid detectors of bacteriuria and urinary tract infections (UTI). The lack of an adequate explanation for the heterogeneity of dipstick accuracy stimulates an ongoing debate. The objective of the present meta-analysis was to summarise the available evidence on the diagnostic accuracy of the urine dipstick test, taking into account various pre-defined potential sources of heterogeneity. Methods: Literature from 1990 through 1999 was searched in Medline and Embase, and by reference tracking. To be selected, publications had to be concerned with the diagnosis of bacteriuria or urinary tract infections, investigate the use of dipstick tests for nitrites and/or leukocyte esterase, and present empirical data. A checklist was used to assess methodological quality. Results: 70 publications were included. Accuracy of nitrites was high in pregnant women (Diagnostic Odds Ratio = 165) and elderly people (DOR = 108). Positive predictive values were ≥80% in elderly and in family medicine. Accuracy of leukocyte-esterase was high in studies in urology patients (DOR = 276). Sensitivities were highest in family medicine (86%). Negative predictive values were high for both tests in all patient groups and settings, except in family medicine. The combination of both test results showed an important increase in sensitivity. Accuracy was high in studies in urology patients (DOR = 52), in children (DOR = 46), and if clinical information was present (DOR = 28). Sensitivity was highest in studies carried out in family medicine (90%). Predictive values of combinations of positive test results were low in all other situations. Conclusions: Overall, this review demonstrates that the urine dipstick test alone seems to be useful in all populations to exclude the presence of infection if the results of both nitrites and leukocyte-esterase are negative. Sensitivities of the combination of both tests vary between 68 and 88% in different patient groups, but positive test results have to be confirmed. Although the combination of positive test results is very sensitive in family practice, the usefulness of the dipstick test alone to rule in infection remains doubtful, even with high pre-test probabilities. PMID:15175113
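For reference, the diagnostic odds ratio and related indices follow directly from a 2x2 table of test result versus confirmed infection; the counts below are hypothetical and not drawn from any study in the review.

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity, predictive values and diagnostic odds ratio (DOR)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    dor = (tp * tn) / (fp * fn)   # equivalently (sens/(1-sens)) / ((1-spec)/spec)
    return {"sensitivity": sens, "specificity": spec, "PPV": ppv, "NPV": npv, "DOR": dor}

# Hypothetical counts: dipstick positive/negative vs. culture-confirmed infection
print(diagnostic_indices(tp=80, fp=15, fn=20, tn=185))
```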
Childs, Paul; Wong, Allan C L; Fu, H Y; Liao, Yanbiao; Tam, Hwayaw; Lu, Chao; Wai, P K A
2010-12-20
We measured the hydrostatic pressure dependence of the birefringence and birefringent dispersion of a Sagnac interferometric sensor incorporating a length of highly birefringent photonic crystal fiber using Fourier analysis. Sensitivity of both the phase and chirp spectra to hydrostatic pressure is demonstrated. Using this analysis, phase-based measurements showed good linearity with an effective sensitivity of 9.45 nm/MPa and an accuracy of ±7.8 kPa using wavelength-encoded data, and an effective sensitivity of -55.7 cm⁻¹/MPa and an accuracy of ±4.4 kPa using wavenumber-encoded data. Chirp-based measurements, though nonlinear in response, showed an improvement in accuracy at certain pressure ranges, with an accuracy of ±5.5 kPa for the full range of measured pressures using wavelength-encoded data and dropping to within ±2.5 kPa in the range of 0.17 to 0.4 MPa using wavenumber-encoded data. The improvements in accuracy demonstrated the usefulness of implementing chirp-based analysis for sensing purposes.
Small rural hospitals: an example of market segmentation analysis.
Mainous, A G; Shelby, R L
1991-01-01
In recent years, market segmentation analysis has shown increased popularity among health care marketers, although marketers tend to focus upon hospitals as sellers. The present analysis suggests that there is merit to viewing hospitals as a market of consumers. Employing a random sample of 741 small rural hospitals, the present investigation sought to determine, through the use of segmentation analysis, the variables associated with hospital success (occupancy). The results of a discriminant analysis yielded a model which classifies hospitals with a high degree of predictive accuracy. Successful hospitals have more beds and employees, and are generally larger and have more resources. However, there was no significant relationship between organizational success and number of services offered by the institution.
NASA Astrophysics Data System (ADS)
Xu, Jing; Wang, Yu-Tian; Liu, Xiao-Fei
2015-04-01
Edible blend oil is a mixture of vegetable oils. A qualified blend oil can meet the daily human need for the two essential fatty acids and thus provide balanced nutrition. Each vegetable oil has a different composition, so the vegetable oil contents in an edible blend oil determine its nutritional components. A high-precision quantitative analysis method to detect the vegetable oil contents in blend oil is therefore necessary to ensure balanced nutrition. Three-dimensional fluorescence spectroscopy offers high selectivity, high sensitivity, and high efficiency. Efficient extraction and full use of the information in three-dimensional fluorescence spectra improve the accuracy of the measurement. A novel quantitative analysis method based on quasi-Monte Carlo integration is proposed to improve measurement sensitivity and reduce random error. The partial least squares method is used to solve the nonlinear equations and avoid the effect of multicollinearity. The recovery rates of a blend oil mixed from peanut, soybean and sunflower oils are calculated to verify the accuracy of the method; they are higher than those obtained with the linear method commonly used for component concentration measurement.
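A minimal sketch of the multivariate calibration step, assuming synthetic spectra and scikit-learn's PLSRegression; the paper's quasi-Monte Carlo integration of the three-dimensional spectra is not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical data: rows are blend-oil samples, columns are fluorescence intensities
# already integrated over excitation-emission regions (surrogate for the QMC step).
rng = np.random.default_rng(1)
fractions = rng.dirichlet(np.ones(3), size=40)     # peanut, soybean, sunflower contents
pure_spectra = rng.random((3, 120))                # surrogate single-oil spectra
X = fractions @ pure_spectra + rng.normal(0, 0.01, (40, 120))
Y = fractions

pls = PLSRegression(n_components=3).fit(X, Y)
pred = pls.predict(X)
recovery = pred.sum(axis=0) / Y.sum(axis=0)        # crude per-component recovery check
print(np.round(recovery, 3))
```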
2018-01-01
Background and Objective. Needle electromyography can be used to detect changes in the number and morphology of motor unit potentials in patients with axonal neuropathy. General mathematical methods of pattern recognition and signal analysis were applied to recognize neuropathic changes. This study validates the possibility of extending and refining turns-amplitude analysis using permutation entropy and signal energy. Methods. In this study, we examined needle electromyography in 40 neuropathic individuals and 40 controls. The number of turns, the amplitude between turns, signal energy, and permutation entropy were used as features for support vector machine classification. Results. The obtained results showed superior classification performance for the combination of all of the above-mentioned features compared with combinations of fewer features. Of the tested feature combinations, peak-ratio analysis had the lowest accuracy. Conclusion. Combining permutation entropy with signal energy, the number of turns, and the mean amplitude in SVM classification can be used to refine the diagnosis of polyneuropathies examined by needle electromyography. PMID:29606959
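A minimal sketch of the two added features, permutation entropy (ordinal-pattern entropy in the sense of Bandt and Pompe) and signal energy, applied to a synthetic signal; the order and delay settings are assumptions, not the study's parameters.

```python
import numpy as np
from itertools import permutations
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Shannon entropy of the distribution of ordinal patterns of length `order`."""
    x = np.asarray(x, float)
    n = len(x) - (order - 1) * delay
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(int(k) for k in np.argsort(window))] += 1
    p = np.array([c for c in counts.values() if c > 0], float) / n
    h = -np.sum(p * np.log2(p))
    return h / np.log2(factorial(order)) if normalize else h

def signal_energy(x):
    return float(np.sum(np.square(np.asarray(x, float))))

sig = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.default_rng(2).normal(size=1000)
print(permutation_entropy(sig), signal_energy(sig))
```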
Automation of fluorescent differential display with digital readout.
Meade, Jonathan D; Cho, Yong-Jig; Fisher, Jeffrey S; Walden, Jamie C; Guo, Zhen; Liang, Peng
2006-01-01
Since its invention in 1992, differential display (DD) has become the most commonly used technique for identifying differentially expressed genes because of its many advantages over competing technologies such as DNA microarray, serial analysis of gene expression (SAGE), and subtractive hybridization. Despite the great impact of the method on biomedical research, there has been a lack of automation of DD technology to increase its throughput and accuracy for systematic gene expression analysis. Most previous DD work has taken a "shot-gun" approach of identifying one gene at a time, with a limited number of polymerase chain reaction (PCR) reactions set up manually, giving DD a low-tech and low-throughput image. We have optimized the DD process with a new platform that incorporates fluorescent digital readout, automated liquid handling, and large-format gels capable of running entire 96-well plates. The resulting streamlined fluorescent DD (FDD) technology offers unprecedented accuracy, sensitivity, and throughput in comprehensive and quantitative analysis of gene expression. These major improvements will allow researchers to find differentially expressed genes of interest, both known and novel, quickly and easily.
Prostate lesion detection and localization based on locality alignment discriminant analysis
NASA Astrophysics Data System (ADS)
Lin, Mingquan; Chen, Weifu; Zhao, Mingbo; Gibson, Eli; Bastian-Jordan, Matthew; Cool, Derek W.; Kassam, Zahra; Chow, Tommy W. S.; Ward, Aaron; Chiu, Bernard
2017-03-01
Prostatic adenocarcinoma is one of the most commonly occurring cancers among men in the world, and it is also the most curable cancer when detected early. Multiparametric MRI (mpMRI) combines anatomic and functional prostate imaging techniques, which have been shown to produce high sensitivity and specificity in cancer localization; accurate localization is important in planning biopsies and focal therapies. However, in previous investigations, lesion localization was achieved mainly by manual segmentation, which is time-consuming and prone to observer variability. Here, we developed an algorithm based on the locality alignment discriminant analysis (LADA) technique, which can be considered a version of linear discriminant analysis (LDA) localized to patches in the feature space. The sensitivity, specificity and accuracy generated by the proposed algorithm in five prostates were 52.2%, 89.1% and 85.1%, respectively, compared to 31.3%, 85.3% and 80.9% generated by LDA. The delineation accuracy attainable by this tool has the potential to increase the cancer detection rate in biopsies and to minimize collateral damage to surrounding tissues in focal therapies.
Absolute shape measurements using high-resolution optoelectronic holography methods
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
2000-01-01
Characterization of surface shape and deformation is of primary importance in a number of testing and metrology applications related to the functionality, performance, and integrity of components. In this paper, a unique, compact, and versatile state-of-the-art fiber-optic-based optoelectronic holography (OEH) methodology is described. The description addresses the apparatus and analysis algorithms, specially developed to perform measurements of both absolute surface shape and deformation. The OEH can be arranged in multiple configurations, which include the three-camera, three-illumination, and in-plane speckle correlation setups. With the OEH apparatus and analysis algorithms, absolute shape measurements can be made, using the present setup, with a spatial resolution and accuracy of better than 30 and 10 micrometers, respectively, for volumes characterized by a 300-mm length. Optimizing the experimental setup and incorporating equipment with capabilities superior to those utilized in the present investigations, as it becomes available, can further increase the resolution and accuracy of the measurements. A particular feature of this methodology is its capability to export the measurement data directly into CAD environments for subsequent processing, analysis, and definition of CAD/CAE models.
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-03-04
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC operation yielded poor results.
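As an illustration of the weighted linear combination step under weight uncertainty, the sketch below draws weights from normal distributions and combines standardized criterion rasters; the grids, weight statistics and the use of independent normal draws are assumptions rather than the article's exact MCS setup.

```python
import numpy as np

def wlc_susceptibility(criteria, weight_mean, weight_sd, n_sim=1000, seed=0):
    """Weighted linear combination of standardized criterion rasters with weights
    sampled from normal distributions; returns the mean map and per-cell std."""
    rng = np.random.default_rng(seed)
    stack = np.stack(criteria)                    # (n_criteria, rows, cols)
    maps = []
    for _ in range(n_sim):
        w = np.clip(rng.normal(weight_mean, weight_sd), 0, None)
        w = w / w.sum()                           # re-normalize sampled weights
        maps.append(np.tensordot(w, stack, axes=1))
    maps = np.array(maps)
    return maps.mean(axis=0), maps.std(axis=0)

# Hypothetical 3-criterion example on a 50 x 50 grid, criteria scaled to [0, 1]
rng = np.random.default_rng(3)
criteria = [rng.random((50, 50)) for _ in range(3)]
mean_map, sd_map = wlc_susceptibility(criteria, np.array([0.5, 0.3, 0.2]),
                                      np.array([0.05, 0.05, 0.05]))
print(mean_map.shape, float(sd_map.mean()))
```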
NASA Astrophysics Data System (ADS)
Shokravi, H.; Bakhary, NH
2017-11-01
Subspace System Identification (SSI) is considered one of the most reliable tools for identification of system parameters. Performance of an SSI scheme is considerably affected by the structure of the associated identification algorithm. The weight matrix is a variable in SSI that is used to reduce the dimensionality of the state-space equation. Generally, one of the weight matrices of Principal Component (PC), Unweighted Principal Component (UPC) and Canonical Variate Analysis (CVA) is used in the structure of an SSI algorithm. An increasing number of studies in the field of structural health monitoring use SSI for damage identification. However, studies that evaluate the performance of the weight matrices, particularly with respect to accuracy, noise resistance, and time complexity, are very limited. In this study, the accuracy, noise-robustness, and time-efficiency of the weight matrices are compared using different qualitative and quantitative metrics. Three evaluation metrics (pole analysis, fit values, and elapsed time) are used in the assessment process. A numerical mass-spring-dashpot model and operational data are used in this study. It is observed that the principal components obtained using the PC algorithm are more robust against noise uncertainty and give more stable results for the pole distribution. Furthermore, higher estimation accuracy is achieved using the UPC algorithm. CVA had the worst performance for pole analysis and time-efficiency analysis. The superior performance of the UPC algorithm in elapsed time is attributed to its use of unit weight matrices. The results demonstrate that the dimensionality reduction in CVA and PC does not enhance time efficiency but yields improved modal identification for PC.
NASA Astrophysics Data System (ADS)
Kimuli, Daniel; Wang, Wei; Wang, Wei; Jiang, Hongzhe; Zhao, Xin; Chu, Xuan
2018-03-01
A short-wave infrared (SWIR) hyperspectral imaging system (1000-2500 nm) combined with chemometric data analysis was used to detect aflatoxin B1 (AFB1) on the surfaces of 600 kernels of four yellow maize varieties from different states of the USA (Georgia, Illinois, Indiana and Nebraska). For each variety, four AFB1 solutions (10, 20, 100 and 500 ppb) were artificially deposited on kernels and a control group was generated from kernels treated with methanol solution. Principal component analysis (PCA), partial least squares discriminant analysis (PLSDA) and factorial discriminant analysis (FDA) were applied to explore and classify maize kernels according to AFB1 contamination. PCA results revealed partial separation of control kernels from AFB1-contaminated kernels for each variety, while no pattern of separation was observed among pooled samples. A combination of standard normal variate and first derivative pre-treatments produced the best PLSDA classification model, with accuracy of 100% and 96% in calibration and validation, respectively, for the Illinois variety. The best AFB1 classification results came from FDA on raw spectra, with accuracy of 100% in calibration and validation for the Illinois and Nebraska varieties. However, for both PLSDA and FDA models, poor AFB1 classification results were obtained for pooled samples relative to individual varieties. SWIR spectra combined with chemometrics and spectral pre-treatments demonstrated the feasibility of detecting AFB1 coated on maize kernels of different varieties. The study further suggests that an increase of maize kernel constituents such as water, protein, starch and lipid in a pooled sample may influence the detection accuracy of AFB1 contamination.
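A minimal sketch of the standard normal variate and first-derivative pre-treatments mentioned above, assuming SciPy's Savitzky-Golay filter for the derivative and dummy spectra; the window and polynomial settings are placeholders.

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row) individually."""
    s = np.asarray(spectra, float)
    return (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)

def first_derivative(spectra, window=11, polyorder=2):
    """Savitzky-Golay first derivative along the wavelength axis."""
    return savgol_filter(spectra, window_length=window, polyorder=polyorder, deriv=1, axis=1)

# Dummy SWIR spectra: 10 kernels x 200 wavelengths
rng = np.random.default_rng(4)
raw = rng.random((10, 200)).cumsum(axis=1)   # smooth, monotonically increasing dummy spectra
pretreated = first_derivative(snv(raw))
print(pretreated.shape)
```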
NASA Astrophysics Data System (ADS)
Dondurur, Mehmet
The primary objective of this study was to determine the degree to which modern SAR systems can be used to obtain information about the Earth's vegetative resources. Information obtainable from microwave synthetic aperture radar (SAR) data was compared with that obtainable from LANDSAT-TM and SPOT data. Three hypotheses were tested: (a) Classification of land cover/use from SAR data can be accomplished on a pixel-by-pixel basis with the same overall accuracy as from LANDSAT-TM and SPOT data. (b) Classification accuracy for individual land cover/use classes will differ between sensors. (c) Combining information derived from optical and SAR data into an integrated monitoring system will improve overall and individual land cover/use class accuracies. The study was conducted with three data sets for the Sleeping Bear Dunes test site in the northwestern part of Michigan's lower peninsula, including an October 1982 LANDSAT-TM scene, a June 1989 SPOT scene and C-, L- and P-Band radar data from the Jet Propulsion Laboratory AIRSAR. Reference data were derived from the Michigan Resource Information System (MIRIS) and available color infrared aerial photos. Classification and rectification of the data sets were done using ERDAS Image Processing Programs. Classification algorithms included Maximum Likelihood, Mahalanobis Distance, Minimum Spectral Distance, ISODATA, Parallelepiped, and Sequential Cluster Analysis. Classified images were rectified as necessary so that all were at the same scale and oriented north-up. Results were analyzed with contingency tables, using percent correctly classified (PCC) and Cohen's Kappa (CK) as accuracy indices, with the CSLANT and ImagePro programs developed for this study. Accuracy analyses were based upon a 1.4 by 6.5 km area with its long axis east-west. Reference data for this subscene total 55,770 15 by 15 m pixels with sixteen cover types, including seven level III forest classes, three level III urban classes, two level II range classes, two water classes, one wetland class and one agriculture class. An initial analysis was made without correcting the 1978 MIRIS reference data to the different dates of the TM, SPOT and SAR data sets. In this analysis, the highest overall classification accuracy (PCC) was 87% with the TM data set, with both SPOT and C-Band SAR at 85%, a difference statistically significant at the 0.05 level. When the reference data were corrected for land cover change between 1978 and 1991, classification accuracy with the C-Band SAR data increased to 87%. Classification accuracy differed from sensor to sensor for individual land cover classes. Combining sensors into hypothetical multi-sensor systems resulted in higher accuracies than for any single sensor; combining LANDSAT-TM and C-Band SAR yielded an overall classification accuracy (PCC) of 92%. The results of this study indicate that C-Band SAR data provide an acceptable substitute for LANDSAT-TM or SPOT data when land cover information is desired for areas where cloud cover obscures the terrain. Even better results can be obtained by integrating TM and C-Band SAR data into a multi-sensor system.
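For reference, the two accuracy indices used here, percent correctly classified and Cohen's Kappa, can be computed from an error matrix as follows; the 3-class matrix shown is hypothetical.

```python
import numpy as np

def pcc_and_kappa(confusion):
    """Overall accuracy (PCC) and Cohen's Kappa from a square error matrix
    (rows = reference classes, columns = classified classes)."""
    m = np.asarray(confusion, float)
    n = m.sum()
    po = np.trace(m) / n                                  # observed agreement (PCC)
    pe = np.sum(m.sum(axis=0) * m.sum(axis=1)) / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)

cm = [[50, 3, 2],
      [4, 40, 6],
      [1, 5, 39]]
pcc, ck = pcc_and_kappa(cm)
print(f"PCC = {pcc:.1%}, Cohen's Kappa = {ck:.3f}")
```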
Finite element mesh refinement criteria for stress analysis
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.
1990-01-01
This paper discusses procedures for finite-element mesh selection and refinement. The objective is to improve accuracy. The procedures are based on (1) the minimization of the trace of the stiffness matrix (optimizing node location); (2) the use of h-version refinement (rezoning, element size reduction, and increasing the number of elements); and (3) the use of p-version refinement (increasing the order of polynomial approximation of the elements). A step-by-step procedure of mesh selection, improvement, and refinement is presented. The criteria for 'goodness' of a mesh are based on strain energy, displacement, and stress values at selected critical points of a structure. An analysis of an aircraft lug problem is presented as an example.
Richman, Susan D; Fairley, Jennifer; Butler, Rachel; Deans, Zandra C
2017-12-01
Evidence strongly indicates that extended RAS testing should be undertaken in mCRC patients prior to prescribing anti-EGFR therapies. With more laboratories implementing testing, the requirement for External Quality Assurance schemes increases, thus ensuring high standards of molecular analysis. Data were analysed from 15 United Kingdom National External Quality Assessment Service (UK NEQAS) for Molecular Genetics Colorectal cancer external quality assurance (EQA) schemes, delivered between 2009 and 2016. Laboratories were provided annually with nine colorectal tumour samples for genotyping. Information on methodology and extent of testing coverage was requested, and scores given for genotyping, interpretation and clerical accuracy. There has been a sixfold increase in laboratory participation (18 in 2009 to 108 in 2016). For RAS genotyping, fewer laboratories now use Roche cobas®, pyrosequencing and Sanger sequencing, with more moving to next generation sequencing (NGS). NGS is the most commonly employed technology for BRAF and PIK3CA mutation screening. KRAS genotyping errors were seen in ≤10% of laboratories until the 2014-2015 scheme, when there was an increase to 16.7%, corresponding to a large increase in scheme participants. NRAS genotyping errors peaked at 25.6% in the first 2015-2016 scheme but subsequently dropped to below 5%. Interpretation and clerical accuracy scores have been consistently good throughout. Within this EQA scheme, we have observed that the quality of molecular analysis for colorectal cancer has continued to improve, despite changes in the required targets, the volume of testing and the technologies employed. It is reassuring to know that laboratories clearly recognise the importance of participating in EQA schemes.
Le Roux, Ronan
2015-04-01
The paper deals with the introduction of nanotechnology in biochips. Based on interviews and theoretical reflections, it explores blind spots left by technology assessment and ethical investigations. These have focused on possible consequences of increased diffusability of a diagnostic device, neglecting both the context of research and increased accuracy, despite the latter being a more essential feature of nanobiochip projects. Also, rather than one of many parallel aspects (technical, legal and social) of innovation processes, ethics is considered here as a ubiquitous system of choices between sometimes antagonistic values. Thus, the paper investigates what is at stake when accuracy is balanced against other practical values in different contexts. A dramatic nanotechnological increase in the accuracy of biochips can raise ethical issues, since it is at odds with other values such as diffusability and reliability. But those issues will not be as revolutionary as is often claimed: neither in diagnostics, because accuracy of measurements is not accuracy of diagnostics; nor in research, because a boost in measurement accuracy is not sufficient to overcome significance-chasing malpractices. The conclusion extends to methodological recommendations.
Social Power Increases Interoceptive Accuracy
Moeini-Jazani, Mehrad; Knoeferle, Klemens; de Molière, Laura; Gatti, Elia; Warlop, Luk
2017-01-01
Building on recent psychological research showing that power increases self-focused attention, we propose that having power increases accuracy in perception of bodily signals, a phenomenon known as interoceptive accuracy. Consistent with our proposition, participants in a high-power experimental condition outperformed those in the control and low-power conditions in the Schandry heartbeat-detection task. We demonstrate that the effect of power on interoceptive accuracy is not explained by participants' physiological arousal, affective state, or general intention for accuracy. Rather, consistent with our reasoning that experiencing power shifts attentional resources inward, we show that the effect of power on interoceptive accuracy depends on individuals' chronic tendency to focus on their internal sensations. Moreover, we demonstrate that individuals' chronic sense of power also predicts interoceptive accuracy similarly to, and independently of, their situationally induced feeling of power. We therefore provide further support for the relation between power and enhanced perception of bodily signals. Our findings offer a novel perspective, a psychophysiological account, of how power might affect judgments and behavior. We highlight and discuss some of these intriguing possibilities for future research. PMID:28824501
Accuracy of remotely sensed data: Sampling and analysis procedures
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Oderwald, R. G.; Mead, R. A.
1982-01-01
A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given, together with a listing of the computer program written to implement these techniques. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is presented, as are the resulting matrices from the mapping effort of the San Juan National Forest. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is described, and a method for determining the reliability of change detection between two maps of the same area produced at different times is proposed.
Occupational exposure decisions: can limited data interpretation training help improve accuracy?
Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul
2009-06-01
Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants, in which they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants were then given exposure data interpretation or rule of thumb training, which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. The DIT was administered to each participant before and after the rule of thumb training. Results of each DIT and the qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DIT results and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT percent-correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The accuracy of quantitative desktop judgments increased from 43 to 63% correct after the rule of thumb training (P < 0.001). The rule of thumb training did not significantly affect accuracy for qualitative desktop judgments. The finding that even simple statistical rules of thumb significantly improve judgment accuracy suggests that hygienists should routinely use statistical tools when making exposure judgments from monitoring data.
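For context, a standard parametric estimate of the 95th percentile of a log-normal exposure distribution is exp(mean(ln x) + 1.645 * sd(ln x)); the sketch below uses hypothetical sampling results and is not the specific rule of thumb taught in the study.

```python
import numpy as np

def lognormal_x95(samples):
    """Parametric estimate of the 95th percentile of a log-normal distribution
    fitted to a small exposure data set."""
    logs = np.log(np.asarray(samples, float))
    return float(np.exp(logs.mean() + 1.645 * logs.std(ddof=1)))

# Hypothetical air-sampling results, in the same units as the exposure limit
samples = [0.12, 0.25, 0.08, 0.31, 0.19]
print(f"estimated 95th percentile = {lognormal_x95(samples):.2f}")
# A judgment would then compare this estimate against the occupational exposure limit.
```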
NASA Astrophysics Data System (ADS)
George, Rohini
Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of cancer deaths among both men and women. The five-year survival for lung cancer patients is approximately 15% (ACS Facts & Figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause a substantial dose to be delivered to normal tissues and increase normal tissue toxicity. To alleviate the above-mentioned effects of respiratory motion, several motion management techniques are available which can reduce the doses to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of patients who have lung cancer and are receiving radiation therapy. However, the accuracy of these motion management techniques is limited by respiration irregularity. The aim of this thesis was to study the improvement in the regularity of respiratory motion achieved by breathing coaching of lung cancer patients using audio instructions and audio-visual biofeedback. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory gated radiotherapy. It was also observed that duty cycles below 30% showed insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. In modeling the respiratory cycles, it was found that cosine and cosine^4 models had the best correlation with individual respiratory cycles. The overall respiratory motion probability distribution function could be approximated by a normal distribution function. A statistical analysis was also performed to investigate whether a patient's physical, tumor or general characteristics played a role in whether he/she responded positively to the coaching type, as signified by a reduction in the variability of respiratory motion. The analysis demonstrated that, although some characteristics such as disease type and dose per fraction were significant in the time-independent analysis, no significant time trends were observed for the inter-session or intra-session analysis. Based on patient feedback on the existing audio-visual biofeedback system used for the study and on research performed on other feedback systems, an improved audio-visual biofeedback system was designed. It is hoped that widespread clinical implementation of audio-visual biofeedback will improve the accuracy of lung cancer radiotherapy.
ERIC Educational Resources Information Center
Rhodes, M.G.; Kelley, C.M.
2005-01-01
The current study examined the neuropsychological correlates of memory accuracy in older and younger adults. Participants were tested in a memory monitoring paradigm developed by Koriat and Goldsmith (1996), which permits separate assessments of the accuracy of responses generated during retrieval and the accuracy of monitoring those responses.…
NASA Astrophysics Data System (ADS)
Zhang, Liang; Yang, Hongzhou; Gao, Yang; Yao, Yibin; Xu, Chaoqian
2018-06-01
To meet the increasing demands from real-time Precise Point Positioning (PPP) users, real-time satellite orbit and clock products are generated by different International GNSS Service (IGS) real-time analysis centers and can be publicly received through the Internet. Based on different data sources and processing strategies, the real-time products from different analysis centers therefore differ in availability and accuracy. The main objective of this paper is to evaluate the availability and accuracy of different real-time products and their effects on real-time PPP. A total of nine commonly used Real-Time Service (RTS) products, namely IGS01, IGS03, CLK01, CLK15, CLK22, CLK52, CLK70, CLK81 and CLK90, are evaluated in this paper. Because not all RTS products support multi-GNSS, only GPS products are analyzed here. Firstly, the availability of all RTS products is analyzed at two levels. The first level is epoch availability, indicating whether there is an outage for that epoch. The second level is satellite availability, which is the number of satellites available at each epoch. Then the accuracy of the different RTS products is investigated in terms of nominal accuracy and accuracy degradation over time. Results show that the Root-Mean-Square Error (RMSE) of the satellite orbits ranges from 3.8 cm to 7.5 cm for the different RTS products, while the mean Standard Deviation of Errors (STDE) of the satellite clocks ranges from 1.9 cm to 5.6 cm. The modified Signal In Space Range Error (SISRE) ranges from 1.3 cm to 5.5 cm across the RTS products. The accuracy degradation of the orbits shows a linear trend for all RTS products, and the satellite clock degradation depends on the satellite clock type. The Rb clocks on board GPS IIF satellites have the smallest degradation rate, of less than 3 cm over 10 min, while the Cs clocks on board GPS IIF have the largest degradation rate, of more than 10 cm over 10 min. Finally, real-time kinematic PPP is carried out to investigate the effects of the different real-time products. CLK90 has the best performance; the mean RMSEs over 26 globally distributed IGS stations in the three components are 3.2 cm, 6.6 cm and 8.5 cm. The second-best positioning results are obtained with the IGS03 products.
Reiman, Michael P; Thorborg, Kristian; Goode, Adam P; Cook, Chad E; Weir, Adam; Hölmich, Per
2017-09-01
Diagnosing femoroacetabular impingement/acetabular labral tear (FAI/ALT) and subsequently making a decision regarding surgery are based primarily on diagnostic imaging and intra-articular hip joint injection techniques of unknown accuracy. Summarize and evaluate the diagnostic accuracy and clinical utility of various imaging modalities and injection techniques relevant to hip FAI/ALT. Systematic review with meta-analysis. A computer-assisted literature search was conducted of MEDLINE, CINAHL, and EMBASE databases using keywords related to diagnostic accuracy of hip joint pathologic changes. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were used for the search and reporting phases of the study. Quality assessment of bias and applicability was conducted using the Quality of Diagnostic Accuracy Studies (QUADAS) tool. Random effects models were used to summarize sensitivities (SN), specificities (SP), likelihood ratios (+LR and -LR), diagnostic odds ratios (DOR), and respective confidence intervals (CI). The search strategy and assessment for risk of bias revealed 25 articles scoring above 10/14 on the items of the QUADAS. Four studies investigated FAI, and the data were not pooled. Twenty articles on ALT qualified for meta-analysis. Pretest probability of ALT in the studies in this review was 81% (72%-88%), while the pretest probability of FAI diagnosis was 74% (95% CI, 51%-91%). The meta-analysis showed that computed tomography arthrography (CTA) demonstrated the strongest overall diagnostic accuracy: pooled SN 0.91 (95% CI, 0.83-0.96); SP 0.89 (95% CI, 0.74-0.97); +LR 6.28 (95% CI, 2.78-14.21); -LR 0.11 (95% CI, 0.06-0.21); and DOR 64.38 (95% CI, 19.17-216.21). High pretest probability of disease was demonstrated. Positive imaging findings increased the probability that a labral tear existed by a minimal to small degree with the use of magnetic resonance imaging/magnetic resonance angiogram (MRI/MRA) and ultrasound (US) and by a moderate degree for CTA. Negative imaging findings decreased the probability that a labral tear existed by a minimal degree with the use of MRI and US, a small to moderate degree with MRA, and a moderate degree with CTA. Although findings of the included studies suggested potentially favorable use of these modalities for the diagnosis of ALT and FAI, our results suggest that these findings have limited generalizability and clinical utility given very high pretest prevalence, large confidence intervals, and selection criteria of the studies. Registration: PROSPERO Registration #CRD42015027745.
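The pre-test to post-test probability conversion described above follows Bayes' rule on the odds scale; a minimal sketch, using the pooled CTA likelihood ratios quoted in the abstract as example inputs:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert a pre-test probability into a post-test probability via odds and a
    likelihood ratio (LR+ for a positive finding, LR- for a negative one)."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

pre = 0.81                                   # pooled pre-test probability of a labral tear
print(post_test_probability(pre, 6.28))      # after a positive CTA finding (~0.96)
print(post_test_probability(pre, 0.11))      # after a negative CTA finding (~0.32)
```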
Fusion of pan-tropical biomass maps using weighted averaging and regional calibration data
NASA Astrophysics Data System (ADS)
Ge, Yong; Avitabile, Valerio; Heuvelink, Gerard B. M.; Wang, Jianghao; Herold, Martin
2014-09-01
Biomass is a key environmental variable that influences many biosphere-atmosphere interactions. Recently, a number of biomass maps at national, regional and global scales have been produced using different approaches with a variety of input data, such as from field observations, remotely sensed imagery and other spatial datasets. However, the accuracy of these maps varies regionally and is largely unknown. This research proposes a fusion method to increase the accuracy of regional biomass estimates by using higher-quality calibration data. In this fusion method, the biases in the source maps were first adjusted to correct for over- and underestimation by comparison with the calibration data. Next, the biomass maps were combined linearly using weights derived from the variance-covariance matrix associated with the accuracies of the source maps. Because each map may have different biases and accuracies for different land use types, the biases and fusion weights were computed for each of the main land cover types separately. The conceptual arguments are substantiated by a case study conducted in East Africa. Evaluation analysis shows that fusing multiple source biomass maps may produce a more accurate map than when only one biomass map or unweighted averaging is used.
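A simplified sketch of the fusion idea: bias-correct each co-registered map against calibration data, then average with inverse-variance weights. Ignoring the error covariances between maps (which the full method uses) and the synthetic maps themselves are assumptions of this sketch.

```python
import numpy as np

def fuse_maps(maps, biases, variances):
    """Fuse co-registered biomass maps: subtract each map's estimated bias, then
    average with inverse-variance weights (error covariances between maps ignored)."""
    corrected = np.stack([np.asarray(m, float) - b for m, b in zip(maps, biases)])
    w = 1.0 / np.asarray(variances, float)
    w = w / w.sum()
    return np.tensordot(w, corrected, axes=1)

# Hypothetical example: two 100 x 100 biomass maps (Mg/ha) for one land cover class
rng = np.random.default_rng(5)
truth = 150 + 30 * rng.random((100, 100))
map_a = truth + 12 + rng.normal(0, 25, truth.shape)   # biased but fairly precise
map_b = truth - 5 + rng.normal(0, 40, truth.shape)    # less biased, noisier
fused = fuse_maps([map_a, map_b], biases=[12, -5], variances=[25 ** 2, 40 ** 2])
print(float(np.abs(fused - truth).mean()))
```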
Subject-Adaptive Real-Time Sleep Stage Classification Based on Conditional Random Field
Luo, Gang; Min, Wanli
2007-01-01
Sleep staging is the pattern recognition task of classifying sleep recordings into sleep stages. This task is one of the most important steps in sleep analysis. It is crucial for the diagnosis and treatment of various sleep disorders, and also relates closely to brain-machine interfaces. We report an automatic, online sleep stager using the electroencephalogram (EEG) signal, based on a recently developed statistical pattern recognition method, the conditional random field (CRF), and novel potential functions that have explicit physical meanings. Using sleep recordings from human subjects, we show that the average classification accuracy of our sleep stager almost approaches the theoretical limit and is about 8% higher than that of existing systems. Moreover, for a new subject s_new with limited training data D_new, we perform subject adaptation to improve classification accuracy. Our idea is to use the knowledge learned from old subjects to obtain from D_new a regulated estimate of the CRF's parameters. Using sleep recordings from human subjects, we show that even without any D_new, our sleep stager can achieve an average classification accuracy of 70% on s_new. This accuracy increases with the size of D_new and eventually becomes close to the theoretical limit. PMID:18693884
Hwang, Yoo Na; Lee, Ju Hwan; Kim, Ga Young; Jiang, Yuan Yuan; Kim, Sung Min
2015-01-01
This paper focuses on improving the diagnostic accuracy for focal liver lesions by quantifying the key features of cysts, hemangiomas, and malignant lesions on ultrasound images. The focal liver lesions were divided into 29 cysts, 37 hemangiomas, and 33 malignancies. A total of 42 hybrid textural features, composed of 5 first-order statistics, 18 gray-level co-occurrence matrix features, 18 Law's features, and echogenicity, were extracted. A total of 29 key features selected by principal component analysis were used as inputs to a feed-forward neural network. For each lesion, the performance of the diagnosis was evaluated using the positive predictive value, negative predictive value, sensitivity, specificity, and accuracy. The results of the experiment indicate that the proposed method performs well, with a high diagnostic accuracy of over 96% for all focal liver lesion group comparisons (cyst vs. hemangioma, cyst vs. malignant, and hemangioma vs. malignant) on ultrasound images. The accuracy increased slightly when echogenicity was included in the optimal feature set. These results indicate that the proposed method could be applied clinically.
Accuracy of force and center of pressure measures of the Wii Balance Board.
Bartlett, Harrison L; Ting, Lena H; Bingham, Jeffrey T
2014-01-01
The Nintendo Wii Balance Board (WBB) is increasingly used as an inexpensive force plate for assessment of postural control; however, no documentation of force and COP accuracy and reliability is publicly available. Therefore, we performed a standard measurement uncertainty analysis on 3 lightly and 6 heavily used WBBs to provide future users with information about the repeatability and accuracy of the WBB force and COP measurements. Across WBBs, we found the total uncertainty of force measurements to be within ±9.1 N, and of COP location within ±4.1 mm. However, repeatability of a single measurement within a board was better (4.5 N, 1.5 mm), suggesting that the WBB is best used for relative measures using the same device, rather than absolute measurement across devices. Internally stored calibration values were comparable to those determined experimentally. Further, heavy wear did not significantly degrade performance. In combination with prior evaluation of WBB performance and published standards for measuring human balance, our study provides necessary information to evaluate the use of the WBB for analysis of human balance control. We suggest the WBB may be useful for low-resolution measurements, but should not be considered as a replacement for laboratory-grade force plates. Published by Elsevier B.V.
Distinct developmental profiles in typical speech acquisition
Campbell, Thomas F.; Shriberg, Lawrence D.; Green, Jordan R.; Abdi, Hervé; Rusiewicz, Heather Leavy; Venkatesh, Lakshmi; Moore, Christopher A.
2012-01-01
Three- to five-year-old children produce speech that is characterized by a high level of variability within and across individuals. This variability, which is manifest in speech movements, acoustics, and overt behaviors, can be input to subgroup discovery methods to identify cohesive subgroups of speakers or to reveal distinct developmental pathways or profiles. This investigation characterized three distinct groups of typically developing children and provided normative benchmarks for speech development. These speech development profiles, identified among 63 typically developing preschool-aged speakers (ages 36-59 mo), were derived from the children's performance on multiple measures. The profiles were obtained by submitting to a k-means cluster analysis 72 measures spanning three levels of speech analysis: behavioral (e.g., task accuracy, percentage of consonants correct), acoustic (e.g., syllable duration, syllable stress), and kinematic (e.g., variability of movements of the upper lip, lower lip, and jaw). Two of the discovered group profiles were distinguished by measures of variability but not by phonemic accuracy; the third group of children was characterized by their relatively low phonemic accuracy but not by an increase in measures of variability. Analyses revealed that of the original 72 measures, 8 key measures were sufficient to best distinguish the 3 profile groups. PMID:22357794
Gutiérrez-Clellen, Vera F.; Simon-Cereijido, Gabriela
2012-01-01
Current language tests designed to assess Spanish-English-speaking children have limited clinical accuracy and do not provide sufficient information to plan language intervention. In contrast, spontaneous language samples obtained in the two languages can help identify language impairment with higher accuracy. In this article, we describe several diagnostic indicators that can be used in language assessments based on spontaneous language samples. First, based on previous research with monolingual and bilingual English speakers, we show that a verb morphology composite measure in combination with a measure of mean length of utterance (MLU) can provide valuable diagnostic information for English development in bilingual children. Dialectal considerations are discussed. Second, we discuss the available research with bilingual Spanish speakers and show a series of procedures to be used for the analysis of Spanish samples: (a) limited MLU and proportional use of ungrammatical utterances; (b) limited grammatical accuracy on articles, verbs, and clitic pronouns; and (c) limited MLU, omission of theme arguments, and limited use of ditransitive verbs. Third, we illustrate the analysis of verb argument structure using a rubric as an assessment tool. Estimated scores on morphological and syntactic measures are expected to increase the sensitivity of clinical assessments with young bilingual children. Further research using other measures of language will be needed for older school-age children. PMID:19851951
NASA Astrophysics Data System (ADS)
Ito, Yukihiro; Natsu, Wataru; Kunieda, Masanori
This paper describes the influences of anisotropy found in the elastic modulus of monocrystalline silicon wafers on the measurement accuracy of the three-point-support inverting method, which can measure the warp and thickness of thin large panels simultaneously. Deflection due to gravity depends on the crystal orientation relative to the positions of the three-point-supports. Thus, the deviation of the actual crystal orientation from the direction indicated by the notch fabricated on the wafer causes measurement errors. Numerical analysis of the deflection confirmed that the uncertainty of thickness measurement increases from 0.168 µm to 0.524 µm due to this measurement error. In addition, experimental results showed that rotating the crystal orientation relative to the three-point-supports is effective for preventing wafer vibration excited by disturbance vibrations, because the resonance frequency of the wafer can be changed. As a result, surface shape measurement accuracy was improved by preventing resonant vibration during measurement.
Theoretical study of surface plasmon resonance sensors based on 2D bimetallic alloy grating
NASA Astrophysics Data System (ADS)
Dhibi, Abdelhak; Khemiri, Mehdi; Oumezzine, Mohamed
2016-11-01
A surface plasmon resonance (SPR) sensor based on a 2D alloy grating with high performance is proposed. The grating consists of homogeneous alloys of formula MxAg1-x, where M is gold, copper, platinum or palladium. Compared to SPR sensors based on a pure metal, the angular-interrogation sensor with silver exhibits a sharper (i.e., larger depth-to-width ratio) reflectivity dip, which provides high detection accuracy, whereas the sensor based on gold exhibits the broadest dips and the highest sensitivity. The detection accuracy of the SPR sensor based on a metal alloy is enhanced by increasing the silver composition. In addition, a silver composition of around 0.8 improves both the sensitivity and the quality of the SPR sensor relative to a pure metal. Numerical simulations based on rigorous coupled wave analysis (RCWA) show that the sensor based on a metal alloy not only has high sensitivity and high detection accuracy, but also exhibits good linearity and good quality.
Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.
Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing
2016-01-01
The accuracy of optical tracking systems is important to scientists. With the improvements reported in this regard, such systems have been applied to an increasing number of operations. To enhance the accuracy of these systems further and to reduce the effect of synchronization and visual field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronous errors, and an error distribution map of the field of view. Synchronization control maximizes the parallel processing capability of the FPGA, and synchronous error measurement can effectively detect the errors caused by synchronization in an optical tracking system. The distribution of positioning errors across the field of view can be detected through this error distribution map. Therefore, doctors can perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved. The system is analyzed and validated in this study through experiments that involve the proposed methods, which can eliminate positioning errors attributed to asynchronous cameras and different fields of view.
Ngo, L; Ho, H; Hunter, P; Quinn, K; Thomson, A; Pearson, G
2016-02-01
Post-mortem measurements (cold weight, grade and external carcass linear dimensions) as well as live animal data (age, breed, sex) were used to predict ovine primal and retail cut weights for 792 lamb carcases. Significant levels of variance could be explained using these predictors. The predictive power of those measurements on primal and retail cut weights was studied by using the results from principal component analysis and the absolute value of the t-statistics of the linear regression model. High prediction accuracy for primal cut weight was achieved (adjusted R² up to 0.95), as well as moderate accuracy for key retail cut weight: tenderloins (adj-R² = 0.60), loin (adj-R² = 0.62), French rack (adj-R² = 0.76) and rump (adj-R² = 0.75). The carcass cold weight had the best predictive power, with the accuracy increasing by around 10% after including the next three most significant variables. Copyright © 2015 Elsevier Ltd. All rights reserved.
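A minimal Python sketch of this style of analysis, with synthetic stand-in variables rather than the study's carcass data: fit an ordinary least-squares model, rank predictors by the absolute value of their t-statistics, and report the adjusted R².

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
cold_weight = rng.normal(20, 3, n)          # carcass cold weight (kg), synthetic
length = rng.normal(60, 5, n)               # an external linear dimension (cm), synthetic
age = rng.integers(4, 12, n).astype(float)  # age (months), synthetic
primal_weight = 0.4 * cold_weight + 0.02 * length + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([cold_weight, length, age]))
fit = sm.OLS(primal_weight, X).fit()

names = ["const", "cold_weight", "length", "age"]
order = np.argsort(-np.abs(fit.tvalues))
print("predictors ranked by |t|:", [names[i] for i in order])
print("adjusted R^2:", round(fit.rsquared_adj, 3))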
[Discussion of scattering in THz time domain spectrum tests].
Yan, Fang; Zhang, Zhao-hui; Zhao, Xiao-yan; Su, Hai-xia; Li, Zhi; Zhang, Han
2014-06-01
Using THz-TDS to extract the absorption spectrum of a sample is an important branch of various THz applications. THz radiation scattering from sample particles leads to an obvious baseline that increases with frequency in the absorption spectrum. This baseline affects measurement accuracy because it obscures the height and pattern of the spectrum, so it should be removed to eliminate the effects of scattering. In the present paper, we investigated the causes of baselines, reviewed several scatter-mitigation methods, and summarized directions for future research. To validate the correctness of these methods, we designed a series of experiments comparing the computational accuracy of molar concentration. The results indicated that the computational accuracy of molar concentration can be improved, which can serve as the basis for quantitative analysis in further research. Finally, drawing on the comprehensive experimental results, we present further research directions for removing scattering effects from THz absorption spectra.
Mancuso, Renzo; Osta, Rosario; Navarro, Xavier
2014-12-01
We assessed the predictive value of electrophysiological tests as a marker of clinical disease onset and survival in superoxide-dismutase 1 (SOD1)(G93A) mice. We evaluated the accuracy of electrophysiological tests in differentiating transgenic versus wild-type mice. We made a correlation analysis of electrophysiological parameters and the onset of symptoms, survival, and number of spinal motoneurons. Presymptomatic electrophysiological tests show great accuracy in differentiating transgenic versus wild-type mice, with the most sensitive parameter being the tibialis anterior compound muscle action potential (CMAP) amplitude. The CMAP amplitude at age 10 weeks correlated significantly with clinical disease onset and survival. Electrophysiological tests increased their survival prediction accuracy when evaluated at later stages of the disease and also predicted the amount of lumbar spinal motoneuron preservation. Electrophysiological tests predict clinical disease onset, survival, and spinal motoneuron preservation in SOD1(G93A) mice. This is a methodological improvement for preclinical studies. © 2014 Wiley Periodicals, Inc.
Chan, Johanna L; Lin, Li; Feiler, Michael; Wolf, Andrew I; Cardona, Diana M; Gellad, Ziad F
2012-11-07
To evaluate accuracy of in vivo diagnosis of adenomatous vs non-adenomatous polyps using i-SCAN digital chromoendoscopy compared with high-definition white light. This is a single-center comparative effectiveness pilot study. Polyps (n = 103) from 75 average-risk adult outpatients undergoing screening or surveillance colonoscopy between December 1, 2010 and April 1, 2011 were evaluated by two participating endoscopists in an academic outpatient endoscopy center. Polyps were evaluated both with high-definition white light and with i-SCAN to make an in vivo prediction of adenomatous vs non-adenomatous pathology. We determined diagnostic characteristics of i-SCAN and high-definition white light, including sensitivity, specificity, and accuracy, with regard to identifying adenomatous vs non-adenomatous polyps. Histopathologic diagnosis was the gold standard comparison. One hundred and three small polyps, detected from forty-three patients, were included in the analysis. The average size of the polyps evaluated in the analysis was 3.7 mm (SD 1.3 mm, range 2 mm to 8 mm). Formal histopathology revealed that 54/103 (52.4%) were adenomas, 26/103 (25.2%) were hyperplastic, and 23/103 (22.3%) were other diagnoses, including "lymphoid aggregates", "non-specific colitis," and "no pathologic diagnosis." Overall, the combined accuracy of endoscopists for predicting adenomas was identical between i-SCAN (71.8%, 95%CI: 62.1%-80.3%) and high-definition white light (71.8%, 95%CI: 62.1%-80.3%). However, the accuracy of each endoscopist differed substantially, where endoscopist A demonstrated 63.0% overall accuracy (95%CI: 50.9%-74.0%) as compared with endoscopist B demonstrating 93.3% overall accuracy (95%CI: 77.9%-99.2%), irrespective of imaging modality. Neither endoscopist demonstrated a significant learning effect with i-SCAN during the study. Though endoscopist A increased accuracy using i-SCAN from 59% (95%CI: 42.1%-74.4%) in the first half to 67.6% (95%CI: 49.5%-82.6%) in the second half, and endoscopist B decreased accuracy using i-SCAN from 100% (95%CI: 80.5%-100.0%) in the first half to 84.6% (95%CI: 54.6%-98.1%) in the second half, neither of these differences was statistically significant. i-SCAN and high-definition white light had similar efficacy predicting polyp histology. Endoscopist training likely plays a critical role in diagnostic test characteristics and deserves further study.
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without the complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can intuitively and systematically build an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are shown to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidsmeier, T.; Koehl, R.; Lanham, R.
2008-07-15
The current design and fabrication process for RERTR fuel plates utilizes film radiography during nondestructive testing and characterization. Digital radiographic methods offer potential increases in efficiency and accuracy. The traditional and digital radiographic methods are described and demonstrated on a fuel plate constructed with an average of 51% fuel by volume using the dispersion method. Fuel loading data from each method are analyzed and compared to a third baseline method to assess accuracy. The new digital method is shown to be more accurate, to save hours of work, and to provide additional information not easily available with the traditional method. Additional possible improvements suggested by the new digital method are also raised.
NASA Astrophysics Data System (ADS)
Tao, Gang; Wei, Guohua; Wang, Xu; Kong, Ming
2018-03-01
There has been increased interest over several decades in applying ground-based synthetic aperture radar (GB-SAR) for monitoring terrain displacement. GB-SAR can achieve multitemporal surface deformation maps of the entire terrain with high spatial resolution and submillimetric accuracy, owing to its ability to continuously monitor an area day and night regardless of weather conditions. The accuracy of the interferometric measurement result is very important. In this paper, the basic principle of InSAR is expounded and the influence of the platform's instability on the interferometric measurement results is analyzed. The error sources of deformation estimation are analyzed using the precise geometry of the imaging model. Finally, simulation results demonstrate the validity of our analysis.
Feature weighting using particle swarm optimization for learning vector quantization classifier
NASA Astrophysics Data System (ADS)
Dongoran, A.; Rahmadani, S.; Zarlis, M.; Zakarias
2018-03-01
This paper discusses and proposes a feature-weighting method for classification tasks with the competitive-learning artificial neural network LVQ. The method searches for attribute weights using particle swarm optimization (PSO) so that each attribute's influence on the resulting output is adjusted. This method is then applied to the LVQ classifier and tested on three datasets obtained from the UCI Machine Learning repository. Accuracy is then analyzed for two approaches: the first uses LVQ1 and is referred to as LVQ-Classifier, and the second, referred to as PSOFW-LVQ, is the proposed model. The results show that the PSO algorithm is capable of finding attribute weights that increase LVQ-classifier accuracy.
Mapping broom snakeweed through image analysis of color-infrared photography and digital imagery.
Everitt, J H; Yang, C
2007-11-01
A study was conducted on a south Texas rangeland area to evaluate aerial color-infrared (CIR) photography and CIR digital imagery combined with unsupervised image analysis techniques to map broom snakeweed [Gutierrezia sarothrae (Pursh.) Britt. and Rusby]. Accuracy assessments performed on computer-classified maps of photographic images from two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 88.3%, respectively; whereas, accuracy assessments performed on classified maps from digital images of the same two sites had mean producer's and user's accuracies for broom snakeweed of 98.3 and 92.8%, respectively. These results indicate that CIR photography and CIR digital imagery combined with image analysis techniques can be used successfully to map broom snakeweed infestations on south Texas rangelands.
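For reference, producer's and user's accuracies are simple row and column ratios of the classification confusion matrix; the short sketch below uses made-up counts, not the study's maps:

import numpy as np

# Rows = reference (ground truth), columns = mapped class.
# Classes: 0 = broom snakeweed, 1 = other rangeland cover (illustrative counts).
confusion = np.array([[59, 1],
                      [ 7, 53]])

snakeweed = 0
producers_accuracy = confusion[snakeweed, snakeweed] / confusion[snakeweed, :].sum()
users_accuracy = confusion[snakeweed, snakeweed] / confusion[:, snakeweed].sum()
print(f"producer's accuracy: {producers_accuracy:.1%}")  # complement of omission error
print(f"user's accuracy:     {users_accuracy:.1%}")      # complement of commission error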
Katzka, David A; Geno, Debra M; Ravi, Anupama; Smyrk, Thomas C; Lao-Sirieix, Pierre; Miremadi, Ahmed; Miramedi, Ahmed; Debiram, Irene; O'Donovan, Maria; Kita, Hirohito; Kephart, Gail M; Kryzer, Lori A; Camilleri, Michael; Alexander, Jeffrey A; Fitzgerald, Rebecca C
2015-01-01
Management of eosinophilic esophagitis (EoE) requires repeated endoscopic collection of mucosal samples to assess disease activity and response to therapy. An easier and less expensive means of monitoring of EoE is required. We compared the accuracy, safety, and tolerability of sample collection via Cytosponge (an ingestible gelatin capsule comprising compressed mesh attached to a string) with those of endoscopy for assessment of EoE. Esophageal tissues were collected from 20 patients with EoE (all with dysphagia, 15 with stricture, 13 with active EoE) via Cytosponge and then by endoscopy. Number of eosinophils/high-power field and levels of eosinophil-derived neurotoxin were determined; hematoxylin-eosin staining was performed. We compared the adequacy, diagnostic accuracy, safety, and patient preference for sample collection via Cytosponge vs endoscopy procedures. All 20 samples collected by Cytosponge were adequate for analysis. By using a cutoff value of 15 eosinophils/high power field, analysis of samples collected by Cytosponge identified 11 of the 13 individuals with active EoE (83%); additional features such as abscesses were also identified. Numbers of eosinophils in samples collected by Cytosponge correlated with those in samples collected by endoscopy (r = 0.50, P = .025). Analysis of tissues collected by Cytosponge identified 4 of the 7 patients without active EoE (57% specificity), as well as 3 cases of active EoE not identified by analysis of endoscopy samples. Including information on level of eosinophil-derived neurotoxin did not increase the accuracy of diagnosis. No complications occurred during the Cytosponge procedure, which was preferred by all patients, compared with endoscopy. In a feasibility study, the Cytosponge is a safe and well-tolerated method for collecting near mucosal specimens. Analysis of numbers of eosinophils/high-power field identified patients with active EoE with 83% sensitivity. Larger studies are needed to establish the efficacy and safety of this method of esophageal tissue collection. ClinicalTrials.gov number: NCT01585103. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
Data extraction for complex meta-analysis (DECiMAL) guide.
Pedder, Hugo; Sarri, Grammati; Keeney, Edna; Nunes, Vanessa; Dias, Sofia
2016-12-13
As more complex meta-analytical techniques such as network and multivariate meta-analyses become increasingly common, further pressures are placed on reviewers to extract data in a systematic and consistent manner. Failing to do this appropriately wastes time, resources and jeopardises accuracy. This guide (data extraction for complex meta-analysis (DECiMAL)) suggests a number of points to consider when collecting data, primarily aimed at systematic reviewers preparing data for meta-analysis. Network meta-analysis (NMA), multiple outcomes analysis and analysis combining different types of data are considered in a manner that can be useful across a range of data collection programmes. The guide has been shown to be both easy to learn and useful in a small pilot study.
Efficient alignment-free DNA barcode analytics
Kuksa, Pavel; Pavlovic, Vladimir
2009-01-01
Background: In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. Results: New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Conclusion: Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305
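A minimal sketch of the fixed-length spectrum (k-mer count) representation underlying such alignment-free methods; the sequences, species labels, and nearest-neighbour rule below are illustrative stand-ins, not the authors' exact classifiers:

from collections import Counter
from itertools import product
import numpy as np

def kmer_spectrum(seq, k=3):
    # Fixed-length vector of k-mer counts over the DNA alphabet.
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return np.array([counts[km] for km in kmers], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Toy reference barcodes and a query sequence.
train = {"speciesA": "ACGTACGTACGGACGTT", "speciesB": "TTGCAATTGCAATTGCA"}
query = "ACGTACGGACGTACGTA"

q = kmer_spectrum(query)
best = max(train, key=lambda sp: cosine(q, kmer_spectrum(train[sp])))
print("nearest species by spectrum similarity:", best)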
Wu, Rongli; Watanabe, Yoshiyuki; Arisawa, Atsuko; Takahashi, Hiroto; Tanaka, Hisashi; Fujimoto, Yasunori; Watabe, Tadashi; Isohashi, Kayako; Hatazawa, Jun; Tomiyama, Noriyuki
2017-10-01
This study aimed to compare the tumor volume definition using conventional magnetic resonance (MR) and 11C-methionine positron emission tomography (MET/PET) images in the differentiation of the pre-operative glioma grade by using whole-tumor histogram analysis of normalized cerebral blood volume (nCBV) maps. Thirty-four patients with histopathologically proven primary brain low-grade gliomas (n = 15) and high-grade gliomas (n = 19) underwent pre-operative or pre-biopsy MET/PET, fluid-attenuated inversion recovery, dynamic susceptibility contrast perfusion-weighted magnetic resonance imaging, and contrast-enhanced T1-weighted imaging at 3.0 T. The histogram distribution derived from the nCBV maps was obtained by co-registering the whole tumor volume delineated on conventional MR or MET/PET images, and eight histogram parameters were assessed. The mean nCBV value had the highest AUC value (0.906) based on MET/PET images. Diagnostic accuracy significantly improved when the tumor volume was measured from MET/PET images compared with conventional MR images for the parameters of mean, 50th, and 75th percentile nCBV value (p = 0.0246, 0.0223, and 0.0150, respectively). Whole-tumor histogram analysis of the nCBV map provides more valuable histogram parameters and increases diagnostic accuracy in the differentiation of pre-operative cerebral gliomas when the tumor volume is derived from MET/PET images.
Analysis of Elements of the Continuous Monitoring Program
2009-12-01
reasons for differences in financial reporting between the CMP and BOR and provide COMNAVSURFOR the opportunity to increase financial reporting timeliness...accuracy, and completeness of the surface fleet. A methodology was developed to analyze financial reporting within the cruiser and frigate classes...the different groupings. A Beta Test was run on six ships for two months, which tested the recommended alternatives to financial reporting and
Randall A., Jr. Schultz; Thomas C., Jr. Edwards; Gretchen G. Moisen; Tracey S. Frescino
2005-01-01
The ability of USDA Forest Service Forest Inventory and Analysis (FIA) generated spatial products to increase the predictive accuracy of spatially explicit, macroscale habitat models was examined for nest-site selection by cavity-nesting birds in Fishlake National Forest, Utah. One FIA-derived variable (percent basal area of aspen trees) was significant in the habitat...
Low numeracy predicts reduced accuracy of retrospective reports of frequency of sexual behavior.
McAuliffe, Timothy L; DiFranceisco, Wayne; Reed, Barbara R
2010-12-01
Assessment of the frequency of sexual behavior relies on participants' ability to arithmetically aggregate information over time and across partners. This study examines the effect of numeracy (arithmetic skills) on the accuracy of retrospective reports of sexual behavior. For 91 days, the participants completed daily reports about their sexual activity. Participants then completed a survey on sexual behavior over the same period. The discrepancies between the survey-based and the diary-based measures of frequency of vaginal and anal intercourse were evaluated. Multiple regression analysis showed that the discrepancy between retrospective and diary measurements of sexual intercourse increased with lower numeracy (P = 0.026), lower education (P = 0.001), aggregate question format compared to partner-by-partner format (P = 0.031) and higher frequency of intercourse occasions (P < 0.001). Lower numeracy led to a 1.5-fold increase (adjusted mean = 14.1-20.9) in the discrepancy for those using the aggregate question format and a 2.0-fold increase (adjusted mean = 3.7-7.6) for those using the partner-by-partner format.
Li, Youfang; Wang, Yumiao; Zhang, Renzhong; Wang, Jue; Li, Zhiqing; Wang, Ling; Pan, Songfeng; Yang, Yanling; Ma, Yanling; Jia, Manhong
2016-01-01
To understand the accuracy of oral fluid-based rapid HIV self-testing among men who have sex with men (MSM) and related factors. A survey was conducted among MSM selected through non-probability sampling to evaluate the quality of their rapid HIV self-testing, and related information was analyzed. Most MSM were aged 21-30 years (57.0%). Among them, 45.7% had an educational level of college or above, 78.5% were unmarried, and 59.3% were casual laborers. The overall accuracy rate of oral fluid-based self-testing was 95.0%; the step "inserting test paper into tube as indicated by arrow on it" had the highest accuracy rate (98.0%), and the step "gently upsetting tube for 3 times" had the lowest accuracy rate (65.0%). Chi-square analysis showed that educational level, not touching the middle part of the test paper, reading the instructions carefully, understanding the instructions, and inserting the test paper into the tube as indicated by the arrow on it were associated with the accuracy of oral fluid-based rapid HIV self-testing (P<0.05). Multivariate logistic regression analysis indicated that educational level, not touching the middle part of the test paper, and understanding the instructions were associated with the accuracy of oral fluid-based rapid HIV self-testing. The accuracy of oral fluid-based rapid HIV self-testing was high among MSM, and the accuracy varied with the educational level of the MSM. Touching the middle part of the test paper and understanding the instructions might influence the accuracy of the self-testing.
NASA Technical Reports Server (NTRS)
Gramling, C. J.; Long, A. C.; Lee, T.; Ottenstein, N. A.; Samii, M. V.
1991-01-01
A Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) is currently being developed by NASA to provide a high-accuracy autonomous navigation capability for users of TDRSS and its successor, the Advanced TDRSS (ATDRSS). The fully autonomous user onboard navigation system will support orbit determination, time determination, and frequency determination, based on observation of a continuously available, unscheduled navigation beacon signal. A TONS experiment will be performed in conjunction with the Explorer Platform (EP) Extreme Ultraviolet Explorer (EUVE) mission to flight qualify TONS Block 1. An overview is presented of TONS and a preliminary analysis of the navigation accuracy anticipated for the TONS experiment. Descriptions of the TONS experiment and the associated navigation objectives, as well as a description of the onboard navigation algorithms, are provided. The accuracy of the selected algorithms is evaluated based on the processing of realistic simulated TDRSS one-way forward-link Doppler measurements. The analysis process is discussed and the associated navigation accuracy results are presented.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
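The general concern, that ordinary covariance formulas understate parameter uncertainty when residuals are autocorrelated, can be illustrated with a heteroskedasticity-and-autocorrelation-consistent (HAC) covariance estimate. This is a generic sandwich-type correction for linear regression on simulated data, not the specific maximum-likelihood correction developed in the paper:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 500
t = np.linspace(0, 10, n)
# Colored noise: a first-order autoregressive residual sequence.
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.9 * e[i - 1] + rng.normal(scale=0.2)
y = 1.5 + 0.7 * t + e

X = sm.add_constant(t)
naive = sm.OLS(y, X).fit()                                   # assumes white residuals
hac = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 20})

print("slope standard error, white-noise assumption:", round(naive.bse[1], 4))
print("slope standard error, HAC (colored residuals):", round(hac.bse[1], 4))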
An extraction algorithm of pulmonary fissures from multislice CT image
NASA Astrophysics Data System (ADS)
Tachibana, Hiroyuki; Saita, Shinsuke; Yasutomo, Motokatsu; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Sasagawa, Michizo; Eguchi, Kenji; Moriyama, Noriyuki
2005-04-01
Aging and smoking history increase the incidence of pulmonary emphysema. Restoring alveoli destroyed by pulmonary emphysema is difficult, so early detection is important. Multi-slice CT technology has been improving 3-D image analysis with higher body-axis resolution and shorter scan time, and low-dose, high-accuracy scanning has become available. Multi-slice CT images help physicians measure accurately, but the huge volume of image data requires considerable time and cost. This paper addresses computer-aided analysis of emphysema regions and demonstrates the effectiveness of the proposed algorithm.
A quantitative analysis of TIMS data obtained on the Learjet 23 at various altitudes
NASA Technical Reports Server (NTRS)
Jaggi, S.
1992-01-01
A series of Thermal Infrared Multispectral Scanner (TIMS) data acquisition flights were conducted on the NASA Learjet 23 at different altitudes over a test site. The objective was to monitor the performance of the TIMS (its estimation of the brightness temperatures of the ground scene) with increasing altitude. The results do not show any significant correlation between the brightness temperatures and the altitude. The analysis indicates that the estimation of the temperatures is a function of the accuracy of the atmospheric correction used for each altitude.
Analysis of near infrared spectra for age-grading of wild populations of Anopheles gambiae.
Krajacich, Benjamin J; Meyers, Jacob I; Alout, Haoues; Dabiré, Roch K; Dowell, Floyd E; Foy, Brian D
2017-11-07
Understanding the age-structure of mosquito populations, especially malaria vectors such as Anopheles gambiae, is important for assessing the risk of infectious mosquitoes, and how vector control interventions may impact this risk. The use of near-infrared spectroscopy (NIRS) for age-grading has been demonstrated previously on laboratory and semi-field mosquitoes, but to date has not been utilized on wild-caught mosquitoes whose age is externally validated via parity status or parasite infection stage. In this study, we developed regression and classification models using NIRS on datasets of wild An. gambiae (s.l.) reared from larvae collected from the field in Burkina Faso, and two laboratory strains. We compared the accuracy of these models for predicting the ages of wild-caught mosquitoes that had been scored for their parity status as well as for positivity for Plasmodium sporozoites. Regression models utilizing variable selection increased predictive accuracy over the more common full-spectrum partial least squares (PLS) approach for cross-validation of the datasets, validation, and independent test sets. Models produced from datasets that included the greatest range of mosquito samples (i.e. different sampling locations and times) had the highest predictive accuracy on independent testing sets, though overall accuracy on these samples was low. For classification, we found that intramodel accuracy ranged from 73.5% to 97.0% for grouping of mosquitoes into "early" and "late" age classes, with the highest prediction accuracy found in laboratory colonized mosquitoes. However, this accuracy was decreased on test sets, with the highest classification accuracy for an independent set of wild-caught larvae reared to set ages being 69.6%. Variation in NIRS data, likely from dietary, genetic, and other factors, limits the accuracy of this technique with wild-caught mosquitoes. Alternative algorithms may help improve prediction accuracy, but care should be taken to either maximize variety in models or minimize confounders.
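The regression step can be illustrated with a short Python sketch using partial least squares and a crude correlation-based variable-selection filter; the spectra and ages below are synthetic, and the authors' actual variable-selection algorithm is not reproduced here:

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_mosquitoes, n_wavelengths = 120, 200
spectra = rng.normal(size=(n_mosquitoes, n_wavelengths))
age = 1 + 14 * rng.random(n_mosquitoes)                # age in days (synthetic)
spectra[:, 50] += 0.1 * age                            # embed a weak age signal

# Full-spectrum PLS as the baseline.
pred_full = cross_val_predict(PLSRegression(n_components=5), spectra, age, cv=5).ravel()

# Crude variable selection: keep the wavelengths most correlated with age,
# then refit PLS on the reduced spectrum.
corr = np.abs([np.corrcoef(spectra[:, j], age)[0, 1] for j in range(n_wavelengths)])
keep = np.argsort(corr)[-20:]
pred_sel = cross_val_predict(PLSRegression(n_components=5), spectra[:, keep], age, cv=5).ravel()

for name, pred in [("full spectrum", pred_full), ("selected variables", pred_sel)]:
    rmse = np.sqrt(np.mean((pred - age) ** 2))
    print(f"{name}: cross-validated RMSE = {rmse:.2f} days")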
Diagnostic accuracy of imaging devices in glaucoma: A meta-analysis.
Fallon, Monica; Valero, Oliver; Pazos, Marta; Antón, Alfonso
Imaging devices such as the Heidelberg retinal tomograph-3 (HRT3), scanning laser polarimetry (GDx), and optical coherence tomography (OCT) play an important role in glaucoma diagnosis. A systematic search for evidence-based data was performed for prospective studies evaluating the diagnostic accuracy of HRT3, GDx, and OCT. The diagnostic odds ratio (DOR) was calculated. To compare the accuracy among instruments and parameters, a meta-analysis considering the hierarchical summary receiver-operating characteristic model was performed. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool, version 2. Studies in the context of screening programs were used for qualitative analysis. Eighty-six articles were included. The DOR values were 29.5 for OCT, 18.6 for GDx, and 13.9 for HRT. The heterogeneity analysis demonstrated a statistically significant influence of the degree of damage and ethnicity. Studies analyzing patients with earlier glaucoma showed poorer results. The risk of bias was high for patient selection. Screening studies showed lower sensitivity values and similar specificity values when compared with those included in the meta-analysis. The classification capabilities of GDx, HRT, and OCT were high and similar across the 3 instruments. The highest estimated DOR was obtained with OCT. Diagnostic accuracy could be overestimated in studies including prediagnosed groups of subjects. Copyright © 2017 Elsevier Inc. All rights reserved.
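For reference, the diagnostic odds ratio summarizes a 2x2 accuracy table as (TP/FN)/(FP/TN); a minimal sketch with illustrative counts (not values from the meta-analysis) and a standard log-scale confidence interval:

import math

# Illustrative 2x2 table: rows = disease status, columns = test result.
tp, fn = 90, 10   # glaucoma patients: test positive / negative
fp, tn = 15, 85   # healthy controls: test positive / negative

dor = (tp / fn) / (fp / tn)
se_log_dor = math.sqrt(1/tp + 1/fn + 1/fp + 1/tn)
lo = math.exp(math.log(dor) - 1.96 * se_log_dor)
hi = math.exp(math.log(dor) + 1.96 * se_log_dor)
print(f"DOR = {dor:.1f} (95% CI {lo:.1f} to {hi:.1f})")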
Design and Theoretical Analysis of a Resonant Sensor for Liquid Density Measurement
Zheng, Dezhi; Shi, Jiying; Fan, Shangchun
2012-01-01
In order to increase the accuracy of on-line liquid density measurements, a sensor equipped with a tuning fork as the resonant sensitive component is designed in this paper. It is a quasi-digital sensor with simple structure and high precision. The sensor is based on resonance theory and composed of a sensitive unit and a closed-loop control unit, where the sensitive unit consists of the actuator, the resonant tuning fork and the detector and the closed-loop control unit comprises precondition circuit, digital signal processing and control unit, analog-to-digital converter and digital-to-analog converter. An approximate parameters model of the tuning fork is established and the impact of liquid density, position of the tuning fork, temperature and structural parameters on the natural frequency of the tuning fork are also analyzed. On this basis, a tuning fork liquid density measurement sensor is developed. In addition, experimental testing on the sensor has been carried out on standard calibration facilities under constant 20 °C, and the sensor coefficients are calibrated. The experimental results show that the repeatability error is about 0.03% and the accuracy is about 0.4 kg/m3. The results also confirm that the method to increase the accuracy of liquid density measurement is feasible. PMID:22969378
Mao, Nini; Liu, Yunting; Chen, Kewei; Yao, Li; Wu, Xia
2018-06-05
Multiple neuroimaging modalities have been developed, each providing different information about the human brain. Used together and properly, these complementary multimodal neuroimaging data integrate multisource information, which can facilitate diagnosis and improve diagnostic accuracy. In this study, 3 types of brain imaging data (sMRI, FDG-PET, and florbetapir-PET) were fused in the hope of improving diagnostic accuracy, and multivariate methods (logistic regression) were applied to these trimodal neuroimaging indices. Then, the receiver-operating characteristic (ROC) method was used to analyze the outcomes of the logistic classifier with either a single index, multiple indices from each modality, or all indices from all 3 modalities, to investigate their differential abilities to identify the disease. With increasing numbers of indices within each modality and across modalities, the accuracy of identifying Alzheimer disease (AD) increases to varying degrees. For example, the area under the ROC curve is above 0.98 when all the indices from the 3 imaging data types are combined. Using a combination of different indices, the results confirmed the initial hypothesis that different biomarkers are potentially complementary, and thus the conjoint analysis of information from multiple sources would improve the capability to identify diseases such as AD and mild cognitive impairment. © 2018 S. Karger AG, Basel.
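A minimal sketch of the multivariate step described above, assuming a table of per-subject imaging indices and diagnostic labels (all data here are simulated, not from the study): fit a logistic classifier on a single index and on all indices, and compare cross-validated areas under the ROC curve.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n = 200
y = rng.integers(0, 2, n)                       # 1 = patient, 0 = control (simulated)
smri = rng.normal(size=n) + 0.8 * y             # e.g., a structural MRI index
fdg = rng.normal(size=n) - 0.8 * y              # e.g., an FDG-PET metabolism index
amyloid = rng.normal(size=n) + 1.0 * y          # e.g., a florbetapir uptake index

def cv_auc(X):
    p = cross_val_predict(LogisticRegression(), X, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, p)

print("AUC, single index alone:  ", round(cv_auc(smri.reshape(-1, 1)), 3))
print("AUC, all three modalities:", round(cv_auc(np.column_stack([smri, fdg, amyloid])), 3))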
Chitinase enzyme activity in CSF is a powerful biomarker of Alzheimer disease.
Watabe-Rudolph, M; Song, Z; Lausser, L; Schnack, C; Begus-Nahrmann, Y; Scheithauer, M-O; Rettinger, G; Otto, M; Tumani, H; Thal, D R; Attems, J; Jellinger, K A; Kestler, H A; von Arnim, C A F; Rudolph, K L
2012-02-21
DNA damage accumulation in brain is associated with the development of Alzheimer disease (AD), but newly identified protein markers of DNA damage have not been evaluated in the diagnosis of AD and other forms of dementia. Here, we analyzed the level of novel biomarkers of DNA damage and telomere dysfunction (chitinase activity, N-acetyl-glucosaminidase activity, stathmin, and EF-1α) in CSF of 94 patients with AD, 41 patients with non-AD dementia, and 40 control patients without dementia. Enzymatic activity of chitinase (chitotriosidase activity) and stathmin protein level were significantly increased in CSF of patients with AD and non-AD dementia compared with that of no dementia control patients. As a single marker, chitinase activity was most powerful for distinguishing patients with AD from no dementia patients with an accuracy of 85.8% using a single threshold. Discrimination was even superior to clinically standard CSF markers that showed an accuracy of 78.4% (β-amyloid) and 77.6% (tau). Combined analysis of chitinase with other markers increased the accuracy to a maximum of 91%. The biomarkers of DNA damage were also increased in CSF of patients with non-AD dementia compared with no dementia patients, and the new biomarkers improved the diagnosis of non-AD dementia as well as the discrimination of AD from non-AD dementia. Taken together, the findings in this study provide experimental evidence that DNA damage markers are significantly increased in AD and non-AD dementia. The biomarkers identified outperformed the standard CSF markers for diagnosing AD and non-AD dementia in the cohort investigated.
ERIC Educational Resources Information Center
Goomas, David T.
2012-01-01
The effects of wireless ring scanners, which provided immediate auditory and visual feedback, were evaluated to increase the performance and accuracy of order selectors at a meat distribution center. The scanners not only increased performance and accuracy compared to paper pick sheets, but were also instrumental in immediate and accurate data…
Orhan, Umut; Erdogmus, Deniz; Roark, Brian; Purwar, Shalini; Hild, Kenneth E.; Oken, Barry; Nezamfar, Hooman; Fried-Oken, Melanie
2013-01-01
Event related potentials (ERP) corresponding to a stimulus in electroencephalography (EEG) can be used to detect the intent of a person for brain computer interfaces (BCI). This paradigm is widely utilized to build letter-by-letter text input systems using BCI. Nevertheless, a BCI typewriter that depends only on EEG responses will not be sufficiently accurate for single-trial operation in general, and existing systems utilize many-trial schemes to achieve accuracy at the cost of speed. Hence, incorporation of a language-model-based prior or additional evidence is vital to improve accuracy and speed. In this paper, we study the effects of Bayesian fusion of an n-gram language model with a regularized discriminant analysis ERP detector for EEG-based BCIs. The letter classification accuracies are rigorously evaluated for varying language model orders as well as the number of ERP-inducing trials. The results demonstrate that the language models contribute significantly to letter classification accuracy. Specifically, we find that a BCI-speller supported by a 4-gram language model may achieve the same performance using 3-trial ERP classification for the initial letters of the words and using single-trial ERP classification for the subsequent ones. Overall, fusion of evidence from EEG and language models yields a significant opportunity to increase the word rate of a BCI-based typing system. PMID:22255652
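The fusion itself can be written in a few lines: the posterior probability of each candidate letter is proportional to the ERP-classifier likelihood times the language-model prior given the typed history. The sketch below uses a toy three-letter alphabet, a toy bigram prior, and made-up likelihoods rather than the study's regularized discriminant analysis scores:

import numpy as np

letters = ["a", "b", "c"]

# Toy bigram prior P(next letter | previous letter) after the user has typed "a".
bigram_prior = {"a": np.array([0.2, 0.5, 0.3])}

# Toy ERP likelihoods P(EEG evidence | intended letter) from a single trial.
erp_likelihood = np.array([0.30, 0.25, 0.45])

posterior = erp_likelihood * bigram_prior["a"]
posterior /= posterior.sum()

for letter, p in zip(letters, posterior):
    print(f"P({letter} | EEG, history='a') = {p:.2f}")
print("decision:", letters[int(np.argmax(posterior))])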
Stirling, Paul; Valsalan Mannambeth, Rejith; Soler, Agustin; Batta, Vineet; Malhotra, Rajeev Kumar; Kalairajah, Yegappan
2015-03-18
To summarise and compare currently available evidence regarding accuracy of pre-operative imaging, which is one of the key choices for surgeons contemplating patient-specific instrumentation (PSI) surgery. The MEDLINE and EMBASE medical literature databases were searched, from January 1990 to December 2013, to identify relevant studies. The data from several clinical studies were assimilated to allow appreciation and comparison of the accuracy of each modality. The overall accuracy of each modality was calculated as the proportion of outliers > 3% in the coronal plane for either computerised tomography (CT) or magnetic resonance imaging (MRI). Seven clinical studies matched our inclusion criteria for comparison and were included in our study for statistical analysis. Three of these reported series using MRI and four using CT. The overall percentage of outliers > 3% in patients with CT-based PSI systems was 12.5% vs 16.9% for MRI-based systems. These results were not statistically significant. Although many studies have been undertaken to determine the ideal pre-operative imaging modality, conclusions remain speculative in the absence of long-term data. Ultimately, information regarding accuracy of CT and MRI will be the main determining factor. Increased accuracy of pre-operative imaging could result in longer-term savings, and reduced accumulated dose of radiation by eliminating the need for post-operative imaging and revision surgery.
Stephan, Peter; Schmid, Christina; Freckmann, Guido; Pleus, Stefan; Haug, Cornelia; Müller, Peter
2015-10-09
The measurement accuracy of systems for self-monitoring of blood glucose (SMBG) is usually analyzed by a method comparison in which the analysis results are displayed using difference plots or similar graphs. However, such plots become difficult to comprehend as the number of data points displayed increases. This article introduces a new approach, the rectangle target plot (RTP), which aims to provide a simplified and comprehensible visualization of accuracy data. The RTP is based on ISO 15197 accuracy evaluations of SMBG systems. Two-sided tolerance intervals for normally distributed data are calculated for absolute and relative differences at glucose concentrations <100 mg/dL and ≥100 mg/dL. These tolerance intervals provide an estimator of where a 90% proportion of results is found with a confidence level of 95%. Plotting these tolerance intervals generates a rectangle whose center indicates the systematic measurement difference of the investigated system relative to the comparison method. The size of the rectangle depends on the measurement variability. The RTP provides a means of displaying measurement accuracy data in a simple and comprehensible manner. The visualization is simplified by reducing the displayed information from typically 200 data points to just 1 rectangle. Furthermore, this allows data for several systems or several lots from 1 system to be displayed clearly and concisely in a single graph. © 2015 Diabetes Technology Society.
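A sketch of the tolerance-interval computation underlying the RTP, using Howe's approximation for the two-sided normal tolerance factor; the glucose differences are simulated, and this is an illustration of the general method rather than the authors' implementation:

import numpy as np
from scipy import stats

def tolerance_interval(x, proportion=0.90, confidence=0.95):
    # Two-sided normal tolerance interval via Howe's k-factor approximation.
    n = len(x)
    nu = n - 1
    z = stats.norm.ppf((1 + proportion) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, nu)
    k = z * np.sqrt(nu * (1 + 1 / n) / chi2)
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

rng = np.random.default_rng(4)
# Simulated relative differences (%) of an SMBG system vs. its comparison method (>=100 mg/dL).
rel_diff = rng.normal(loc=2.0, scale=4.0, size=100)
low, high = tolerance_interval(rel_diff)
print(f"rectangle edge (relative differences): {low:.1f}% to {high:.1f}%")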
Thomas, Cibu; Ye, Frank Q; Irfanoglu, M Okan; Modi, Pooja; Saleem, Kadharbatcha S; Leopold, David A; Pierpaoli, Carlo
2014-11-18
Tractography based on diffusion-weighted MRI (DWI) is widely used for mapping the structural connections of the human brain. Its accuracy is known to be limited by technical factors affecting in vivo data acquisition, such as noise, artifacts, and data undersampling resulting from scan time constraints. It generally is assumed that improvements in data quality and implementation of sophisticated tractography methods will lead to increasingly accurate maps of human anatomical connections. However, assessing the anatomical accuracy of DWI tractography is difficult because of the lack of independent knowledge of the true anatomical connections in humans. Here we investigate the future prospects of DWI-based connectional imaging by applying advanced tractography methods to an ex vivo DWI dataset of the macaque brain. The results of different tractography methods were compared with maps of known axonal projections from previous tracer studies in the macaque. Despite the exceptional quality of the DWI data, none of the methods demonstrated high anatomical accuracy. The methods that showed the highest sensitivity showed the lowest specificity, and vice versa. Additionally, anatomical accuracy was highly dependent upon parameters of the tractography algorithm, with different optimal values for mapping different pathways. These results suggest that there is an inherent limitation in determining long-range anatomical projections based on voxel-averaged estimates of local fiber orientation obtained from DWI data that is unlikely to be overcome by improvements in data acquisition and analysis alone.
Automotive System for Remote Surface Classification.
Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail
2017-04-01
In this paper we discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and then principal component analysis and supervised classification are applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested for recognition of a large number of real surfaces in different weather conditions, with an average correct-classification accuracy of 95%. The results thereby demonstrate that the proposed system architecture and statistical methods allow reliable discrimination of various road surfaces in real conditions.
The construction of high-accuracy schemes for acoustic equations
NASA Technical Reports Server (NTRS)
Tang, Lei; Baeder, James D.
1995-01-01
An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Thus, some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate a uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.
NASA Astrophysics Data System (ADS)
Obuchowski, Nancy A.; Bullen, Jennifer A.
2018-04-01
Receiver operating characteristic (ROC) analysis is a tool used to describe the discrimination accuracy of a diagnostic test or prediction model. While sensitivity and specificity are the basic metrics of accuracy, they have many limitations when characterizing test accuracy, particularly when comparing the accuracies of competing tests. In this article we review the basic study design features of ROC studies, illustrate sample size calculations, present statistical methods for measuring and comparing accuracy, and highlight commonly used ROC software. We include descriptions of multi-reader ROC study design and analysis, address frequently seen problems of verification and location bias, discuss clustered data, and provide strategies for testing endpoints in ROC studies. The methods are illustrated with a study of transmission ultrasound for diagnosing breast lesions.
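As a small worked example of the basic accuracy metric discussed, the empirical area under the ROC curve equals the Mann-Whitney U statistic scaled to [0, 1]; the sketch below computes it, together with the Hanley-McNeil standard error, for made-up diseased and non-diseased test scores:

import numpy as np

def empirical_auc(pos, neg):
    # AUC as the probability a diseased score exceeds a non-diseased score (ties count 1/2).
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def hanley_mcneil_se(auc, n_pos, n_neg):
    # Approximate standard error of the empirical AUC (Hanley & McNeil, 1982).
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    var = auc * (1 - auc) + (n_pos - 1) * (q1 - auc**2) + (n_neg - 1) * (q2 - auc**2)
    return np.sqrt(var / (n_pos * n_neg))

rng = np.random.default_rng(5)
diseased = rng.normal(1.0, 1.0, 60)       # made-up test scores
non_diseased = rng.normal(0.0, 1.0, 80)
auc = empirical_auc(diseased, non_diseased)
print(f"AUC = {auc:.3f} +/- {hanley_mcneil_se(auc, 60, 80):.3f}")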
6DOF Testing of the SLS Inertial Navigation Unit
NASA Technical Reports Server (NTRS)
Geohagan, Kevin; Bernard, Bill; Oliver, T. Emerson; Leggett, Jared; Strickland, Dennis
2018-01-01
The Navigation System on the NASA Space Launch System (SLS) Block 1 vehicle performs initial alignment of the Inertial Navigation System (INS) navigation frame through gyrocompass alignment (GCA). Because the navigation architecture for the SLS Block 1 vehicle is a purely inertial system, the accuracy of the achieved orbit relative to mission requirements is very sensitive to initial alignment accuracy. The assessment of this sensitivity and many others via simulation is a part of the SLS Model-Based Design and Model-Based Requirements approach. As a part of the aforementioned, 6DOF Monte Carlo simulation is used in large part to develop and demonstrate verification of program requirements. To facilitate this and the GN&C flight software design process, an SLS-Program-controlled Design Math Model (DMM) of the SLS INS was developed by the SLS Navigation Team. The SLS INS model implements all of the key functions of the hardware, namely GCA, inertial navigation, and FDIR (Fault Detection, Isolation, and Recovery), in support of SLS GN&C design requirements verification. Despite the strong sensitivity to initial alignment, GCA accuracy requirements were not verified by test due to program cost and schedule constraints. Instead, the system relies upon assessments performed using the SLS INS model. In order to verify SLS program requirements by analysis, the SLS INS model is verified and validated against flight hardware. In lieu of direct testing of GCA accuracy in support of requirement verification, the SLS Navigation Team proposed and conducted an engineering test to, among other things, validate the GCA performance and overall behavior of the SLS INS model through comparison with test data. This paper will detail dynamic hardware testing of the SLS INS, conducted by the SLS Navigation Team at Marshall Space Flight Center's 6DOF Table Facility, in support of GCA performance characterization and INS model validation. A 6DOF motion platform was used to produce 6DOF pad twist and sway dynamics while a simulated SLS flight computer communicated with the INS. Tests conducted include an evaluation of GCA algorithm robustness to increasingly dynamic pad environments, an examination of GCA algorithm stability and accuracy over long durations, and a long-duration static test to gather enough data for Allan Variance analysis. Test setup, execution, and data analysis will be discussed, including analysis performed in support of SLS INS model validation.
Colon Capsule Endoscopy for the Detection of Colorectal Polyps: An Economic Analysis
Palimaka, Stefan; Blackhouse, Gord; Goeree, Ron
2015-01-01
Background Colorectal cancer is a leading cause of mortality and morbidity in Ontario. Most cases of colorectal cancer are preventable through early diagnosis and the removal of precancerous polyps. Colon capsule endoscopy is a non-invasive test for detecting colorectal polyps. Objectives The objectives of this analysis were to evaluate the cost-effectiveness and the impact on the Ontario health budget of implementing colon capsule endoscopy for detecting advanced colorectal polyps among adult patients who have been referred for computed tomographic (CT) colonography. Methods We performed an original cost-effectiveness analysis to assess the additional cost of CT colonography and colon capsule endoscopy resulting from misdiagnoses. We generated diagnostic accuracy data from a clinical evidence-based analysis (reported separately), and we developed a deterministic Markov model to estimate the additional long-term costs and life-years lost due to false-negative results. We then also performed a budget impact analysis using data from Ontario administrative sources. One-year costs were estimated for CT colonography and colon capsule endoscopy (replacing all CT colonography procedures, and replacing only those CT colonography procedures in patients with an incomplete colonoscopy within the previous year). We conducted this analysis from the payer perspective. Results Using the point estimates of diagnostic accuracy from the head-to-head study between colon capsule endoscopy and CT colonography, we found the additional cost of false-positive results for colon capsule endoscopy to be $0.41 per patient, while additional false-negatives for the CT colonography arm generated an added cost of $116 per patient, with 0.0096 life-years lost per patient due to cancer. This results in an additional cost of $26,750 per life-year gained for colon capsule endoscopy compared with CT colonography. The total 1-year cost to replace all CT colonography procedures with colon capsule endoscopy in Ontario is about $2.72 million; replacing only those CT colonography procedures in patients with an incomplete colonoscopy in the previous year would cost about $740,600 in the first year. Limitations The difference in accuracy between colon capsule endoscopy and CT colonography was not statistically significant for the detection of advanced adenomas (≥ 10 mm in diameter), according to the head-to-head clinical study from which the diagnostic accuracy was taken. This leads to uncertainty in the economic analysis, with results highly sensitive to changes in diagnostic accuracy. Conclusions The cost-effectiveness of colon capsule endoscopy for use in patients referred for CT colonography is $26,750 per life-year, assuming an increased sensitivity of colon capsule endoscopy. Replacement of CT colonography with colon capsule endoscopy is associated with moderate costs to the health care system. PMID:26366240
Flat-Sky Pseudo-Cls Analysis for Weak Gravitational Lensing
NASA Astrophysics Data System (ADS)
Asgari, Marika; Taylor, Andy; Joachimi, Benjamin; Kitching, Thomas D.
2018-05-01
We investigate the use of estimators of weak lensing power spectra based on a flat-sky implementation of the 'Pseudo-Cl' (PCl) technique, where the masked shear field is transformed without regard for masked regions of sky. This masking mixes power between multipoles and mixes 'E'-mode convergence with 'B'-modes. To study the accuracy of forward-modelling and full-sky power spectrum recovery we consider both large-area survey geometries, and small-scale masking due to stars and a checkerboard model for field-of-view gaps. The power spectrum for the large-area survey geometry is sparsely-sampled and highly oscillatory, which makes modelling problematic. Instead, we derive an overall calibration for large-area mask bias using simulated fields. The effects of small-area star masks can be accurately corrected for, while the checkerboard mask has oscillatory and spiky behaviour which leads to percent-level biases. Apodisation of the masked fields leads to increased biases and a loss of information. We find that we can construct an unbiased forward-model of the raw PCls, and recover the full-sky convergence power to within a few percent accuracy for both Gaussian and lognormal-distributed shear fields. Propagating this through to cosmological parameters using a Fisher-Matrix formalism, we find we can make unbiased estimates of parameters for surveys up to 1,200 deg² with 30 galaxies per arcmin², beyond which the percent-level biases become larger than the statistical accuracy. This implies that a flat-sky PCl analysis is accurate for current surveys but a Euclid-like survey will require higher accuracy.
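The core of the flat-sky PCl idea (transform the masked field as-is and compare its binned power to the full-sky expectation) can be illustrated with a small numerical sketch. The grid size, mask shape and the simple fsky normalisation below are simplifying assumptions; the paper's forward-modelling of mask-induced mode mixing is far more careful than this.

```python
import numpy as np

# Flat-sky pseudo-Cl sketch: FFT the masked field "without regard for masked
# regions", azimuthally bin the 2D power, and compare to the unmasked power
# scaled by the observed sky fraction (a crude fsky approximation).
n = 256
field = np.random.default_rng(1).normal(size=(n, n))    # stand-in convergence map
mask = np.ones((n, n)); mask[:, :n // 4] = 0.0          # simple survey-edge mask
fsky = mask.mean()

def binned_power(img):
    f = np.fft.fftshift(np.fft.fft2(img))
    p2d = np.abs(f) ** 2 / img.size
    ky, kx = np.indices(img.shape) - n // 2
    k = np.hypot(kx, ky).astype(int)
    radial = np.bincount(k.ravel(), weights=p2d.ravel()) / np.bincount(k.ravel())
    return radial[1:n // 2]             # azimuthally averaged power per |k| bin

cl_true = binned_power(field)
cl_pseudo = binned_power(field * mask)
print("mean ratio pseudo / (fsky * true):", np.mean(cl_pseudo / (fsky * cl_true)))
```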
Gong, Gordon; Mattevada, Sravan; O'Bryant, Sid E
2014-04-01
Exposure to arsenic causes many diseases. Most Americans in rural areas use groundwater for drinking, which may contain arsenic above the currently allowable level of 10 µg/L. It is cost-effective to estimate groundwater arsenic levels based on data from wells with known arsenic concentrations. We compared the accuracy of several commonly used interpolation methods in estimating arsenic concentrations in >8000 wells in Texas using the leave-one-out cross-validation technique. The correlation coefficient between measured and estimated arsenic levels was greater with inverse distance weighted (IDW) interpolation than with kriging Gaussian, kriging spherical or cokriging interpolation when analyzing data from wells across all of Texas (p<0.0001). The correlation coefficient was significantly lower with cokriging than with any other method (p<0.006) for wells in Texas, east Texas or the Edwards aquifer. The correlation coefficient was significantly greater for wells in the southwestern Texas Panhandle than in east Texas, and was higher for wells in the Ogallala aquifer than in the Edwards aquifer (p<0.0001), regardless of interpolation method. In regression analysis, the best models were those in which well depth and/or elevation were entered as covariates, regardless of area/aquifer or interpolation method, and models with IDW were better than those with kriging in any area/aquifer. In conclusion, the accuracy of estimating groundwater arsenic levels in Texas depends on both the interpolation method and the wells' geographic distributions and characteristics. Taking well depth and elevation into the regression analysis as covariates significantly increases the accuracy of estimating groundwater arsenic levels in Texas, with IDW in particular. Published by Elsevier Inc.
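A minimal sketch of the IDW-with-leave-one-out-cross-validation procedure described above follows; the well coordinates, arsenic values and power parameter are hypothetical stand-ins for the Texas well database.

```python
import numpy as np

# Inverse-distance-weighted (IDW) estimation with leave-one-out cross-validation.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(200, 2))                    # well locations (km)
arsenic = rng.lognormal(mean=1.5, sigma=0.6, size=200)     # concentrations (ug/L)

def idw_estimate(target, pts, vals, power=2.0):
    d = np.linalg.norm(pts - target, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * vals) / np.sum(w)

# Leave-one-out: estimate each well from all the others, then correlate.
loo = np.array([
    idw_estimate(xy[i], np.delete(xy, i, axis=0), np.delete(arsenic, i))
    for i in range(len(arsenic))
])
r = np.corrcoef(arsenic, loo)[0, 1]
print(f"LOOCV correlation between measured and IDW-estimated levels: r = {r:.3f}")
```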
Wright, Gavin; Harrold, Natalie; Bownes, Peter
2018-01-01
Aims To compare the accuracies of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results Both algorithms matched point-dose measurements to within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs 4.0%), with no discernible difference in relative dose distribution accuracy. In our study, convolution-calculated plans yielded a D99% that was 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than that of shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; therefore, its implementation may require a re-evaluation of prescription doses. PMID:29657896
Moura, Renata Vasconcellos; Kojima, Alberto Noriyuki; Saraceni, Cintia Helena Coury; Bassolli, Lucas; Balducci, Ivan; Özcan, Mutlu; Mesquita, Alfredo Mikail Melo
2018-05-01
The increased use of CAD systems can generate doubt about the accuracy of digital impressions for angulated implants. The aim of this study was to evaluate the accuracy of different impression techniques, two conventional and one digital, for implants with and without angulation. We used a polyurethane cast that simulates the human maxilla according to ASTM F1839, and 6 tapered implants were installed with external hexagonal connections to simulate tooth positions 17, 15, 12, 23, 25, and 27. Implants 17 and 23 were placed with 15° of mesial angulation and distal angulation, respectively. Mini cone abutments were installed on these implants with a metal strap 1 mm in height. Conventional and digital impression procedures were performed on the maxillary master cast, and the implants were separated into 6 groups based on the technique used and measurement type: G1 - control, G2 - digital impression, G3 - conventional impression with an open tray, G4 - conventional impression with a closed tray, G5 - conventional impression with an open tray and a digital impression, and G6 - conventional impression with a closed tray and a digital impression. A statistical analysis was performed using two-way repeated measures ANOVA to compare the groups, and a Kruskal-Wallis test was conducted to analyze the accuracy of the techniques. No significant difference in the accuracy of the techniques was observed between the groups. Therefore, no differences were found among the conventional impression and the combination of conventional and digital impressions, and the angulation of the implants did not affect the accuracy of the techniques. All of the techniques exhibited trueness and had acceptable precision. The variation of the angle of the implants did not affect the accuracy of the techniques. © 2018 by the American College of Prosthodontists.
NASA Astrophysics Data System (ADS)
Al-Durgham, Kaleel; Lichti, Derek D.; Kuntze, Gregor; Ronsky, Janet
2017-06-01
High-speed biplanar videoradiography, clinically referred to as dual fluoroscopy (DF), is being used increasingly for skeletal kinematics analysis. Typically, a DF system comprises two X-ray sources, two image intensifiers and two high-speed video cameras. The combination of these elements provides time-series image pairs of the articulating bones of a joint, which permits the measurement of bony rotation and translation in 3D at high temporal resolution (e.g., 120-250 Hz). Assessment of the accuracy of 3D measurements derived from DF imaging has been the subject of recent research efforts by several groups, however with methodological limitations. This paper presents a novel and simple accuracy assessment procedure based on precise photogrammetric tools. We address the fundamental photogrammetric principles for the accuracy evaluation of an imaging system. Bundle adjustment with self-calibration is used for the estimation of the system parameters. The bundle adjustment calibration uses an appropriate sensor model and applies free-network constraints and relative orientation stability constraints for a precise estimation of the system parameters. A photogrammetric intersection of time-series image pairs is used for the 3D reconstruction of a rotating planar object. A point-based registration method is used to combine the 3D coordinates from the intersection with independently surveyed coordinates. The final DF accuracy measure is reported as the distance between the 3D coordinates from image intersection and the independently surveyed coordinates. The accuracy assessment procedure is designed to evaluate the accuracy over the full DF image format and a wide range of object rotations. The rotating planar object reconstruction experiment yielded an average positional error of 0.44 ± 0.2 mm in the derived 3D coordinates (minimum 0.05 mm, maximum 1.2 mm).
Outcome Prediction in Mathematical Models of Immune Response to Infection.
Mai, Manuel; Wang, Kun; Huber, Greg; Kirby, Michael; Shattuck, Mark D; O'Hern, Corey S
2015-01-01
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models for the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and to obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e., that the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
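The "virtual patient" workflow described above (solve an ODE model for many random parameter draws, label each run by its outcome, and train a classifier on early time points) can be sketched as follows. The toy pathogen-immune ODE, the parameter means, the outcome-labelling rule and the variability level v are illustrative assumptions, not the models analysed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
v = 0.2                                   # coefficient of variation of parameters

def immune_ode(t, y, r, k, a, d):
    P, I = y                              # pathogen load, immune response
    return [r * P - k * P * I, a * P * I - d * I]

X, labels = [], []
for _ in range(300):
    means = np.array([1.0, 1.0, 0.8, 0.5])
    r, k, a, d = rng.normal(means, v * means)             # one "virtual patient"
    sol = solve_ivp(immune_ode, (0, 20), [0.1, 0.1], args=(r, k, a, d),
                    t_eval=np.linspace(0, 20, 41))
    X.append(sol.y[:, :6].ravel())                        # early time points only
    labels.append(int(sol.y[0, -1] < d / a))              # toy outcome label

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(labels), random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("early-time prediction accuracy:", clf.score(Xte, yte))
```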
Research on High Accuracy Detection of Red Tide Hyperspectral Based on Deep Learning CNN
NASA Astrophysics Data System (ADS)
Hu, Y.; Ma, Y.; An, J.
2018-04-01
Increasing frequency of red tide outbreaks has been reported around the world. These outbreaks are of great concern due not only to their adverse effects on human health and marine organisms, but also to their impacts on the economy of the affected areas. This paper puts forward a high accuracy detection method based on an 8-layer fully-connected deep CNN detection model to monitor red tide in hyperspectral remote sensing images, and then discusses a glint suppression method for improving the accuracy of red tide detection. The results show that the proposed CNN hyperspectral detection model can detect red tide accurately and effectively. The red tide detection accuracy of the proposed CNN model based on the original image and the filtered image is 95.58% and 97.45%, respectively; compared with the SVM method, the CNN detection accuracy is increased by 7.52% and 2.25%, respectively. Compared with the SVM method based on the original image, the red tide CNN detection accuracy based on the filtered image increased by 8.62% and 6.37%. This also indicates that image glint seriously affects the accuracy of red tide detection.
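As a rough illustration of a per-pixel spectral classifier of the kind described above, the sketch below trains a small 1D convolutional network on synthetic spectra in PyTorch. The band count, layer sizes and training data are assumptions; the paper's 8-layer fully-connected architecture is not reproduced exactly.

```python
import torch
import torch.nn as nn

n_bands = 64
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),                      # two classes: red tide / background
)

# Synthetic training data: each "pixel" is a spectrum of n_bands reflectances,
# with a fake spectral feature injected for the red tide class.
x = torch.randn(512, 1, n_bands)
y = torch.randint(0, 2, (512,))
x[y == 1, :, 20:30] += 1.0

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

accuracy = (model(x).argmax(dim=1) == y).float().mean()
print(f"training accuracy on synthetic spectra: {accuracy:.2%}")
```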
Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.
Maldonado, G; Greenland, S
1998-07-01
A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
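The factored-versus-unfactored comparison can be reproduced in miniature with statsmodels: the same simulated follow-up data are fit once with exposure as a linear term and once as a set of indicator variables. The dose categories, rates, person-time and the true (non-log-linear) dose-response below are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
dose = np.repeat([0, 1, 2, 3], 500)                     # ordered exposure categories
person_years = rng.uniform(1.0, 5.0, size=dose.size)
true_rates = 0.02 * np.array([1.0, 1.2, 2.5, 2.8])      # rate per person-year by category
cases = rng.poisson(true_rates[dose] * person_years)
df = pd.DataFrame({"cases": cases, "dose": dose, "pt": person_years})

linear = smf.glm("cases ~ dose", data=df, family=sm.families.Poisson(),
                 offset=np.log(df["pt"])).fit()
factored = smf.glm("cases ~ C(dose)", data=df, family=sm.families.Poisson(),
                   offset=np.log(df["pt"])).fit()

# Fitted category rates: the linear model trades bias for variance,
# the factored model the reverse.
linear_fit = np.exp(linear.params["Intercept"] + linear.params["dose"] * np.arange(4))
factored_fit = np.exp(factored.params["Intercept"]
                      + np.array([0.0] + [factored.params[f"C(dose)[T.{d}]"] for d in (1, 2, 3)]))
print("true rates:    ", np.round(true_rates, 4))
print("linear model:  ", np.round(linear_fit, 4))
print("factored model:", np.round(factored_fit, 4))
```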
Pillai, Rekha N; Konje, Justin C; Richardson, Matthew; Tincello, Douglas G; Potdar, Neelam
2018-01-01
Both ultrasound and biochemical markers, either alone or in combination, have been described in the literature for the prediction of miscarriage. We performed this systematic review and meta-analysis to determine the best combination of biochemical, ultrasound and demographic markers to predict miscarriage in women with a viable intrauterine pregnancy. The electronic database search included Medline (1946-June 2017), Embase (1980-June 2017), CINAHL (1981-June 2017) and the Cochrane library. Key MESH and Boolean terms were used for the search. Data extraction and collection were performed based on the eligibility criteria by two authors independently. Quality assessment of the individual studies was done using QUADAS-2 (Quality Assessment for Diagnostic Accuracy Studies-2: A Revised Tool), and statistical analysis was performed using the Cochrane systematic review manager 5.3 and STATA v13.0. Due to the diversity of the combinations used for prediction in the included papers, it was not possible to perform a meta-analysis on combination markers. Therefore, we proceeded to perform a meta-analysis on ultrasound markers alone to determine the best marker that can help to improve the diagnostic accuracy of predicting miscarriage in women with a viable intrauterine pregnancy. The systematic review identified 18 eligible studies for the quantitative meta-analysis, with a total of 5584 women. Among the ultrasound scan markers, fetal bradycardia (n=10 studies, n=1762 women) on hierarchical summary receiver operating characteristic analysis showed a sensitivity of 68.41%, a specificity of 97.84%, a positive likelihood ratio of 31.73 (indicating a large effect on increasing the probability of predicting miscarriage) and a negative likelihood ratio of 0.32. In studies of women with threatened miscarriage (n=5 studies, n=771 women), fetal bradycardia showed a further increase in sensitivity (84.18%) for miscarriage prediction. Although there is gestational-age-dependent variation in the fetal heart rate, a plot of the fetal heart rate cut-off level versus the log diagnostic odds ratio showed that at ≤110 beats per minute the diagnostic power to predict miscarriage is higher. Other markers (intrauterine hematoma, crown-rump length and yolk sac) had significantly lower predictive value. Therefore, in women with threatened miscarriage and fetal bradycardia on ultrasound scan, there is a role for offering a repeat ultrasound scan at a one-week to ten-day interval. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
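As a worked example, the likelihood ratios quoted above follow directly from the pooled sensitivity and specificity, and can be used to update a pre-test probability on the odds scale; the pre-test probability used below is purely illustrative.

```python
# How the quoted likelihood ratios follow from pooled sensitivity/specificity,
# and how they update a (hypothetical) pre-test probability of miscarriage.
sens, spec = 0.6841, 0.9784
lr_pos = sens / (1 - spec)            # ~31.7: large increase in probability
lr_neg = (1 - sens) / spec            # ~0.32
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")

pretest = 0.15                         # assumed pre-test probability (illustrative)
odds = pretest / (1 - pretest)
posttest = odds * lr_pos / (1 + odds * lr_pos)
print(f"post-test probability after observing fetal bradycardia: {posttest:.2%}")
```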
User Instructions for the Policy Analysis Modeling System (PAMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNeil, Michael A.; Letschert, Virginie E.; Van Buskirk, Robert D.
PAMS uses country-specific and product-specific data to calculate estimates of impacts of a Minimum Efficiency Performance Standard (MEPS) program. The analysis tool is self-contained in a Microsoft Excel spreadsheet, and requires no links to external data, or special code additions to run. The analysis can be customized to a particular program without additional user input, through the use of the pull-down menus located on the Summary page. In addition, the spreadsheet contains many areas into which user-generated input data can be entered for increased accuracy of projection. The following is a step-by-step guide for using and customizing the tool.
Texture analysis of pulmonary parenchyma in normal and emphysematous lung
NASA Astrophysics Data System (ADS)
Uppaluri, Renuka; Mitsa, Theophano; Hoffman, Eric A.; McLennan, Geoffrey; Sonka, Milan
1996-04-01
Tissue characterization using texture analysis is gaining increasing importance in medical imaging. We present a completely automated method for discriminating between normal and emphysematous regions in CT images. This method involves extracting seventeen features based on statistical, hybrid and fractal texture models. The best subset of features is derived from the training set using the divergence technique. A minimum distance classifier is used to classify the samples into one of two classes: normal or emphysema. Sensitivity, specificity, and accuracy values achieved were 80% or greater in most cases, suggesting that texture analysis holds great promise in identifying emphysema.
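A minimum distance classifier of the kind used above assigns each sample to the class with the nearest mean feature vector. The sketch below uses synthetic feature values; the seventeen statistical, hybrid and fractal texture features themselves are not computed here.

```python
import numpy as np

rng = np.random.default_rng(4)
n_features = 17
normal = rng.normal(0.0, 1.0, size=(100, n_features))      # training samples
emphysema = rng.normal(0.8, 1.0, size=(100, n_features))

means = {"normal": normal.mean(axis=0), "emphysema": emphysema.mean(axis=0)}

def classify(sample):
    # Assign the sample to the class whose mean feature vector is closest.
    return min(means, key=lambda c: np.linalg.norm(sample - means[c]))

test = np.vstack([rng.normal(0.0, 1.0, (50, n_features)),
                  rng.normal(0.8, 1.0, (50, n_features))])
truth = ["normal"] * 50 + ["emphysema"] * 50
accuracy = np.mean([classify(s) == t for s, t in zip(test, truth)])
print(f"minimum-distance classifier accuracy: {accuracy:.0%}")
```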
Large-baseline InSAR for precise topographic mapping: a framework for TanDEM-X large-baseline data
NASA Astrophysics Data System (ADS)
Pinheiro, Muriel; Reigber, Andreas; Moreira, Alberto
2017-09-01
The global Digital Elevation Model (DEM) resulting from the TanDEM-X mission provides information about the world's topography with outstanding precision. In fact, performance analyses carried out with the already available data have shown that the global product is well within the requirements of 10 m absolute vertical accuracy and 2 m relative vertical accuracy for flat to moderate terrain. The mission's science phase took place from October 2014 to December 2015. During this phase, bistatic acquisitions with across-track separations between the two satellites of up to 3.6 km at the equator were commanded. Since the relative vertical accuracy of InSAR-derived elevation models is, in principle, inversely proportional to the system baseline, the TanDEM-X science phase opened the door to the generation of elevation models with improved quality with respect to the standard product. However, the interferometric processing of the large-baseline data is troublesome due to the increased volume decorrelation and the very high frequency of the phase variations. Hence, in order to fully profit from the increased baseline, sophisticated algorithms for the interferometric processing, and, in particular, for the phase unwrapping, have to be considered. This paper proposes a novel dual-baseline region-growing framework for the phase unwrapping of large-baseline interferograms. Results from two experiments with data from the TanDEM-X science phase are discussed, corroborating the expected increased level of detail of the large-baseline DEMs.
Gebker, Rolf; Mirelis, Jesus G; Jahnke, Cosima; Hucko, Thomas; Manka, Robert; Hamdan, Ashraf; Schnackenburg, Bernhard; Fleck, Eckart; Paetsch, Ingo
2010-09-01
The purpose of this study was to determine the influence of left ventricular (LV) hypertrophy and geometry on the diagnostic accuracy of wall motion and additional perfusion imaging during high-dose dobutamine/atropine stress magnetic resonance for the detection of coronary artery disease. Combined dobutamine stress magnetic resonance (DSMR)-wall motion and DSMR-perfusion imaging was performed in a single session in 187 patients scheduled for invasive coronary angiography. Patients were classified into 4 categories on the basis of LV mass (normal, ≤ 81 g/m² in men and ≤ 62 g/m² in women) and relative wall thickness (RWT) (normal, <0.45) as follows: normal geometry (normal mass, normal RWT), concentric remodeling (normal mass, increased RWT), concentric hypertrophy (increased mass, increased RWT), and eccentric hypertrophy (increased mass, normal RWT). Wall motion and perfusion images were interpreted sequentially, with observers blinded to other data. Significant coronary artery disease was defined as ≥ 70% stenosis. In patients with increased LV concentricity (defined by an RWT ≥ 0.45), sensitivity and accuracy of DSMR-wall motion were significantly reduced (63% and 73%, respectively; P<0.05) compared with patients without increased LV concentricity (90% and 88%, respectively; P<0.05). Although accuracy of DSMR-perfusion was higher than that of DSMR-wall motion in patients with concentric hypertrophy (82% versus 71%; P < 0.05), accuracy of DSMR-wall motion was superior to DSMR-perfusion (90% versus 85%; P < 0.05) in patients with eccentric hypertrophy. The accuracy of DSMR-wall motion is influenced by LV geometry. In patients with concentric remodeling and concentric hypertrophy, additional first-pass perfusion imaging during high-dose dobutamine stress improves the diagnostic accuracy for the detection of coronary artery disease.
Confined turbulent swirling recirculating flow predictions. Ph.D. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Abujelala, M. T.; Lilley, D. G.
1985-01-01
The capability and accuracy of the STARPIC computer code in predicting confined turbulent swirling recirculating flows is presented. Inlet flow boundary conditions were demonstrated to be extremely important in simulating a flowfield via numerical calculations. The degree of swirl strength and the expansion ratio have strong effects on the characteristics of swirling flow. In a nonswirling flow, a large corner recirculation zone exists in the flowfield with an expansion ratio greater than one. However, as the degree of inlet swirl increases, the size of this zone decreases and a central recirculation zone appears near the inlet. Generally, the size of the central zone increased with swirl strength and expansion ratio. Neither the standard k-epsilon turbulence model nor its previous extensions show effective capability for predicting confined turbulent swirling recirculating flows. However, either reduced optimum values of three parameters in the model or the empirical Cμ formulation obtained via careful analysis of available turbulence measurements can provide more acceptable accuracy in the prediction of these swirling flows.
NASA Astrophysics Data System (ADS)
Tan, C. J.; Aslian, A.; Honarvar, B.; Puborlaksono, J.; Yau, Y. H.; Chong, W. T.
2015-12-01
We constructed an FE axisymmetric model to simulate the effect of partially hardened blanks on increasing the limiting drawing ratio (LDR) of cylindrical cups. We partitioned an arc-shaped hard layer into the cross section of a DP590 blank. We assumed the mechanical property of the layer to be equivalent to either DP980 or DP780. We verified the accuracy of the model by comparing the calculated LDR for DP590 with the one reported in the literature. The LDR for the partially hardened blank increased from 2.11 to 2.50 with a 1 mm deep ring-shaped DP980 hard layer on the top surface of the blank. The position of the layer changed with the drawing ratio. We proposed equations for estimating the inner and outer diameters of the layer, and tested their accuracy in the simulation. Although the outer diameters fitted the estimated line well, the inner diameters were slightly smaller than the estimated ones.
Hyperspectral imaging with wavelet transform for classification of colon tissue biopsy samples
NASA Astrophysics Data System (ADS)
Masood, Khalid
2008-08-01
Automatic classification of medical images is a part of our computerised medical imaging programme to support the pathologists in their diagnosis. Hyperspectral data has found applications in medical imagery, and its usage is increasing significantly in the biopsy analysis of medical images. In this paper, we present a histopathological analysis for the classification of colon biopsy samples into benign and malignant classes. The proposed study is based on a comparison between 3D spectral/spatial analysis and 2D spatial analysis. Textural features computed in the wavelet domain are used in both approaches for the classification of colon biopsy samples. Experimental results indicate that the incorporation of wavelet textural features with a support vector machine, in 2D spatial analysis, achieves the best classification accuracy.
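A small sketch of the 2D spatial approach (wavelet-domain texture features fed to a support vector machine) is shown below using PyWavelets and scikit-learn. The wavelet family, patch size and synthetic class statistics are assumptions, not the study's data.

```python
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

def wavelet_energy_features(patch):
    # 2-level wavelet decomposition; use subband energies as texture features.
    coeffs = pywt.wavedec2(patch, "db2", level=2)
    feats = [np.mean(coeffs[0] ** 2)]                       # approximation energy
    for detail_level in coeffs[1:]:
        feats.extend(np.mean(band ** 2) for band in detail_level)
    return feats

def make_patch(malignant):
    base = rng.normal(0, 1, (32, 32))
    if malignant:                                            # add coarser texture
        base += np.repeat(np.repeat(rng.normal(0, 1, (8, 8)), 4, 0), 4, 1)
    return base

labels = np.array([0] * 60 + [1] * 60)                       # benign / malignant
X = np.array([wavelet_energy_features(make_patch(m)) for m in labels])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated classification accuracy: {scores.mean():.2%}")
```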
Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa
2017-09-01
Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, thus presenting increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution with these techniques. Our objective was to quantify the expected accuracy error associated with the 2MM and 4MM thickness Sterile-Z Patient Drape® using the Medtronic O-Arm® Surgical Imaging with StealthStation® S7® Navigation System. Camera distance to the reference frame was investigated for its contribution to accuracy error. A testing jig was placed on the radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation® S7® navigation camera was placed at various distances from the testing jig and the geometry error of the reference frame was captured for three different drape configurations: no drape, 2MM drape and 4MM drape. The O-Arm® gantry location and StealthStation® S7® camera position were maintained, and seven 3D acquisitions for each drape configuration were measured. Data were analyzed by a two-factor analysis of variance (ANOVA), and Bonferroni comparisons were used to assess the independent effects of camera angle and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2MM than for the 4MM drape at each camera distance. The most extreme error observed (4.6 mm) occurred when using the 2MM drape at the 'far' camera distance. The 4MM drape was found to induce an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to no-drape testing, regardless of camera distance. The medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far (additional 0.21 mm error; 95% CI, 0.13-0.28; P<0.001) camera distances, regardless of whether a drape was used. In comparison to the 'no drape' condition, the accuracy error of 0.11 mm when using a 4MM film drape is minimal and clinically insignificant.
NASA Technical Reports Server (NTRS)
Patt, Frederick S.; Hoisington, Charles M.; Gregg, Watson W.; Coronado, Patrick L.; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Indest, A. W. (Editor)
1993-01-01
An analysis of orbit propagation models was performed by the Mission Operations element of the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) Project, which has overall responsibility for the instrument scheduling. The orbit propagators selected for this analysis are widely available general perturbations models. The analysis includes both absolute accuracy determination and comparisons of different versions of the models. The results show that all of the models tested meet accuracy requirements for scheduling and data acquisition purposes. For internal Project use the SGP4 propagator, developed by the North American Air Defense (NORAD) Command, has been selected. This model includes atmospheric drag effects and, therefore, provides better accuracy. For High Resolution Picture Transmission (HRPT) ground stations, which have less stringent accuracy requirements, the publicly available Brouwer-Lyddane models are recommended. The SeaWiFS Project will make available portable source code for a version of this model developed by the Data Capture Facility (DCF).
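The kind of propagator accuracy comparison described above can be illustrated, in a much simplified form, by propagating the same initial state with and without the J2 oblateness perturbation and tracking the growing position difference. This sketch does not implement SGP4 or the Brouwer-Lyddane theory; the initial orbit and integration settings are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 398600.4418          # Earth GM, km^3/s^2
re, j2 = 6378.137, 1.08263e-3

def accel(r, with_j2):
    rn = np.linalg.norm(r)
    a = -mu * r / rn ** 3                                  # two-body gravity
    if with_j2:
        z2 = (r[2] / rn) ** 2
        factor = 1.5 * j2 * mu * re ** 2 / rn ** 5
        a += factor * r * np.array([5 * z2 - 1, 5 * z2 - 1, 5 * z2 - 3])
    return a

def rhs(t, y, with_j2):
    return np.concatenate([y[3:], accel(y[:3], with_j2)])

r0 = np.array([7078.0, 0.0, 0.0])                          # ~700 km altitude
v0 = np.array([0.0, 5.3, 5.3])                             # inclined, near-circular
y0 = np.concatenate([r0, v0])
t_eval = np.linspace(0, 86400, 97)                         # one day, 15-min steps

sols = {flag: solve_ivp(rhs, (0, 86400), y0, args=(flag,), t_eval=t_eval,
                        rtol=1e-9, atol=1e-9) for flag in (False, True)}
diff = np.linalg.norm(sols[True].y[:3] - sols[False].y[:3], axis=0)
print(f"position difference after 24 h (two-body vs J2): {diff[-1]:.1f} km")
```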
Factors affecting GEBV accuracy with single-step Bayesian models.
Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng
2018-01-01
A single-step approach to genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (GBLUP; SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the 5- and 50-QTL scenarios. The SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.
NASA Astrophysics Data System (ADS)
Nguyen, Thien; Ahn, Sangtae; Jang, Hyojung; Jun, Sung C.; Kim, Jae G.
2016-03-01
A driver's condition plays a critical role in driving safety. The fact that about 20 percent of automobile accidents occur due to driver fatigue creates a demand for developing a method to monitor the driver's status. In this study, we acquired brain signals such as oxy- and deoxy-hemoglobin and neuronal electrical activity with a hybrid fNIRS/EEG system. Experiments were conducted with 11 subjects under two conditions: a normal condition, when subjects had had enough sleep, and a sleep deprivation condition, when subjects had not slept the previous night. During the experiment, subjects performed a driving task with a car simulation system for 30 minutes. After the experiment, oxy-hemoglobin and deoxy-hemoglobin changes were derived from the fNIRS data, while beta and alpha band relative power were calculated from the EEG data. A decrease in oxy-hemoglobin and beta band power and an increase in alpha band power were found in the sleep deprivation condition compared to the normal condition. These features were then applied to classify the two conditions by Fisher's linear discriminant analysis (FLDA). The ratio of alpha to beta relative power showed classification accuracy ranging between 62% and 99%, depending on the subject. However, utilization of both EEG and fNIRS features increased the accuracy to between 68% and 100%. The highest increase in accuracy was from 63% using EEG alone to 99% using both EEG and fNIRS features. In conclusion, the enhancement of classification accuracy achieved by adding a feature from fNIRS to the feature from EEG using FLDA supports the development of a hybrid fNIRS/EEG system.
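A sketch of the FLDA classification step, comparing EEG-only features against combined EEG and fNIRS features, is given below with scikit-learn. The synthetic feature distributions are placeholders chosen only to mimic the reported directions of change (lower oxy-hemoglobin and beta power, higher alpha power under sleep deprivation).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 60                                                     # epochs per condition

def make_features(sleep_deprived):
    hbo = rng.normal(-0.2 if sleep_deprived else 0.0, 0.15, n)   # oxy-Hb change
    beta = rng.normal(0.8 if sleep_deprived else 1.0, 0.2, n)    # relative beta
    alpha = rng.normal(1.2 if sleep_deprived else 1.0, 0.2, n)   # relative alpha
    return np.column_stack([hbo, beta, alpha])

X = np.vstack([make_features(False), make_features(True)])
y = np.array([0] * n + [1] * n)

eeg_only = cross_val_score(LinearDiscriminantAnalysis(), X[:, 1:], y, cv=5)
combined = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"EEG-only accuracy:    {eeg_only.mean():.2%}")
print(f"EEG + fNIRS accuracy: {combined.mean():.2%}")
```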
Use of a control film piece in radiochromic film dosimetry.
Aldelaijan, Saad; Alzorkany, Faisal; Moftah, Belal; Buzurovic, Ivan; Seuntjens, Jan; Tomic, Nada; Devic, Slobodan
2016-01-01
Radiochromic films change their color upon irradiation due to polymerization of the sensitive component embedded within the sensitive layer. However, agents other than the monitored radiation (temperature, humidity, UV light) can also lead to a change in the color of the sensitive layer; this change can be considered a background signal and can be removed from the actual measurement by using a control film piece. In this work, we investigate the impact of the use of control film pieces on both the accuracy and the uncertainty of dose measured using a radiochromic film based reference dosimetry protocol. We irradiated "control" film pieces (GafChromic™ EBT3 film model) to known doses in a range of 0.05-1 Gy, and five film pieces of the same size to 2, 5, 10, 15 and 20 Gy, considered to be "unknown" doses. Depending on the dose range, two approaches to incorporating the control film piece were investigated: the signal-corrected and the dose-corrected methods. For dose values greater than 10 Gy, the increase in accuracy of 3% led to an uncertainty loss of 5% when using the dose-corrected approach. At lower doses and signals of the order of 5%, we observed an increase in accuracy of 10% with a loss of uncertainty lower than 1% when using the corrected-signal approach. Incorporation of the signal registered by the control film piece into dose measurement analysis should be a judgment call of the user, based on a tradeoff between the deemed accuracy and the acceptable uncertainty for a given dose measurement. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
DESIGN AND ANALYSIS FOR THEMATIC MAP ACCURACY ASSESSMENT: FUNDAMENTAL PRINCIPLES
Before being used in scientific investigations and policy decisions, thematic maps constructed from remotely sensed data should be subjected to a statistically rigorous accuracy assessment. The three basic components of an accuracy assessment are: 1) the sampling design used to s...
Liu, Dan; Xie, Lixin; Zhao, Haiyan; Liu, Xueyao; Cao, Jie
2016-05-26
The early identification of patients at risk of dying from community-acquired pneumonia (CAP) is critical for their treatment and for defining hospital resource consumption. Mid-regional pro-adrenomedullin (MR-proADM) has been extensively investigated for its prognostic value in CAP. However, the results are conflicting. The purpose of the present meta-analysis was to explore the diagnostic accuracy of MR-proADM for predicting mortality in patients suffering from CAP, particularly emergency department (ED) patients. We systematically searched the PubMed, Embase, Web of Knowledge and Cochrane databases. Studies were included if a 2 × 2 contingency table could be constructed based on both the MR-proADM level and the complications or mortality of patients diagnosed with CAP. The prognostic accuracy of MR-proADM in CAP was assessed using the bivariate meta-analysis model. We used the Q-test and I² index to evaluate heterogeneity. MR-proADM displayed moderate diagnostic accuracy for predicting complications in CAP, with an overall area under the SROC curve (AUC) of 0.74 (95 % CI: 0.70-0.78). Eight studies with a total of 4119 patients in the emergency department (ED) were included. An elevated MR-proADM level was associated with increased risk of death from CAP (RR 6.16, 95 % CI 4.71-8.06); the I² value was 0.0 %, and a fixed-effects model was used to pool RR. The pooled sensitivity and specificity were 0.74 (95 % CI: 0.67-0.79) and 0.73 (95 % CI: 0.70-0.77), respectively. The positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were 2.8 (95 % CI, 2.3-3.3) and 0.36 (95 % CI, 0.29-0.45), respectively. In addition, the diagnostic odds ratio (DOR) was 8 (95 % CI, 5-11), and the overall area under the SROC curve was 0.76 (95 % CI, 0.72-0.80). Our study has demonstrated that MR-proADM is predictive of increased complications and higher mortality rates in patients suffering from CAP. Future studies are warranted to determine the prognostic accuracy of MR-proADM in conjunction with severity scores or other biomarkers and to determine an optimal cut-off level.
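When heterogeneity is negligible (I² = 0), study-level risk ratios can be pooled with inverse-variance fixed-effects weighting, as sketched below. The example risk ratios and confidence intervals are made up for illustration and are not the eight ED studies analysed here.

```python
import numpy as np

# Inverse-variance fixed-effects pooling of log risk ratios from hypothetical studies.
rr = np.array([5.2, 7.1, 4.8, 6.9])
ci_low = np.array([2.1, 3.4, 1.9, 3.0])
ci_high = np.array([12.9, 14.8, 12.1, 15.9])

log_rr = np.log(rr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE from 95% CI width
w = 1.0 / se ** 2                                      # inverse-variance weights

pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}-"
      f"{np.exp(pooled + 1.96 * pooled_se):.2f})")
```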
Hirose, Tomohiro; Nitta, Norihisa; Shiraishi, Junji; Nagatani, Yukihiro; Takahashi, Masashi; Murata, Kiyoshi
2008-12-01
The aim of this study was to evaluate the usefulness of computer-aided diagnosis (CAD) software for the detection of lung nodules on multidetector-row computed tomography (MDCT) in terms of improvement in radiologists' diagnostic accuracy in detecting lung nodules, using jackknife free-response receiver-operating characteristic (JAFROC) analysis. Twenty-one patients (6 without and 15 with lung nodules) were selected randomly from 120 consecutive thoracic computed tomographic examinations. The gold standard for the presence or absence of nodules in the observer study was determined by consensus of two radiologists. Six expert radiologists participated in a free-response receiver operating characteristic study for the detection of lung nodules on MDCT, in which cases were interpreted first without and then with the output of CAD software. Radiologists were asked to indicate the locations of lung nodule candidates on the monitor with their confidence ratings for the presence of lung nodules. The performance of the CAD software indicated that the sensitivity in detecting lung nodules was 71.4%, with 0.95 false-positive results per case. When radiologists used the CAD software, the average sensitivity improved from 39.5% to 81.0%, with an increase in the average number of false-positive results from 0.14 to 0.89 per case. The average figure-of-merit values for the six radiologists were 0.390 without and 0.845 with the output of the CAD software, and there was a statistically significant difference (P < .0001) using the JAFROC analysis. The CAD software for the detection of lung nodules on MDCT has the potential to assist radiologists by increasing their accuracy.
Heo, Jeong; Baek, Hyun Jae; Hong, Seunghyeok; Chang, Min Hye; Lee, Jeong Su; Park, Kwang Suk
2017-05-01
Patients with total locked-in syndrome are conscious; however, they cannot express themselves because most of their voluntary muscles are paralyzed, and many of these patients have lost their eyesight. To improve the quality of life of these patients, there is an increasing need for communication-supporting technologies that leverage the remaining senses of the patient along with physiological signals. The auditory steady-state response (ASSR) is an electro-physiologic response to auditory stimulation that is amplitude-modulated by a specific frequency. By leveraging the phenomenon whereby ASSR is modulated by mind concentration, a brain-computer interface paradigm was proposed to classify the selective attention of the patient. In this paper, we propose an auditory stimulation method to minimize auditory stress by replacing the monotone carrier with familiar music and natural sounds for an ergonomic system. Piano and violin instrumentals were employed in the music sessions; the sounds of water streaming and cicadas singing were used in the natural sound sessions. Six healthy subjects participated in the experiment. Electroencephalograms were recorded using four electrodes (Cz, Oz, T7 and T8). Seven sessions were performed using different stimuli. The spectral power at 38 and 42Hz and their ratio for each electrode were extracted as features. Linear discriminant analysis was utilized to classify the selections for each subject. In offline analysis, the average classification accuracies with a modulation index of 1.0 were 89.67% and 87.67% using music and natural sounds, respectively. In online experiments, the average classification accuracies were 88.3% and 80.0% using music and natural sounds, respectively. Using the proposed method, we obtained significantly higher user-acceptance scores, while maintaining a high average classification accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Bayesian approach to the creation of a study-customized neonatal brain atlas
Zhang, Yajing; Chang, Linda; Ceritoglu, Can; Skranes, Jon; Ernst, Thomas; Mori, Susumu; Miller, Michael I.; Oishi, Kenichi
2014-01-01
Atlas-based image analysis (ABA), in which an anatomical “parcellation map” is used for parcel-by-parcel image quantification, is widely used to analyze anatomical and functional changes related to brain development, aging, and various diseases. The parcellation maps are often created based on common MRI templates, which allow users to transform the template to target images, or vice versa, to perform parcel-by-parcel statistics, and report the scientific findings based on common anatomical parcels. The use of a study-specific template, which represents the anatomical features of the study population better than common templates, is preferable for accurate anatomical labeling; however, the creation of a parcellation map for a study-specific template is extremely labor intensive, and the definitions of anatomical boundaries are not necessarily compatible with those of the common template. In this study, we employed a Volume-based Template Estimation (VTE) method to create a neonatal brain template customized to a study population, while keeping the anatomical parcellation identical to that of a common MRI atlas. The VTE was used to morph the standardized parcellation map of the JHU-neonate-SS atlas to capture the anatomical features of a study population. The resultant “study-customized” T1-weighted and diffusion tensor imaging (DTI) template, with three-dimensional anatomical parcellation that defined 122 brain regions, was compared with the JHU-neonate-SS atlas, in terms of the registration accuracy. A pronounced increase in the accuracy of cortical parcellation and superior tensor alignment were observed when the customized template was used. With the customized atlas-based analysis, the fractional anisotropy (FA) detected closely approximated the manual measurements. This tool provides a solution for achieving normalization-based measurements with increased accuracy, while reporting scientific findings in a consistent framework. PMID:25026155
Sabr, Abutaleb; Moeinaddini, Mazaher; Azarnivand, Hossein; Guinot, Benjamin
2016-12-01
In recent years, dust storms originating from local abandoned agricultural lands have increasingly impacted air quality in Tehran and Karaj. Studying land use/land cover change (LUCC) is necessary for designing and implementing mitigation plans. Land use/cover classification is particularly relevant in arid areas. This study aimed to map land use/cover by pixel- and object-based image classification methods, analyse landscape fragmentation and determine the effects of the two classification methods on landscape metrics. The same sets of ground data were used for both classification methods. Because the accuracy of classification plays a key role in better understanding LUCC, both methods were employed. Land use/cover maps of the southwest area of Tehran city for the years 1985, 2000 and 2014 were obtained from Landsat digital images and classified into three categories: built-up, agricultural and barren lands. The results of our LUCC analysis showed that the most important changes in the built-up and agricultural land categories were observed in zone B (Shahriar, Robat Karim and Eslamshahr) between 1985 and 2014. The landscape metrics obtained for all categories indicated high landscape fragmentation in the study area. Although no significant difference was evidenced between the two classification methods, the object-based classification led to a higher overall accuracy than the pixel-based classification. In particular, the accuracy of the built-up category showed a marked increase. In addition, both methods showed similar trends in fragmentation metrics. One reason is that the object-based classification is able to identify buildings, impervious surfaces and roads in dense urban areas, which produced more accurate maps.
Karamitsos, Theodoros D; Hudsmith, Lucy E; Selvanayagam, Joseph B; Neubauer, Stefan; Francis, Jane M
2007-01-01
Accurate and reproducible measurement of left ventricular (LV) mass and function is a significant strength of Cardiovascular Magnetic Resonance (CMR). Reproducibility and accuracy of these measurements is usually reported between experienced operators. However, an increasing number of inexperienced operators are now training in CMR and are involved in post-processing analysis. The aim of the study was to assess the interobserver variability of the manual planimetry of LV contours amongst two experienced and six inexperienced operators before and after a two months training period. Ten healthy normal volunteers (5 men, mean age 34+/-14 years) comprised the study population. LV volumes, mass, and ejection fraction were manually evaluated using Argus software (Siemens Medical Solutions, Erlangen, Germany) for each subject, once by the two experienced and twice by the six inexperienced operators. The mean values of experienced operators were considered the reference values. The agreement between operators was evaluated by means of Bland-Altman analysis. Training involved standardized data acquisition, simulated off-line analysis and mentoring. The trainee operators demonstrated improvement in the measurement of all the parameters compared to the experienced operators. The mean ejection fraction variability improved from 7.2% before training to 3.7% after training (p=0.03). The parameter in which the trainees showed the least improvement was LV mass (from 7.7% to 6.7% after training). The basal slice selection and contour definition were the main sources of errors. An intensive two month training period significantly improved the accuracy of LV functional measurements. Adequate training of new CMR operators is of paramount importance in our aim to maintain the accuracy and high reproducibility of CMR in LV function analysis.
NASA Astrophysics Data System (ADS)
Hamar, D.; Ferencz, Cs.; Steinbach, P.; Lichtenberger, J.; Ferencz, O. E.; Parrot, M.
2009-04-01
The mechanism and effect of the coupling of electromagnetic signals from the lower ionosphere into the Earth-ionosphere waveguide (EIWG) can be examined through the analysis of simultaneous broadband VLF recordings acquired at a ground station (Tihany, Hungary) and on the LEO satellite DEMETER during nearby passes. Single-hop whistlers, selected from concurrent broadband VLF data sets, were analyzed with high accuracy by applying the matched filtering (MF) technique developed previously for signal analysis. The accuracy of the frequency-time-amplitude pattern and the resolution of the closely spaced whistler traces were further increased with a least-squares estimation of the parameters of the output of the MF procedure. One result of this analysis is the fine structure of the whistler, which cannot be recognized in a conventional spectrogram. The comparison of the detailed fine structure of the whistlers measured on board and on the ground enabled us to reliably select the corresponding signal pairs. The remarkable difference seen in the fine structure of matching whistler occurrences in the satellite and ground data series can be attributed, e.g., to the effect of the inhomogeneous ionospheric plasma (trans-ionospheric impulse propagation) or to the process of wave energy leaking out from the ionized medium into the EIWG. This field needs further investigation. References: Ferencz Cs., Ferencz O. E., Hamar D. and Lichtenberger J. (2001) Whistler Phenomena, Short Impulse Propagation; Kluwer Academic Publishers, ISBN 0-7923-6995-5, Netherlands. Lichtenberger J., Hamar D. and Ferencz Cs. (2003) Methods for analyzing the structure and propagation characteristics of whistlers, in: Very Low Frequency (VLF) Phenomena, Narosa Publishing House, New Delhi, p. 88-107.
NASA Astrophysics Data System (ADS)
Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris
2016-04-01
The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements allow for robust simultaneous estimation of static or mobile user states, including more parameters such as real-time tropospheric biases, and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS) as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time required to reach geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in an increase in position accuracy (mostly in the less favorable East direction) and a large reduction of convergence time in PPP static and kinematic solutions compared to GPS-only PPP solutions for various observational session durations. However, this is mostly observed when the visibility of Galileo and BeiDou satellites is substantially long within an observational session. In GPS-only cases dealing with data from high elevation cut-off angles, the number of GPS satellites decreases dramatically, leading to position accuracy and convergence times that deviate from satisfactory geodetic thresholds. By contrast, the respective multi-GNSS PPP solutions not only show improvement, but also reach geodetic-level accuracies even at a 30° elevation cut-off. Finally, GPS ambiguity resolution in PPP processing is investigated using the GPS satellite wide-lane fractional cycle biases that are included in the clock products by CNES. It is shown that their addition shortens the convergence time and increases the position accuracy of PPP solutions, especially in kinematic mode. An analogous improvement is obtained in the respective multi-GNSS solutions, even though the GLONASS, Galileo and BeiDou ambiguities remain float, since information about them is not provided in the clock products available to date.
Dhingsa, Rajpal; Qayyum, Aliya; Coakley, Fergus V; Lu, Ying; Jones, Kirk D; Swanson, Mark G; Carroll, Peter R; Hricak, Hedvig; Kurhanewicz, John
2004-01-01
To determine the effect of digital rectal examination findings, sextant biopsy results, and prostate-specific antigen (PSA) levels on reader accuracy in the localization of prostate cancer with endorectal magnetic resonance (MR) imaging and MR spectroscopic imaging. This was a retrospective study of 37 patients (mean age, 57 years) with biopsy-proved prostate cancer. Transverse T1-weighted, transverse high-spatial-resolution, and coronal T2-weighted MR images and MR spectroscopic images were obtained. Two independent readers, unaware of clinical data, recorded the size and location of suspicious peripheral zone tumor nodules on a standardized diagram of the prostate. Readers also recorded their degree of diagnostic confidence for each nodule on a five-point scale. Both readers repeated this interpretation with knowledge of rectal examination findings, sextant biopsy results, and PSA level. Step-section histopathologic findings were the reference standard. Logistic regression analysis with generalized estimating equations was used to correlate tumor detection with clinical data, and alternative free-response receiver operating characteristic (AFROC) curve analysis was used to examine the overall effect of clinical data on all positive results. Fifty-one peripheral zone tumor nodules were identified at histopathologic evaluation. Logistic regression analysis showed awareness of clinical data significantly improved tumor detection rate (P <.02) from 15 to 19 nodules for reader 1 and from 13 to 19 nodules for reader 2 (27%-37% overall) by using both size and location criteria. AFROC analysis showed no significant change in overall reader performance because there was an associated increase in the number of false-positive findings with awareness of clinical data, from 11 to 21 for reader 1 and from 16 to 25 for reader 2. Awareness of clinical data significantly improves reader detection of prostate cancer nodules with endorectal MR imaging and MR spectroscopic imaging, but there is no overall change in reader accuracy, because of an associated increase in false-positive findings. A stricter definition of a true-positive result is associated with reduced sensitivity for prostate cancer nodule detection. Copyright RSNA, 2004
3D micro-mapping: Towards assessing the quality of crowdsourcing to support 3D point cloud analysis
NASA Astrophysics Data System (ADS)
Herfort, Benjamin; Höfle, Bernhard; Klonner, Carolin
2018-03-01
In this paper, we propose a method to crowdsource the task of complex three-dimensional information extraction from 3D point clouds. We design web-based 3D micro tasks tailored to assess segmented LiDAR point clouds of urban trees and investigate the quality of the approach in an empirical user study. Our results for three different experiments with increasing complexity indicate that a single crowdsourcing task can be solved in a very short time of less than five seconds on average. Furthermore, the results of our empirical case study reveal that the accuracy, sensitivity and precision of 3D crowdsourcing are high for most information extraction problems. For our first experiment (binary classification with a single answer) we obtain an accuracy of 91%, a sensitivity of 95% and a precision of 92%. For the more complex tasks of Experiment 2 (multiple-answer classification), the accuracy ranges from 65% to 99% depending on the label class. Regarding the third experiment - the determination of the crown base height of individual trees - our study highlights that crowdsourcing can be a tool to obtain values with even higher accuracy in comparison to an automated computer-based approach. Finally, we found that the accuracy of the crowdsourced results for all experiments is hardly influenced by the characteristics of the input point cloud data or of the users. Importantly, the results' accuracy can be estimated using agreement among volunteers as an intrinsic indicator, which makes a broad application of 3D micro-mapping very promising.
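The quality metrics and the agreement-based quality indicator mentioned above reduce to simple counts, as in the sketch below; all numbers are hypothetical, not the study's results.

```python
# Quality metrics from hypothetical counts of volunteer answers against a
# reference (expert) classification of the tree point clouds.
tp, fp, tn, fn = 430, 35, 480, 25      # illustrative counts, not study data

accuracy = (tp + tn) / (tp + fp + tn + fn)
sensitivity = tp / (tp + fn)
precision = tp / (tp + fp)
print(f"accuracy={accuracy:.2%} sensitivity={sensitivity:.2%} precision={precision:.2%}")

# Agreement among volunteers as an intrinsic accuracy indicator: the share of
# answers matching the majority vote for one task (5 volunteers, 4 say "tree").
votes = [1, 1, 1, 1, 0]
majority = max(set(votes), key=votes.count)
agreement = votes.count(majority) / len(votes)
print(f"majority label={majority}, agreement={agreement:.0%}")
```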
Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras
NASA Astrophysics Data System (ADS)
Vautherin, Jonas; Rutishauser, Simon; Schneider-Zapp, Klaus; Choi, Hon Fai; Chovancova, Venera; Glass, Alexis; Strecha, Christoph
2016-06-01
Unmanned aerial vehicles (UAVs) are becoming increasingly popular in professional mapping for stockpile analysis, construction site monitoring, and many other applications. Due to their robustness and competitive pricing, consumer UAVs are used more and more for these applications, but they are usually equipped with rolling shutter cameras. This is a significant obstacle when it comes to extracting high accuracy measurements using available photogrammetry software packages. In this paper, we evaluate the impact of the rolling shutter cameras of typical consumer UAVs on the accuracy of a 3D reconstruction. To this end, we use a beta version of the Pix4Dmapper 2.1 software to compare traditional (non rolling shutter) camera models against a newly implemented rolling shutter model with respect to both the accuracy of geo-referenced validation points and the quality of the motion estimation. Multiple datasets have been acquired using popular quadrocopters (DJI Phantom 2 Vision+, DJI Inspire 1 and 3DR Solo) following a grid flight plan. For comparison, we acquired a dataset using a professional mapping drone (senseFly eBee) equipped with a global shutter camera. The bundle block adjustment of each dataset shows a significant accuracy improvement on validation ground control points when applying the new rolling shutter camera model for flights at higher speed (8 m/s). Competitive accuracies can be obtained by using the rolling shutter model, although global shutter cameras are still superior. Furthermore, we are able to show that the speed of the drone (and its direction) can be estimated solely from the rolling shutter effect of the camera.
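The reason rolling shutter degrades mapping accuracy at higher flight speeds can be seen from a back-of-the-envelope model of per-row capture times, sketched below. The readout time, ground sample distance and speed are illustrative values, not the specifications of the cameras tested.

```python
import numpy as np

# Each image row is exposed at a slightly different time, so a drone moving at
# speed v displaces later rows relative to the first one.
readout_time = 0.030           # s to read the full frame, top row to bottom row
rows = 3000
ground_sample_distance = 0.03  # m per pixel (assumed flight configuration)
speed = 8.0                    # m/s, the flight speed where the effect was seen

row_index = np.arange(rows)
row_time = row_index / (rows - 1) * readout_time        # capture time per row
ground_shift = speed * row_time                         # metres moved since row 0
pixel_shift = ground_shift / ground_sample_distance

print(f"last row captured {readout_time * 1e3:.0f} ms after the first row")
print(f"ground displacement across the frame: {ground_shift[-1]:.2f} m "
      f"(~{pixel_shift[-1]:.0f} px)")
```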
Subtle Cognitive Effects of Moderate Hypoxia
2009-08-01
using SPSS® 13.0 with significance set at an alpha level of .05 for all statistical tests. A repeated measures analysis of variance (ANOVA) was...there was no statistically significant change in reaction time (p=.781), accuracy (p=.152), or throughput (p=.967) with increasing altitude. The...results indicate that healthy individuals aged 19 to 45 years do not experience significant cognitive deficit, as measured by the CogScreen®-HE, when
2009-06-12
...reconnaissance increased speed, but the limited analysis sacrificed accuracy and detail, and caused intelligence personnel and commanders dilemmas...targets or larger areas for general familiarization or map-making. This detail had a price: timeliness. Early in the war, film development and print
Stress Recovery and Error Estimation for 3-D Shell Structures
NASA Technical Reports Server (NTRS)
Riggs, H. R.
2000-01-01
The C-1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).
Polished sample preparation and backscattered electron imaging of fly ash-cement paste
NASA Astrophysics Data System (ADS)
Feng, Shuxia; Li, Yanqi
2018-03-01
In recent decades, the technology of backscattered electron imaging and image analysis has been applied in more and more studies of mixed cement paste because of its special advantages. The test accuracy of this technology is affected by polished sample preparation and image acquisition. In our work, the effects of two factors in polished sample preparation and backscattered electron imaging were investigated. The results showed that increasing the smoothing pressure could improve the flatness of the polished surface and thus help to eliminate the interference of morphology on the grey level distribution of backscattered electron images, and that increasing the accelerating voltage was beneficial for increasing the grey level difference among different phases in backscattered electron images.
Analysis of recreational land using Skylab data. [Gratiot-Saginaw State Game Area, Michigan
NASA Technical Reports Server (NTRS)
Sattinger, I. J. (Principal Investigator); Sadowski, F. G.; Roller, N. E. G.
1976-01-01
The author has identified the following significant results. S192 data collected on 5 August 1973 were processed by computer to produce a classification map of a part of the Gratiot-Saginaw State Game Area in south central Michigan. A 10-category map was prepared of an area consisting of diverse terrain types, including forests, wetlands, brush, and herbaceous vegetation. An accuracy check indicated that 54% of the pixels were correctly recognized. When these ten scene classes were consolidated to a 5-category map, the accuracy increased to 72%. S190 A, S190 B, and S192 data can be used for regional surveys of existing and potential recreation sites, for delineation of open space, and for preliminary evaluation of geographically extensive sites.
NASA Technical Reports Server (NTRS)
Hurd, W. J.
1974-01-01
A prototype of a semi-real time system for synchronizing the Deep Space Net station clocks by radio interferometry was successfully demonstrated on August 30, 1972. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time sync estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 ns rms were achieved between Deep Space Stations 11 and 12, both at Goldstone, Calif. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to baseline and source position uncertainties and atmospheric effects are reached. These limitations are under 10 ns for transcontinental baselines.
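The core of such interferometric time sync is estimating the delay that best aligns the signals recorded at the two stations. As a much-simplified, hypothetical illustration of the idea (not the maximum likelihood processor described above), the sketch below recovers a clock offset from two noisy recordings by locating the peak of their cross-correlation; the sample rate and offset are invented values.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1e6                                  # assumed sample rate [Hz]
true_offset_s = 37e-6                     # clock offset to recover

t = np.arange(0, 0.01, 1 / fs)
source = rng.standard_normal(t.size)      # common broadband source signal
station_a = source + 0.5 * rng.standard_normal(t.size)
shift = int(round(true_offset_s * fs))
station_b = np.roll(source, shift) + 0.5 * rng.standard_normal(t.size)

# The lag at which the cross-correlation peaks gives the relative delay.
xcorr = np.correlate(station_b, station_a, mode="full")
lag = np.argmax(xcorr) - (t.size - 1)
print(f"estimated offset: {lag / fs * 1e6:.1f} us (true {true_offset_s * 1e6:.1f} us)")
```

Increasing the recorded bandwidth sharpens the correlation peak, which is consistent with the abstract's statement that accuracy improves with system bandwidth until baseline, source position, and atmospheric limits dominate.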
Automatic Semantic Orientation of Adjectives for Indonesian Language Using PMI-IR and Clustering
NASA Astrophysics Data System (ADS)
Riyanti, Dewi; Arif Bijaksana, M.; Adiwijaya
2018-03-01
We present our work in the area of sentiment analysis for the Indonesian language. We focus on building automatic semantic orientation using available resources in Indonesian. In this research we used an Indonesian corpus containing 9 million words from kompas.txt and tempo.txt that was manually tagged and annotated with a part-of-speech tagset. We then constructed a dataset by taking all the adjectives from the corpus and removing adjectives with no orientation; the resulting set contained 923 adjective words. The system includes several steps such as text pre-processing and clustering. The text pre-processing aims to increase the accuracy, and the clustering method then assigns each word to its related sentiment, positive or negative. With improvements to the text pre-processing, an accuracy of 72% can be achieved.
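PMI-IR scores the orientation of a word by comparing how strongly it co-occurs with known positive versus negative seed words. A minimal sketch of that scoring rule is given below; the seed words, toy corpus handling, and co-occurrence counts are hypothetical placeholders, not the resources or exact formulation used in the paper.

```python
import math
from collections import Counter
from itertools import combinations

POS_SEEDS = {"baik", "bagus"}      # hypothetical positive seed adjectives
NEG_SEEDS = {"buruk", "jelek"}     # hypothetical negative seed adjectives

def cooccurrence_counts(sentences):
    """Count word frequencies and sentence-level co-occurrence pairs."""
    word_counts, pair_counts = Counter(), Counter()
    for sent in sentences:
        words = set(sent)
        word_counts.update(words)
        pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))
    return word_counts, pair_counts, len(sentences)

def pmi(w1, w2, word_counts, pair_counts, n):
    joint = pair_counts[frozenset((w1, w2))]
    if joint == 0 or word_counts[w1] == 0 or word_counts[w2] == 0:
        return 0.0
    return math.log2((joint * n) / (word_counts[w1] * word_counts[w2]))

def semantic_orientation(word, word_counts, pair_counts, n):
    """SO(word) = sum of PMI with positive seeds minus PMI with negative seeds."""
    pos = sum(pmi(word, s, word_counts, pair_counts, n) for s in POS_SEEDS)
    neg = sum(pmi(word, s, word_counts, pair_counts, n) for s in NEG_SEEDS)
    return pos - neg
```

A word would then be labelled positive when its score is above zero and negative otherwise; clustering over such scores, as in the paper, can replace the hard threshold.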
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tennenberg, S.D.; Jacobs, M.P.; Solomkin, J.S.
1987-04-01
Two methods for predicting adult respiratory distress syndrome (ARDS) were evaluated prospectively in a group of 81 multitrauma and sepsis patients considered at clinical high risk. A popular ARDS risk-scoring method, employing discriminant analysis equations (weighted risk criteria and oxygenation characteristics), yielded a predictive accuracy of 59% and a false-negative rate of 22%. Pulmonary alveolar-capillary permeability (PACP) was determined with a radioaerosol lung-scan technique in 23 of these 81 patients, representing a statistically similar subgroup. Lung scanning achieved a predictive accuracy of 71% (after excluding patients with unilateral pulmonary contusion) and gave no false-negatives. We propose a combination of clinical risk identification and functional determination of PACP to assess a patient's risk of developing ARDS.
Revisiting measuring colour gamut of the color-reproducing system: interpretation aspects
NASA Astrophysics Data System (ADS)
Sysuev, I. A.; Varepo, L. G.; Trapeznikova, O. V.
2018-04-01
According to the ISO standard, the color gamut body volume is used to evaluate the color reproduction quality. The specified volume describes the number of colors that lie in a certain area of the color space. There are methods for evaluating the reproduction quality of a multi-colour image using numerical integration, but this approach does not provide high accuracy of the analysis. The task of increasing the accuracy of the color reproduction evaluation therefore remains relevant. In order to determine the color mass of a color space area, it is suggested to select the necessary color density values from a map corresponding to a given degree of sampling rather than calculating them mathematically, which reflects the practical significance and novelty of this solution.
Lunardini, Francesca; Bertucco, Matteo; Casellato, Claudia; Bhanpuri, Nasir; Pedrocchi, Alessandra; Sanger, Terence D.
2015-01-01
Motor speed and accuracy are both affected in childhood dystonia. Thus, deriving a speed-accuracy function is an important metric for assessing motor impairments in dystonia. Previous work in dystonia studied the speed-accuracy trade-off during point-to-point tasks. To achieve a more relevant measurement of functional abilities in dystonia, the present study investigates upper-limb kinematics and electromyographic activity of 8 children with dystonia and 8 healthy children during a trajectory-constrained child-relevant task that emulates self-feeding with a spoon and requires continuous monitoring of accuracy. The speed-accuracy trade-off is examined by changing the spoon size to create different accuracy demands. Results demonstrate that the trajectory-constrained speed-accuracy relation is present in both groups, but it is altered in dystonia in terms of increased slope and offset towards longer movement times. Findings are consistent with the hypothesis of increased signal-dependent noise in dystonia, which may partially explain the slow and variable movements observed in dystonia. PMID:25895910
Lunardini, Francesca; Bertucco, Matteo; Casellato, Claudia; Bhanpuri, Nasir; Pedrocchi, Alessandra; Sanger, Terence D
2015-10-01
Motor speed and accuracy are both affected in childhood dystonia. Thus, deriving a speed-accuracy function is an important metric for assessing motor impairments in dystonia. Previous work in dystonia studied the speed-accuracy trade-off during point-to-point tasks. To achieve a more relevant measurement of functional abilities in dystonia, the present study investigates upper-limb kinematics and electromyographic activity of 8 children with dystonia and 8 healthy children during a trajectory-constrained child-relevant task that emulates self-feeding with a spoon and requires continuous monitoring of accuracy. The speed-accuracy trade-off is examined by changing the spoon size to create different accuracy demands. Results demonstrate that the trajectory-constrained speed-accuracy relation is present in both groups, but it is altered in dystonia in terms of increased slope and offset toward longer movement times. Findings are consistent with the hypothesis of increased signal-dependent noise in dystonia, which may partially explain the slow and variable movements observed in dystonia. © The Author(s) 2015.
Delagrange, Sylvain; Rochon, Pascal
2011-10-01
To meet the increasing need for rapid and non-destructive extraction of canopy traits, two methods were used and compared with regard to their accuracy in estimating 2-D and 3-D parameters of a hybrid poplar sapling. The first method consisted of the analysis of high definition photographs in Tree Analyser (TA) software (PIAF-INRA/Kasetsart University). TA allowed the extraction of individual traits using a space carving approach. The second method utilized 3-D point clouds acquired from terrestrial light detection and ranging (T-LiDAR) scans. T-LiDAR scans were performed on trees without leaves to reconstruct the lignified structure of the sapling. From this skeleton, foliage was added using simple modelling rules extrapolated from field measurements. Validation of the estimated dimension and the accuracy of reconstruction was then achieved by comparison with an empirical data set. TA was found to be slightly less precise than T-LiDAR for estimating tree height, canopy height and mean canopy diameter, but for 2-D traits both methods were, however, fully satisfactory. TA tended to over-estimate total leaf area (error up to 50 %), but better estimates were obtained by reducing the size of the voxels used for calculations. In contrast, T-LiDAR estimated total leaf area with an error of <6 %. Finally, both methods led to an over-estimation of canopy volume. With respect to this trait, T-LiDAR (14·5 % deviation) greatly surpassed the accuracy of TA (up to 50 % deviation), even if the voxels used were reduced in size. Taking into account their magnitude of data acquisition and analysis and their accuracy in trait estimations, both methods showed contrasting potential future uses. Specifically, T-LiDAR is a particularly promising tool for investigating the development of large perennial plants, by itself or in association with plant modelling.
Throwing speed and accuracy in baseball and cricket players.
Freeston, Jonathan; Rooney, Kieron
2014-06-01
Throwing speed and accuracy are both critical to sports performance but cannot be optimized simultaneously. This speed-accuracy trade-off (SATO) is evident across a number of throwing groups but remains poorly understood. The goal was to describe the SATO in baseball and cricket players and determine the speed that optimizes accuracy. 20 grade-level baseball and cricket players performed 10 throws at 80% and 100% of maximal throwing speed (MTS) toward a cricket stump. Baseball players then performed a further 10 throws at 70%, 80%, 90%, and 100% of MTS toward a circular target. Baseball players threw faster with greater accuracy than cricket players at both speeds. Both groups demonstrated a significant SATO as vertical error increased with increases in speed; the trade-off was worse for cricketers than baseball players. Accuracy was optimized at 70% of MTS for baseballers. Throwing athletes should decrease speed when accuracy is critical. Cricket players could adopt baseball-training practices to improve throwing performance.
Seli, Paul; Cheyne, James Allan; Smilek, Daniel
2012-03-01
In two studies of a GO-NOGO task assessing sustained attention, we examined the effects of (1) altering speed-accuracy trade-offs through instructions (emphasizing both speed and accuracy or accuracy only) and (2) auditory alerts distributed throughout the task. Instructions emphasizing accuracy reduced errors and changed the distribution of GO trial RTs. Additionally, correlations between errors and increasing RTs produced a U-function; excessively fast and slow RTs accounted for much of the variance of errors. Contrary to previous reports, alerts increased errors and RT variability. The results suggest that (1) standard instructions for sustained attention tasks, emphasizing speed and accuracy equally, produce errors arising from attempts to conform to the misleading requirement for speed, which become conflated with attention-lapse produced errors and (2) auditory alerts have complex, and sometimes deleterious, effects on attention. We argue that instructions emphasizing accuracy provide a more precise assessment of attention lapses in sustained attention tasks. Copyright © 2011 Elsevier Inc. All rights reserved.
Analysis conditions of an industrial Al-Mg-Si alloy by conventional and 3D atom probes.
Danoix, F; Miller, M K; Bigot, A
2001-10-01
Industrial 6016 Al-Mg-Si(Cu) alloys are presently regarded as attractive candidates for heat treatable sheet materials. Their mechanical properties can be adjusted for a given application by age hardening of the alloys. The resulting microstructural evolution takes place at the nanometer scale, making the atom probe a well-suited instrument to study it. The accuracy of atom probe analysis of these aluminium alloys is a key point for the understanding of the fine-scale microstructural evolution. It is known to be strongly dependent on the analysis conditions (such as specimen temperature and pulse fraction), which have been widely studied for 1D atom probes. The development of the 3D instruments, as well as the increase of the evaporation pulse repetition rate, have led to different analysis conditions, in particular evaporation and detection rates. The influence of various experimental parameters on the accuracy of atom probe data, in particular with regard to hydride formation sensitivity, has been reinvestigated. It is shown that hydrogen contamination is strongly dependent on the electric field at the specimen surface, and that high evaporation rates are beneficial. Conversely, the detection rate must be limited to less than 0.02 atoms/pulse in order to prevent a drastic pile-up effect.
Longitudinal study of fingerprint recognition.
Yoon, Soweon; Jain, Anil K
2015-07-14
Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.
Longitudinal study of fingerprint recognition
Yoon, Soweon; Jain, Anil K.
2015-01-01
Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject’s age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis. PMID:26124106
Larger core size has superior technical and analytical accuracy in bladder tissue microarray.
Eskaros, Adel Rh; Egloff, Shanna A Arnold; Boyd, Kelli L; Richardson, Joyce E; Hyndman, M Eric; Zijlstra, Andries
2017-03-01
The construction of tissue microarrays (TMAs) with cores from a large number of paraffin-embedded tissues (donors) into a single paraffin block (recipient) is an effective method of analyzing samples from many patient specimens simultaneously. For the TMA to be successful, the cores within it must capture the correct histologic areas from the donor blocks (technical accuracy) and maintain concordance with the tissue of origin (analytical accuracy). This can be particularly challenging for tissues with small histological features such as small islands of carcinoma in situ (CIS), thin layers of normal urothelial lining of the bladder, or cancers that exhibit intratumor heterogeneity. In an effort to create a comprehensive TMA of a bladder cancer patient cohort that accurately represents the tumor heterogeneity and captures the small features of normal and CIS, we determined how core size (0.6 vs 1.0 mm) impacted the technical and analytical accuracy of the TMA. The larger 1.0 mm core exhibited better technical accuracy for all tissue types at 80.9% (normal), 94.2% (tumor), and 71.4% (CIS) compared with 58.6%, 85.9%, and 63.8% for 0.6 mm cores. Although the 1.0 mm core provided better tissue capture, increasing the number of replicates from two to three, which the 0.6 mm core allows, compensated for this reduced technical accuracy. However, quantitative image analysis of proliferation using both Ki67+ immunofluorescence counts and manual mitotic counts demonstrated that the 1.0 mm core size also exhibited significantly greater analytical accuracy (P=0.004 and 0.035, respectively; r² = 0.979 and 0.669, respectively). Ultimately, our findings demonstrate that capturing two or more 1.0 mm cores for TMA construction provides superior technical and analytical accuracy over the smaller 0.6 mm cores, especially for tissues harboring small histological features or substantial heterogeneity.
Sulimov, Alexey V; Kutov, Danil C; Katkova, Ekaterina V; Ilin, Ivan S; Sulimov, Vladimir B
2017-11-01
Discovery of new inhibitors of the protein associated with a given disease is the initial and most important stage of the whole process of the rational development of new pharmaceutical substances. New inhibitors block the active site of the target protein and the disease is cured. Computer-aided molecular modeling can considerably increase the effectiveness of new inhibitor development. Reliable prediction of target protein inhibition by a small molecule (ligand) is defined by the accuracy of docking programs. Such programs position a ligand in the target protein and estimate the protein-ligand binding energy. The positioning accuracy of modern docking programs is satisfactory. However, the accuracy of binding energy calculations is too low to predict good inhibitors. For effective application of docking programs to the development of new inhibitors, the accuracy of binding energy calculations should be better than 1 kcal/mol. Reasons for the limited accuracy of modern docking programs are discussed. One of the most important aspects limiting this accuracy is the imperfection of protein-ligand energy calculations. Results of supercomputer validation of several force fields and quantum-chemical methods for docking are presented. The validation was performed by quasi-docking as follows. First, the low-energy minima spectra of 16 protein-ligand complexes were found by exhaustive minima search in the MMFF94 force field. Second, the energies of the lowest 8192 minima were recalculated with the CHARMM force field and the PM6-D3H4X and PM7 quantum-chemical methods for each complex. The analysis of the minima energies reveals that the docking positioning accuracies of the PM7 and PM6-D3H4X quantum-chemical methods and the CHARMM force field are close to one another and better than the positioning accuracy of the MMFF94 force field. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sun, Xiaoqin; Lee, Kyoung Ok; Medina, Mario A.; Chu, Youhong; Li, Chuanchang
2018-06-01
Differential scanning calorimetry (DSC) analysis is a standard thermal analysis technique used to determine the phase transition temperature, enthalpy, heat of fusion, specific heat and activation energy of phase change materials (PCMs). To determine the appropriate heating rate and sample mass, various DSC measurements were carried out using two kinds of PCMs, namely N-octadecane paraffin and calcium chloride hexahydrate. The variations in phase transition temperature, enthalpy, heat of fusion, specific heat and activation energy were observed within applicable heating rates and sample masses. It was found that the phase transition temperature range increased with increasing heating rate and sample mass, while the heat of fusion varied without any established pattern. The specific heat decreased with increasing heating rate and sample mass. For accuracy purposes, it is recommended that for PCMs with high thermal conductivity (e.g., hydrated salts) the focus be on heating rate rather than sample mass.
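In DSC, the heat of fusion is obtained by integrating the heat-flow signal over the melting peak (after baseline subtraction) and normalising by the sample mass. The sketch below illustrates that integration on synthetic data; the peak shape, baseline, and sample mass are assumptions for demonstration only, not values from the measurements above.

```python
import numpy as np

# Synthetic DSC trace: heat flow [mW] vs time [s] with a Gaussian melting peak.
time_s = np.linspace(0, 600, 3001)
baseline_mw = 0.2 + 0.0005 * time_s                  # assumed linear baseline
peak_mw = 8.0 * np.exp(-((time_s - 300) / 40) ** 2)  # assumed melting endotherm
heat_flow_mw = baseline_mw + peak_mw

sample_mass_mg = 10.0                                 # assumed sample mass

# Subtract the baseline and integrate the peak to get the heat of fusion.
net_mw = heat_flow_mw - baseline_mw
heat_mj = np.trapz(net_mw, time_s)                    # mW * s = mJ
heat_of_fusion = heat_mj / sample_mass_mg             # mJ/mg = J/g
print(f"heat of fusion ~ {heat_of_fusion:.1f} J/g")
```

Because faster heating rates and larger masses broaden and shift the measured peak, the integration limits and baseline choice directly affect the reported enthalpy, which is why the abstract stresses the choice of heating rate and sample mass.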
van Dijken, Bart R J; van Laar, Peter Jan; Holtman, Gea A; van der Hoorn, Anouk
2017-10-01
Treatment response assessment in high-grade gliomas uses contrast enhanced T1-weighted MRI, but is unreliable. Novel advanced MRI techniques have been studied, but the accuracy is not well known. Therefore, we performed a systematic meta-analysis to assess the diagnostic accuracy of anatomical and advanced MRI for treatment response in high-grade gliomas. Databases were searched systematically. Study selection and data extraction were done by two authors independently. Meta-analysis was performed using a bivariate random effects model when ≥5 studies were included. Anatomical MRI (five studies, 166 patients) showed a pooled sensitivity and specificity of 68% (95%CI 51-81) and 77% (45-93), respectively. Pooled apparent diffusion coefficients (seven studies, 204 patients) demonstrated a sensitivity of 71% (60-80) and specificity of 87% (77-93). DSC-perfusion (18 studies, 708 patients) sensitivity was 87% (82-91) with a specificity of 86% (77-91). DCE-perfusion (five studies, 207 patients) sensitivity was 92% (73-98) and specificity was 85% (76-92). The sensitivity of spectroscopy (nine studies, 203 patients) was 91% (79-97) and specificity was 95% (65-99). Advanced techniques showed higher diagnostic accuracy than anatomical MRI, the highest for spectroscopy, supporting the use in treatment response assessment in high-grade gliomas. • Treatment response assessment in high-grade gliomas with anatomical MRI is unreliable • Novel advanced MRI techniques have been studied, but diagnostic accuracy is unknown • Meta-analysis demonstrates that advanced MRI showed higher diagnostic accuracy than anatomical MRI • Highest diagnostic accuracy for spectroscopy and perfusion MRI • Supports the incorporation of advanced MRI in high-grade glioma treatment response assessment.
Fully automatic and precise data analysis developed for time-of-flight mass spectrometry.
Meyer, Stefan; Riedo, Andreas; Neuland, Maike B; Tulej, Marek; Wurz, Peter
2017-09-01
Scientific objectives of current and future space missions are focused on the investigation of the origin and evolution of the solar system, with particular emphasis on habitability and signatures of past and present life. For in situ measurements of the chemical composition of solid samples on planetary surfaces, the neutral atmospheric gas and the thermal plasma of planetary atmospheres, the application of mass spectrometers making use of time-of-flight mass analysers is a widely used technique. However, such investigations imply measurements with good statistics and, thus, a large amount of data to be analysed. Therefore, faster and especially robust automated data analysis with enhanced accuracy is required. In this contribution, an automatic data analysis software, which allows fast and precise quantitative data analysis of time-of-flight mass spectrometric data, is presented and discussed in detail. A crucial part of this software is a robust and fast peak finding algorithm with a consecutive numerical integration method allowing precise data analysis. We tested our analysis software with data from different time-of-flight mass spectrometers and different measurement campaigns thereof. The quantitative analysis of isotopes, using automatic data analysis, yields results with an accuracy of isotope ratios up to 100 ppm for a signal-to-noise ratio (SNR) of 10⁴. We show that the accuracy of isotope ratios is in fact proportional to SNR⁻¹. Furthermore, we observe that the accuracy of isotope ratios is inversely proportional to the mass resolution. Additionally, we show that the accuracy of isotope ratios depends on the sample width Tₛ, scaling as Tₛ^0.5. Copyright © 2017 John Wiley & Sons, Ltd.
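The pipeline described, peak detection followed by numerical integration of each mass peak, can be sketched in a few lines. The example below is a hypothetical illustration using generic SciPy routines, not the software presented in the paper; the synthetic spectrum, peak positions, and integration window are placeholders.

```python
import numpy as np
from scipy.signal import find_peaks

def isotope_ratio(mass_axis, intensity, centers, half_width=0.2):
    """Integrate two mass peaks and return their abundance ratio."""
    areas = []
    for c in centers:
        sel = (mass_axis > c - half_width) & (mass_axis < c + half_width)
        areas.append(np.trapz(intensity[sel], mass_axis[sel]))
    return areas[0] / areas[1]

# Synthetic spectrum with two isotope peaks at m/z 54 and 56 (placeholder values).
mz = np.linspace(50, 60, 5000)
spectrum = (0.06 * np.exp(-((mz - 54.0) / 0.02) ** 2)
            + 1.00 * np.exp(-((mz - 56.0) / 0.02) ** 2)
            + 0.001 * np.random.default_rng(1).standard_normal(mz.size))

# Detect peaks above a noise threshold, then integrate around the two largest.
idx, props = find_peaks(spectrum, height=0.01, distance=50)
top2 = mz[idx[np.argsort(props["peak_heights"])[-2:]]]
print(f"isotope ratio ~ {isotope_ratio(mz, spectrum, sorted(top2)):.4f}")
```

The reported SNR⁻¹ scaling follows intuitively from this scheme: noise in the integrated peak areas propagates directly into the ratio, so higher SNR tightens the ratio estimate.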
Modifications to the accuracy assessment analysis routine MLTCRP to produce an output file
NASA Technical Reports Server (NTRS)
Carnes, J. G.
1978-01-01
Modifications are described that were made to the analysis program MLTCRP in the accuracy assessment software system to produce a disk output file. The output files produced by this modified program are used to aggregate data for regions greater than a single segment.
6DOF Testing of the SLS Inertial Navigation Unit
NASA Technical Reports Server (NTRS)
Geohagan, Kevin W.; Bernard, William P.; Oliver, T. Emerson; Strickland, Dennis J.; Leggett, Jared O.
2018-01-01
The Navigation System on the NASA Space Launch System (SLS) Block 1 vehicle performs initial alignment of the Inertial Navigation System (INS) navigation frame through gyrocompass alignment (GCA). In lieu of direct testing of GCA accuracy in support of requirement verification, the SLS Navigation Team proposed and conducted an engineering test to, among other things, validate the GCA performance and overall behavior of the SLS INS model through comparison with test data. This paper will detail dynamic hardware testing of the SLS INS, conducted by the SLS Navigation Team at Marshall Space Flight Center's 6DOF Table Facility, in support of GCA performance characterization and INS model validation. A 6-DOF motion platform was used to produce 6DOF pad twist and sway dynamics while a simulated SLS flight computer communicated with the INS. Tests conducted include an evaluation of GCA algorithm robustness to increasingly dynamic pad environments, an examination of GCA algorithm stability and accuracy over long durations, and a long-duration static test to gather enough data for Allan Variance analysis. Test setup, execution, and data analysis will be discussed, including analysis performed in support of SLS INS model validation.
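Allan variance, mentioned above for the long-duration static test, characterises inertial sensor noise by averaging the signal over clusters of increasing length τ and examining how successive cluster means differ. The non-overlapping form is sketched below on synthetic rate data; the sample rate and noise level are assumptions for illustration, not SLS INS values.

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Non-overlapping Allan deviation of a rate signal sampled at fs [Hz]."""
    adev = []
    for tau in taus:
        m = int(round(tau * fs))                  # samples per cluster
        n_clusters = rate.size // m
        means = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)  # Allan variance at this tau
        adev.append(np.sqrt(avar))
    return np.array(adev)

fs = 100.0                                         # assumed sample rate [Hz]
rng = np.random.default_rng(0)
gyro = 0.01 * rng.standard_normal(int(3600 * fs))  # white-noise gyro, 1 h of data
taus = np.logspace(-1, 2, 20)                      # cluster times from 0.1 s to 100 s
print(allan_deviation(gyro, fs, taus))
```

Plotting the deviation against τ on log-log axes separates noise terms such as angle random walk and bias instability by their characteristic slopes, which is the point of gathering a long static data set.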
ASME V\\&V challenge problem: Surrogate-based V&V
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beghini, Lauren L.; Hough, Patricia D.
2015-12-18
The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
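Surrogate-based uncertainty quantification replaces the expensive simulator with a cheap approximation fitted to a handful of runs, then propagates input uncertainty through that surrogate. The sketch below uses a Gaussian process from scikit-learn on a stand-in one-dimensional "simulator"; the model, input distribution, and sample counts are illustrative assumptions, not the challenge-problem setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x):
    """Stand-in for a long-running physics code (assumed for illustration)."""
    return np.sin(3.0 * x) + 0.2 * x**2

# Train the surrogate on a small number of "simulation runs".
x_train = np.linspace(0.0, 3.0, 12).reshape(-1, 1)
y_train = expensive_simulation(x_train).ravel()
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(x_train, y_train)

# Propagate input uncertainty through the cheap surrogate instead of the simulator.
rng = np.random.default_rng(0)
x_samples = rng.normal(loc=1.5, scale=0.3, size=(10_000, 1))   # assumed input PDF
y_samples = gp.predict(x_samples)
print(f"output mean {y_samples.mean():.3f}, std {y_samples.std():.3f}")
```

The same trained surrogate can also be reused for calibration and sensitivity studies, which is how it tips the tradeoff between accuracy and computational demand.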
Gomes, Ciro Martins; Mazin, Suleimy Cristina; dos Santos, Elisa Raphael; Cesetti, Mariana Vicente; Bächtold, Guilherme Albergaria Brízida; Cordeiro, João Henrique de Freitas; Theodoro, Fabrício Claudino Estrela Terra; Damasco, Fabiana dos Santos; Carranza, Sebastián Andrés Vernal; Santos, Adriana de Oliveira; Roselino, Ana Maria; Sampaio, Raimunda Nonata Ribeiro
2015-01-01
The diagnosis of mucocutaneous leishmaniasis (MCL) is hampered by the absence of a gold standard. An accurate diagnosis is essential because of the high toxicity of the medications for the disease. This study aimed to assess the ability of polymerase chain reaction (PCR) to identify MCL and to compare these results with clinical research recently published by the authors. A systematic literature review based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement was performed using comprehensive search criteria and communication with the authors. A meta-analysis considering the estimates of the univariate and bivariate models was performed. Specificity near 100% was common among the papers. The primary reason for accuracy differences was sensitivity. The meta-analysis, which was only possible for PCR samples of lesion fragments, revealed a sensitivity of 71% [95% confidence interval (CI) = 0.59; 0.81] and a specificity of 93% (95% CI = 0.83; 0.98) in the bivariate model. The search for measures that could increase the sensitivity of PCR should be encouraged. The quality of the collected material and the optimisation of the amplification of genetic material should be prioritised. PMID:25946238
A Cardiac Early Warning System with Multi Channel SCG and ECG Monitoring for Mobile Health
Sahoo, Prasan Kumar; Thakkar, Hiren Kumar; Lee, Ming-Yih
2017-01-01
Use of information and communication technology such as smart phone, smart watch, smart glass and portable health monitoring devices for healthcare services has made Mobile Health (mHealth) an emerging research area. Coronary Heart Disease (CHD) is considered as a leading cause of death world wide and an increasing number of people die prematurely due to CHD. Under such circumstances, there is a growing demand for a reliable cardiac monitoring system to catch the intermittent abnormalities and detect critical cardiac behaviors which lead to sudden death. Use of mobile devices to collect Electrocardiography (ECG), Seismocardiography (SCG) data and efficient analysis of those data can monitor a patient’s cardiac activities for early warning. This paper presents a novel cardiac data acquisition method and combined analysis of Electrocardiography (ECG) and multi channel Seismocardiography (SCG) data. An early warning system is implemented to monitor the cardiac activities of a person and accuracy assessment of the early warning system is conducted for the ECG data only. The assessment shows 88% accuracy and effectiveness of our proposed analysis, which implies the viability and applicability of the proposed early warning system. PMID:28353681
A Cardiac Early Warning System with Multi Channel SCG and ECG Monitoring for Mobile Health.
Sahoo, Prasan Kumar; Thakkar, Hiren Kumar; Lee, Ming-Yih
2017-03-29
Use of information and communication technology such as smart phone, smart watch, smart glass and portable health monitoring devices for healthcare services has made Mobile Health (mHealth) an emerging research area. Coronary Heart Disease (CHD) is considered as a leading cause of death world wide and an increasing number of people die prematurely due to CHD. Under such circumstances, there is a growing demand for a reliable cardiac monitoring system to catch the intermittent abnormalities and detect critical cardiac behaviors which lead to sudden death. Use of mobile devices to collect Electrocardiography (ECG), Seismocardiography (SCG) data and efficient analysis of those data can monitor a patient's cardiac activities for early warning. This paper presents a novel cardiac data acquisition method and combined analysis of Electrocardiography (ECG) and multi channel Seismocardiography (SCG) data. An early warning system is implemented to monitor the cardiac activities of a person and accuracy assessment of the early warning system is conducted for the ECG data only. The assessment shows 88% accuracy and effectiveness of our proposed analysis, which implies the viability and applicability of the proposed early warning system.
Classification of right-hand grasp movement based on EMOTIV Epoc+
NASA Astrophysics Data System (ADS)
Tobing, T. A. M. L.; Prawito, Wijaya, S. K.
2017-07-01
Combinations of BCT elements for right-hand grasp movement have been obtained, providing the average values of their classification accuracy. The aim of this study is to find a suitable combination for the best classification accuracy of right-hand grasp movement based on the EEG headset EMOTIV Epoc+. There are three movement classifications: grasping hand, relax, and opening hand. These classifications take advantage of the Event-Related Desynchronization (ERD) phenomenon, which makes it possible to distinguish the relaxation, imagery, and movement states from each other. The combinations of elements are the usage of Independent Component Analysis (ICA), spectrum analysis by Fast Fourier Transform (FFT), maximum mu and beta power with their frequencies as features, and the classifiers Probabilistic Neural Network (PNN) and Radial Basis Function (RBF). The average values of classification accuracy are ±83% for training and ±57% for testing. To give a better understanding of the signal quality recorded by the EMOTIV Epoc+, the classification accuracy for left- or right-hand grasping movement EEG signals provided by PhysioNet is also given, i.e., ±85% for training and ±70% for testing. A comparison of the accuracy values from each combination, experiment condition, and the external EEG data is provided for the purpose of analyzing the classification accuracy.
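The feature set described, maximum mu and beta band power and the frequencies at which they occur, obtained from an FFT of an EEG epoch, can be computed compactly. The sketch below is a generic illustration with an assumed sample rate and a synthetic epoch; it is not the EMOTIV processing chain from the paper.

```python
import numpy as np

FS = 128.0                      # assumed EMOTIV-like sample rate [Hz]
MU_BAND = (8.0, 13.0)           # mu rhythm [Hz]
BETA_BAND = (13.0, 30.0)        # beta rhythm [Hz]

def band_peak_features(epoch, fs=FS):
    """Return (max power, frequency of max) in the mu and beta bands."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    feats = []
    for lo, hi in (MU_BAND, BETA_BAND):
        sel = (freqs >= lo) & (freqs < hi)
        k = np.argmax(power[sel])
        feats.extend([power[sel][k], freqs[sel][k]])
    return np.array(feats)       # [mu_power, mu_freq, beta_power, beta_freq]

# Synthetic 2-second epoch with a 10 Hz (mu) component plus noise.
t = np.arange(0, 2.0, 1.0 / FS)
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
print(band_peak_features(epoch))
```

These per-channel values would then feed a classifier such as the PNN or RBF network mentioned above; ERD appears as a drop in mu/beta power during movement or motor imagery relative to relaxation.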
Improving the accuracy of k-nearest neighbor using local mean based and distance weight
NASA Astrophysics Data System (ADS)
Syaliman, K. U.; Nababan, E. B.; Sitompul, O. S.
2018-03-01
In k-nearest neighbor (kNN), the determination of classes for new data is normally performed by a simple majority vote system, which may ignore the similarities among data, as well as allowing the occurrence of a double (tied) majority class that can lead to misclassification. In this research, we propose an approach to resolve the majority vote issues by calculating the distance weight using a combination of local mean based k-nearest neighbor (LMKNN) and distance weight k-nearest neighbor (DWKNN). The accuracy of the results is compared to the accuracy acquired from the original kNN method using several datasets from the UCI Machine Learning Repository, Kaggle and Keel, such as ionosphere, iris, voice genre, lower back pain, and thyroid. In addition, the proposed method is also tested using real data from a public senior high school in the city of Tualang, Indonesia. Results show that the combination of LMKNN and DWKNN was able to increase the classification accuracy of kNN, with an average accuracy gain on test data of 2.45% and the highest increase in accuracy of 3.71% occurring on the lower back pain symptoms dataset. For the real data, the increase in accuracy is as high as 5.16%.
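The combination described replaces the plain majority vote: for each class, the k nearest neighbours of the query are averaged into a local mean vector, and classes are then scored by an inverse-distance weight to those local means. The sketch below is a simplified reading of that idea on made-up data, not the authors' exact formulation.

```python
import numpy as np

def lmknn_dwknn_predict(X_train, y_train, x, k=3):
    """Classify x by inverse-distance weight to per-class local mean vectors."""
    scores = {}
    for cls in np.unique(y_train):
        Xc = X_train[y_train == cls]
        d = np.linalg.norm(Xc - x, axis=1)
        nearest = Xc[np.argsort(d)[:k]]          # k nearest samples of this class
        local_mean = nearest.mean(axis=0)        # LMKNN: local mean vector
        dist = np.linalg.norm(local_mean - x)
        scores[cls] = 1.0 / (dist + 1e-12)       # DWKNN-style inverse-distance weight
    return max(scores, key=scores.get)

# Toy two-class data (placeholder, not the UCI/Kaggle/Keel datasets from the paper).
rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 0.5, size=(20, 2))
X1 = rng.normal([2.0, 2.0], 0.5, size=(20, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 20 + [1] * 20)
print(lmknn_dwknn_predict(X_train, y_train, np.array([1.8, 1.7])))
```

Scoring classes by distance to a local mean avoids ties between classes and lets closer, more similar neighbourhoods dominate, which is the stated motivation for the combination.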
2011-01-01
Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning methods like Neural Networks, Support Vector Machines and Random Forests can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, Area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and area under the ROC curve (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73) with high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even lower than a median value of 0.5. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
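The comparison described, several classifiers scored on the same feature matrix with k-fold cross-validation, maps directly onto standard scikit-learn tooling. The sketch below is a generic illustration on synthetic data; the predictors, folds, and models merely stand in for the neuropsychological tests and classifier set used in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for 10 neuropsychological test scores and progression labels.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6, random_state=0)

models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    sens = cross_val_score(model, X, y, cv=5, scoring="recall")
    print(f"{name}: accuracy {acc.mean():.2f}, sensitivity {sens.mean():.2f}")
```

Reporting sensitivity alongside accuracy, as the study does, guards against classifiers that achieve high accuracy mainly by predicting the majority class.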
Inci, Ercan; Ekizoglu, Oguzhan; Turkay, Rustu; Aksoy, Sema; Can, Ismail Ozgur; Solmaz, Dilek; Sayin, Ibrahim
2016-10-01
Morphometric analysis of the mandibular ramus (MR) provides highly accurate data to discriminate sex. The objective of this study was to demonstrate the utility and accuracy of MR morphometric analysis for sex identification in a Turkish population. Four hundred fifteen Turkish patients (18-60 y; 201 male and 214 female) who had previously had multidetector computed tomography scans of the cranium were included in the study. Multidetector computed tomography images were obtained using three-dimensional reconstructions and a volume-rendering technique, and 8 linear and 3 angular values were measured. Univariate, bivariate, and multivariate discriminant analyses were performed, and the accuracy rates for determining sex were calculated. Mandibular ramus values produced high accuracy rates of 51% to 95.6%. Upper ramus vertical height had the highest rate at 95.6%, and bivariate analysis showed 89.7% to 98.6% accuracy rates with the highest ratios of mandibular flexure upper border and maximum ramus breadth. Stepwise discrimination analysis gave a 99% accuracy rate for all MR variables. Our study showed that the MR, in particular morphometric measures of the upper part of the ramus, can provide valuable data to determine sex in a Turkish population. The method combines both anthropological and radiologic studies.
Hwang, Chang Yun; Song, Tae Jun; Moon, Sung-Hoon; Lee, Don; Park, Do Hyun; Seo, Dong Wan; Lee, Sung Koo; Kim, Myung-Hwan
2009-01-01
Background/Aims Although endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) has been introduced and its use has been increasing in Korea, there have not been many reports about its performance. The aim of this study was to assess the utility of EUS-FNA without an on-site cytopathologist in establishing the diagnosis of solid pancreatic and peripancreatic masses at a single institution in Korea. Methods Medical records of 139 patients who underwent EUS-FNA for a pancreatic or peripancreatic solid mass in the year 2007 were retrospectively reviewed. By comparing the cytopathologic diagnosis of FNA with the final diagnosis, sensitivity, specificity, and accuracy were determined, and factors influencing the accuracy as well as complications were analyzed. Results One hundred twenty out of 139 cases had a final diagnosis of malignancy. Sensitivity, specificity, and accuracy of EUS-FNA were 82%, 89%, and 83%, respectively, and positive and negative predictive values were 100% and 46%, respectively. As for factors influencing the accuracy of FNA, lesion size was marginally significant (p-value 0.08) by multivariate analysis. Conclusions EUS-FNA performed without an on-site cytopathologist was found to be accurate and safe, and thus EUS-FNA should be part of the standard management algorithm for pancreatic and peripancreatic masses. PMID:20431733
Kainz, Hans; Hoang, Hoa X; Stockton, Chris; Boyd, Roslyn R; Lloyd, David G; Carty, Christopher P
2017-10-01
Gait analysis together with musculoskeletal modeling is widely used for research. In the absence of medical images, surface marker locations are used to scale a generic model to the individual's anthropometry. Studies evaluating the accuracy and reliability of different scaling approaches in a pediatric and/or clinical population have not yet been conducted and, therefore, formed the aim of this study. Magnetic resonance images (MRI) and motion capture data were collected from 12 participants with cerebral palsy and 6 typically developed participants. Accuracy was assessed by comparing the scaled model's segment measures to the corresponding MRI measures, whereas reliability was assessed by comparing the model's segments scaled with the experimental marker locations from the first and second motion capture session. The inclusion of joint centers into the scaling process significantly increased the accuracy of thigh and shank segment length estimates compared to scaling with markers alone. Pelvis scaling approaches which included the pelvis depth measure led to the highest errors compared to the MRI measures. Reliability was similar between scaling approaches with mean ICC of 0.97. The pelvis should be scaled using pelvic width and height and the thigh and shank segment should be scaled using the proximal and distal joint centers.
NASA Astrophysics Data System (ADS)
Hatzenbuhler, Chelsea; Kelly, John R.; Martinson, John; Okum, Sara; Pilgrim, Erik
2017-04-01
High-throughput DNA metabarcoding has gained recognition as a potentially powerful tool for biomonitoring, including early detection of aquatic invasive species (AIS). DNA-based techniques are advancing, but our understanding of the limits to detection for metabarcoding complex samples is inadequate. For detecting AIS at an early stage of invasion, when the species is rare, accuracy at low detection limits is key. To evaluate the utility of metabarcoding in future fish community monitoring programs, we conducted several experiments to determine the sensitivity and accuracy of routine metabarcoding methods. Experimental mixes used larval fish tissue from multiple “common” species spiked with varying proportions of tissue from an additional “rare” species. Pyrosequencing of the genetic marker COI (cytochrome c oxidase subunit I) and subsequent sequence data analysis provided experimental evidence of low-level detection of the target “rare” species at biomass percentages as low as 0.02% of total sample biomass. Limits to detection varied interspecifically and were susceptible to amplification bias. Moreover, results showed that some data processing methods can skew sequence-based biodiversity measurements away from corresponding relative biomass abundances and increase false absences. We suggest caution in interpreting presence/absence and relative abundance in larval fish assemblages until metabarcoding methods are optimized for accuracy and precision.
Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.
2014-01-01
Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test automated cropland classification algorithm (ACCA) that provide accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas based on the Moderate Resolution Imaging Spectroradiometer remote sensing data. Seasonal ACCA development process involves writing series of iterative decision tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment when compared with the U.S. Department of Agriculture (USDA) cropland data showed, on average, a producer’s accuracy of 93% and a user’s accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands with R-square values over 0.7 and field surveys with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.
Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.
Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun
2018-05-08
Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have an insufficient degree of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array to the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramer-Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.
A Coupled Surface Nudging Scheme for use in Retrospective ...
A surface analysis nudging scheme coupling atmospheric and land surface thermodynamic parameters has been implemented into WRF v3.8 (the latest version) for use with retrospective weather and climate simulations, as well as for applications in air quality, hydrology, and ecosystem modeling. This scheme is known as the flux-adjusting surface data assimilation system (FASDAS), developed by Alapaty et al. (2008). The scheme provides continuous adjustments for soil moisture and temperature (via indirect nudging) and for surface air temperature and water vapor mixing ratio (via direct nudging). The simultaneous application of indirect and direct nudging maintains greater consistency between the soil temperature–moisture and the atmospheric surface layer mass-field variables. The new method, FASDAS, consistently improved the accuracy of the model simulations at weather prediction scales for different horizontal grid resolutions, as well as for high-resolution regional climate predictions. This new capability has been released in WRF Version 3.8 as the option grid_sfdda = 2, and it increased the accuracy of atmospheric inputs for use in air quality, hydrology, and ecosystem modeling research, improving the accuracy of the respective end-point research outcomes. IMPACT: A new method, FASDAS, was implemented into the WRF model to consistently improve the accuracy of the model simulations at weather prediction scales for different horizontal grid resolutions, as wel
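Direct nudging relaxes a model variable toward an analysis (observation-based) value with a relaxation coefficient, while the indirect soil adjustment feeds back the implied surface error into soil temperature and moisture. A highly simplified sketch of the direct-nudging tendency is given below; the coefficient and time step are illustrative placeholders, not the WRF/FASDAS settings.

```python
def nudge(model_value, analysis_value, g_coeff=3.0e-4, dt=60.0):
    """One direct-nudging update: relax the model toward the analysis value.

    g_coeff: nudging coefficient [1/s] (illustrative), dt: time step [s].
    """
    tendency = g_coeff * (analysis_value - model_value)   # Newtonian relaxation
    return model_value + tendency * dt

# Example: 2-m temperature [K] pulled toward the surface analysis over one step.
t2m_model, t2m_analysis = 291.8, 293.1
print(f"nudged T2m: {nudge(t2m_model, t2m_analysis):.3f} K")
```

Applying the same relaxation idea to both the atmospheric surface layer and the underlying soil state is what keeps the two sets of variables consistent during long retrospective runs.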
Analysis the Accuracy of Digital Elevation Model (DEM) for Flood Modelling on Lowland Area
NASA Astrophysics Data System (ADS)
Zainol Abidin, Ku Hasna Zainurin Ku; Razi, Mohd Adib Mohammad; Bukari, Saifullizan Mohd
2018-04-01
Flooding is a type of natural disaster that occurs almost every year in Malaysia, and lowland areas are commonly the worst affected. This kind of disaster can be managed by using accurate data when proposing solutions, and elevation data are among the data used to produce such solutions. Research on the application of Digital Elevation Models (DEMs) in hydrology has increased, as this kind of model identifies the elevation of the areas of interest. The University of Tun Hussein Onn Malaysia is a lowland area that faced flooding in 2006. Therefore, this area was chosen for producing a DEM, focused on the University Health Centre (PKU) and the drainage area around the Civil and Environmental Faculty (FKAAS). An unmanned aerial vehicle was used to collect aerial photographs, which were then processed into DEMs at three levels of accuracy and quality in the Agisoft PhotoScan software. The higher the accuracy and quality of the DEM produced, the longer the time taken to generate it. The errors recorded while producing the DEMs differ by almost 0.01. From this, several important parameters influencing the accuracy of the DEM were identified.
NASA Astrophysics Data System (ADS)
Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.
2014-01-01
Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test automated cropland classification algorithm (ACCA) that provide accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas based on the Moderate Resolution Imaging Spectroradiometer remote sensing data. Seasonal ACCA development process involves writing series of iterative decision tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment when compared with the U.S. Department of Agriculture (USDA) cropland data showed, on average, a producer's accuracy of 93% and a user's accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands with R-square values over 0.7 and field surveys with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.; Käppeli, Roger
2017-05-01
In this paper we focus on the numerical solution of the induction equation using Runge-Kutta Discontinuous Galerkin (RKDG)-like schemes that are globally divergence-free. The induction equation plays a role in numerical MHD and other systems like it. It ensures that the magnetic field evolves in a divergence-free fashion; and that same property is shared by the numerical schemes presented here. The algorithms presented here are based on a novel DG-like method as it applies to the magnetic field components in the faces of a mesh. (I.e., this is not a conventional DG algorithm for conservation laws.) The other two novel building blocks of the method include divergence-free reconstruction of the magnetic field and multidimensional Riemann solvers; both of which have been developed in recent years by the first author. Since the method is linear, a von Neumann stability analysis is carried out in two-dimensions to understand its stability properties. The von Neumann stability analysis that we develop in this paper relies on transcribing from a modal to a nodal DG formulation in order to develop discrete evolutionary equations for the nodal values. These are then coupled to a suitable Runge-Kutta timestepping strategy so that one can analyze the stability of the entire scheme which is suitably high order in space and time. We show that our scheme permits CFL numbers that are comparable to those of traditional RKDG schemes. We also analyze the wave propagation characteristics of the method and show that with increasing order of accuracy the wave propagation becomes more isotropic and free of dissipation for a larger range of long wavelength modes. This makes a strong case for investing in higher order methods. We also use the von Neumann stability analysis to show that the divergence-free reconstruction and multidimensional Riemann solvers are essential algorithmic ingredients of a globally divergence-free RKDG-like scheme. Numerical accuracy analyses of the RKDG-like schemes are presented and compared with the accuracy of PNPM schemes. It is found that PNPM retrieve much of the accuracy of the RKDG-like schemes while permitting a larger CFL number.
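Von Neumann analysis inserts a single Fourier mode into the discrete update and tracks its amplification factor; the scheme is stable when the factor's magnitude stays at or below one for all wavenumbers. As a minimal, generic illustration of that procedure (a first-order upwind scheme for linear advection, not the RKDG discretization analysed in the paper), the sketch below evaluates |G| across wavenumbers for a few CFL numbers.

```python
import numpy as np

def upwind_amplification(theta, cfl):
    """Amplification factor G(theta) of first-order upwind for u_t + a u_x = 0."""
    return 1.0 - cfl * (1.0 - np.exp(-1j * theta))

theta = np.linspace(0.0, np.pi, 181)       # nondimensional wavenumber k*dx
for cfl in (0.5, 1.0, 1.2):
    g = np.abs(upwind_amplification(theta, cfl))
    status = "stable" if g.max() <= 1.0 + 1e-12 else "unstable"
    print(f"CFL {cfl}: max |G| = {g.max():.3f} ({status})")
```

For a full RKDG-type analysis the scalar G is replaced by an amplification matrix over the nodal degrees of freedom per cell, and the stability limit follows from its spectral radius, but the logic of scanning wavenumbers and CFL numbers is the same.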
Leonardi Dutra, Kamile; Haas, Letícia; Porporatti, André Luís; Flores-Mir, Carlos; Nascimento Santos, Juliana; Mezzomo, Luis André; Corrêa, Márcio; De Luca Canto, Graziela
2016-03-01
Endodontic diagnosis depends on accurate radiographic examination. Assessment of the location and extent of apical periodontitis (AP) can influence treatment planning and subsequent treatment outcomes. Therefore, this systematic review and meta-analysis assessed the diagnostic accuracy of conventional radiography and cone-beam computed tomographic (CBCT) imaging on the discrimination of AP from no lesion. Eight electronic databases with no language or time limitations were searched. Articles in which the primary objective was to evaluate the accuracy (sensitivity and specificity) of any type of radiographic technique to assess AP in humans were selected. The gold standard was the histologic examination for actual AP (in vivo) or in situ visualization of bone defects for induced artificial AP (in vitro). Accuracy measurements described in the studies were transformed to construct receiver operating characteristic curves and forest plots with the aid of Review Manager v.5.2 (The Nordic Cochrane Centre, Copenhagen, Denmark) and MetaDisc v.1.4. software (Unit of Clinical Biostatistics Team of the Ramón y Cajal Hospital, Madrid, Spain). The methodology of the selected studies was evaluated using the Quality Assessment Tool for Diagnostic Accuracy Studies-2. Only 9 studies met the inclusion criteria and were subjected to a qualitative analysis. A meta-analysis was conducted on 6 of these articles. All of these articles studied artificial AP with induced bone defects. The accuracy values (area under the curve) were 0.96 for CBCT imaging, 0.73 for conventional periapical radiography, and 0.72 for digital periapical radiography. No evidence was found for panoramic radiography. Periapical radiographs (digital and conventional) reported good diagnostic accuracy on the discrimination of artificial AP from no lesions, whereas CBCT imaging showed excellent accuracy values. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Heitz, Richard P; Schall, Jeffrey D
2013-10-19
The stochastic accumulation framework provides a mechanistic, quantitative account of perceptual decision-making and how task performance changes with experimental manipulations. Importantly, it provides an elegant account of the speed-accuracy trade-off (SAT), which has long been the litmus test for decision models, and also mimics the activity of single neurons in several key respects. Recently, we developed a paradigm whereby macaque monkeys trade speed for accuracy on cue during a visual search task. Single-unit activity in the frontal eye field (FEF) was not homomorphic with the architecture of these models, demonstrating that stochastic accumulators are an incomplete description of neural activity under SAT. This paper summarizes and extends this work, further demonstrating that the SAT leads to extensive, widespread changes in brain activity not previously predicted. We begin by reviewing our recently published work that establishes how spiking activity in FEF accomplishes SAT. Next, we provide two important extensions of this work. First, we report a new chronometric analysis suggesting that increases in perceptual gain with speed stress are evident in FEF synaptic input, implicating afferent sensory-processing sources. Second, we report a new analysis demonstrating a selective influence of SAT on frequency coupling between FEF neurons and local field potentials. None of these observations corresponds to the mechanics of current accumulator models.
Mapping mountain pine beetle mortality through growth trend analysis of time-series Landsat data
Liang, Lu; Chen, Yanlei; Hawbaker, Todd J.; Zhu, Zhi-Liang; Gong, Peng
2014-01-01
Disturbances are key processes in the carbon cycle of forests and other ecosystems. In recent decades, mountain pine beetle (MPB; Dendroctonus ponderosae) outbreaks have become more frequent and extensive in western North America. Remote sensing has the ability to fill the data gaps of long-term infestation monitoring, but eliminating observational noise and attributing changes quantitatively are two main challenges to its effective application. Here, we present a forest growth trend analysis method that integrates Landsat temporal trajectories and decision tree techniques to derive annual forest disturbance maps over an 11-year period. The temporal trajectory component successfully captures disturbance events as represented by spectral segments, whereas decision tree modeling efficiently recognizes and attributes events based upon the characteristics of the segments. Validated against a point set sampled across a gradient of MPB mortality, overall accuracies of 86.74% to 94.00% were achieved, with small variability in accuracy among years. In contrast, the overall accuracies of single-date classifications ranged from 37.20% to 75.20% and became comparable with our approach only when the training sample size was increased at least four-fold. This demonstrates that a key advantage of this time-series workflow is its small training-sample requirement. The easily understandable, interpretable and modifiable characteristics of our approach suggest that it could be applicable to other ecoregions.
NASA Astrophysics Data System (ADS)
Sarkar, Atasi; Sengupta, Sanghamitra; Mukherjee, Anirban; Chatterjee, Jyotirmoy
2017-02-01
Infrared (IR) spectral characterization can provide label-free cellular metabolic signatures of normal and diseased states in a rapid and non-invasive manner. The present study endeavoured to identify Fourier transform infrared (FTIR) spectroscopic signatures for normal and cancerous lung cells during chemically induced epithelial-mesenchymal transition (EMT), for which the global metabolic dimension is not yet well reported. Occurrence of EMT was validated with morphological and immunocytochemical confirmation. Pre-processed spectral data were analyzed using ANOVA and principal component analysis-linear discriminant analysis (PCA-LDA). Significant differences observed in peak areas corresponding to the biochemical fingerprint (900-1800 cm-1) and high wave-number (2800-3800 cm-1) regions contributed to adequate PCA-LDA segregation of cells undergoing EMT. The findings were validated by re-analysis of the data using another in-house binary classifier, namely vector-valued regularized kernel function approximation (VVRKFA), in order to understand EMT progression. To improve the classification accuracy, a forward feature selection (FFS) tool was employed to extract potent spectral signatures by eliminating undesirable noise. A gradual increase in classification accuracy with EMT progression in both cell types indicated the prominence of the biochemical alterations. Rapid changes in the cellular metabolome noted in cancer cells within the first 24 h of EMT induction, along with higher classification accuracy for cancer cell groups compared with normal cells, might be attributed to inherent differences between them. Spectral features were suggestive of EMT-triggered changes in nucleic acid, protein, lipid and bound water contents, which may emerge as useful markers to capture EMT-related cellular characteristics.
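A minimal sketch of a PCA-LDA classification pipeline of the kind described, written with scikit-learn; the spectra matrix, group labels, and number of retained components are simulated assumptions and do not reproduce the authors' workflow.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 900))          # hypothetical preprocessed spectra (samples x wavenumbers)
y = np.repeat([0, 1, 2], 20)            # hypothetical EMT time-point groups

# Standardize, project onto principal components, then discriminate groups
pca_lda = make_pipeline(StandardScaler(),
                        PCA(n_components=10),
                        LinearDiscriminantAnalysis())
scores = cross_val_score(pca_lda, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```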
Marraccini, Marisa E.; Weyandt, Lisa L.; Rossi, Joseph S.; Gudmundsdottir, Bergljot Gyda
2016-01-01
Increasing numbers of adults, particularly college students, are misusing prescription stimulants primarily for cognitive/academic enhancement, so it is critical to explore whether empirical findings support neurocognitive benefits of prescription stimulants. Previous meta-analytic studies have supported small benefits from prescription stimulants for the cognitive domains of inhibitory control and memory; however, no meta-analytic studies have examined the effects on processing speed or the potential impairment on other domains of cognition, including planning, decision-making, and cognitive perseveration. Therefore, the present study conducted a meta-analysis of the available literature examining the effects of prescription stimulants on specific measures of processing speed, planning, decision-making, and cognitive perseveration among healthy adult populations. The meta-analysis results indicated a positive influence of prescription stimulant medication on processing speed accuracy, with an overall mean effect size of g = 0.282 (95% CI 0.077, 0.488; n = 345). Neither improvements nor impairments were revealed for planning time, planning accuracy, advantageous decision-making, or cognitive perseveration; however findings are limited by the small number of studies examining these outcomes. Findings support that prescription stimulant medication may indeed act as a neurocognitive enhancer for accuracy measures of processing speed without impeding other areas of cognition. Considering that adults are already engaging in illegal use of prescription stimulants for academic enhancement, as well as the potential for stimulant misuse to have serious side effects, the establishment of public policies informed by interdisciplinary research surrounding this issue, whether restrictive or liberal, is of critical importance. PMID:27454675
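For readers unfamiliar with the effect-size metric reported above, the snippet below computes Hedges' g and its 95% confidence interval for a two-group comparison; the group means, SDs, and sample sizes are invented for illustration, not taken from the meta-analysis.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g with small-sample correction and an approximate 95% CI."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                        # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)           # small-sample correction factor
    g = j * d
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical treatment vs placebo accuracy scores
g, ci = hedges_g(m1=102.0, sd1=10.0, n1=40, m2=99.0, sd2=10.0, n2=40)
print(f"g = {g:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```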
NASA Astrophysics Data System (ADS)
Kumar, Dheeraj; Gautam, Amar Kant; Palmate, Santosh S.; Pandey, Ashish; Suryavanshi, Shakti; Rathore, Neha; Sharma, Nayan
2017-08-01
To support the Global Precipitation Measurement (GPM) mission, the successor to the Tropical Rainfall Measuring Mission (TRMM), this study evaluates the accuracy of TRMM Multi-satellite Precipitation Analysis (TMPA) daily-accumulated precipitation products over 5 years (2008-2012) using statistical methods and a contingency table approach. The analysis was performed on daily, monthly, seasonal and yearly bases. The TMPA precipitation estimates were also evaluated for each 0.25° × 0.25° grid point and for 18 rain gauge stations of the Betwa River basin, India. Results indicated that TMPA generally overestimates daily and monthly precipitation, particularly for the middle sub-basin in the non-monsoon season. Furthermore, the precision of TMPA precipitation estimates declines with decreasing altitude at both the grid and sub-basin scales. The study also revealed that TMPA precipitation estimates are more accurate in the upstream part of the basin than in the downstream part. The detection capability of daily TMPA precipitation improves with increasing altitude for drizzle events, but decreases during the non-monsoon and monsoon seasons when capturing moderate and heavy rain events, respectively. TMPA precipitation estimates were more accurate during the rainy season than during the dry season in all scenarios investigated. The analyses suggest a need for a better precipitation estimation algorithm and for extensive accuracy verification against terrestrial precipitation measurements to capture the different types of rain events more reliably over the sub-humid tropical regions of India.
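A minimal sketch of the contingency-table skill scores commonly used for rain-detection evaluation (probability of detection, false alarm ratio, critical success index); the hit/miss/false-alarm counts below are illustrative and are not values from this study.

```python
def detection_skill(hits, misses, false_alarms):
    """Standard 2x2 contingency-table skill scores for event detection."""
    pod = hits / (hits + misses)               # probability of detection
    far = false_alarms / (hits + false_alarms) # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi

# Hypothetical daily rain/no-rain counts for one grid point
pod, far, csi = detection_skill(hits=820, misses=180, false_alarms=240)
print(f"POD = {pod:.2f}, FAR = {far:.2f}, CSI = {csi:.2f}")
```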
Hong, Keum-Shik; Khan, Muhammad Jawad
2017-01-01
In this article, non-invasive hybrid brain-computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain-computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.
Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction
Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose
2017-01-01
Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor (GBLUP, GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415
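To make the two kernel choices concrete, the sketch below builds a linear GBLUP-style relationship kernel and a Gaussian kernel from a simulated, centered marker matrix. The genotype coding, bandwidth h, and distance scaling are assumptions for illustration rather than the exact formulation used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.choice([-1.0, 0.0, 1.0], size=(50, 500))   # hypothetical genotype codes (individuals x markers)
Z -= Z.mean(axis=0)                                 # center marker columns

# Linear GBLUP-style genomic relationship kernel: G = ZZ'/p
G_linear = Z @ Z.T / Z.shape[1]

# Gaussian kernel: K = exp(-h * D^2 / median(D^2)), D = Euclidean marker distance
sq_dist = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
h = 1.0                                             # bandwidth parameter (assumed)
G_gauss = np.exp(-h * sq_dist / np.median(sq_dist[sq_dist > 0]))

print(G_linear.shape, G_gauss.shape)                # both (50, 50)
```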
Reversing the Course of Forgetting
White, K. Geoffrey; Brown, Glenn S
2011-01-01
Forgetting functions were generated for pigeons in a delayed matching-to-sample task, in which accuracy decreased with increasing retention-interval duration. In baseline training with dark retention intervals, accuracy was high overall. Illumination of the experimental chamber by a houselight during the retention interval impaired performance accuracy by increasing the rate of forgetting. In novel conditions, the houselight was lit at the beginning of a retention interval and then turned off partway through the retention interval. Accuracy was low at the beginning of the retention interval and then increased later in the interval. Thus the course of forgetting was reversed. Such a dissociation of forgetting from the passage of time is consistent with an interference account in which attention or stimulus control switches between the remembering task and extraneous events. PMID:21909163
Multivariate pattern analysis for MEG: A comparison of dissimilarity measures.
Guggenmos, Matthias; Sterzer, Philipp; Cichy, Radoslaw Martin
2018-06-01
Multivariate pattern analysis (MVPA) methods such as decoding and representational similarity analysis (RSA) are growing rapidly in popularity for the analysis of magnetoencephalography (MEG) data. However, little is known about the relative performance and characteristics of the specific dissimilarity measures used to describe differences between evoked activation patterns. Here we used a multisession MEG data set to qualitatively characterize a range of dissimilarity measures and to quantitatively compare them with respect to decoding accuracy (for decoding) and between-session reliability of representational dissimilarity matrices (for RSA). We tested dissimilarity measures from a range of classifiers (Linear Discriminant Analysis - LDA, Support Vector Machine - SVM, Weighted Robust Distance - WeiRD, Gaussian Naïve Bayes - GNB) and distances (Euclidean distance, Pearson correlation). In addition, we evaluated three key processing choices: 1) preprocessing (noise normalisation, removal of the pattern mean), 2) weighting decoding accuracies by decision values, and 3) computing distances in three different partitioning schemes (non-cross-validated, cross-validated, within-class-corrected). Four main conclusions emerged from our results. First, appropriate multivariate noise normalization substantially improved decoding accuracies and the reliability of dissimilarity measures. Second, LDA, SVM and WeiRD yielded high peak decoding accuracies and nearly identical time courses. Third, while using decoding accuracies for RSA was markedly less reliable than continuous distances, this disadvantage was ameliorated by decision-value-weighting of decoding accuracies. Fourth, the cross-validated Euclidean distance provided unbiased distance estimates and highly replicable representational dissimilarity matrices. Overall, we strongly advise the use of multivariate noise normalisation as a general preprocessing step, recommend LDA, SVM and WeiRD as classifiers for decoding and highlight the cross-validated Euclidean distance as a reliable and unbiased default choice for RSA. Copyright © 2018 Elsevier Inc. All rights reserved.
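A minimal sketch of the cross-validated Euclidean distance recommended above: the condition difference is estimated on two independent data partitions so that noise does not bias the distance upward. The sensor patterns below are simulated, and the function is an illustration rather than the authors' implementation.

```python
import numpy as np

def cv_euclidean(a_train, b_train, a_test, b_test):
    """Cross-validated squared Euclidean distance between conditions A and B."""
    return np.dot(a_train - b_train, a_test - b_test)

rng = np.random.default_rng(0)
true_diff = rng.normal(size=64)                     # hypothetical pattern difference across 64 sensors
a1, a2 = true_diff + rng.normal(size=64), true_diff + rng.normal(size=64)
b1, b2 = rng.normal(size=64), rng.normal(size=64)

print("cross-validated:", cv_euclidean(a1, b1, a2, b2))
print("non-cross-validated:", np.sum((a1 - b1) ** 2))   # biased upward by noise
```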
Atlas of computerized blood flow analysis in bone disease.
Gandsman, E J; Deutsch, S D; Tyson, I B
1983-11-01
The role of computerized blood flow analysis in routine bone scanning is reviewed. Cases illustrating the technique include proven diagnoses of toxic synovitis, Legg-Perthes disease, arthritis, avascular necrosis of the hip, fractures, benign and malignant tumors, Paget's disease, cellulitis, osteomyelitis, and shin splints. Several examples also show the use of the technique in monitoring treatment. The use of quantitative data from the blood flow, bone uptake phase, and static images suggests specific diagnostic patterns for each of the diseases presented in this atlas. Thus, this technique enables increased accuracy in the interpretation of the radionuclide bone scan.
KASCADE-Grande: Composition studies in the view of the post-LHC hadronic interaction models
NASA Astrophysics Data System (ADS)
Haungs, A.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Pierro, F. Di; Doll, P.; Engel, R.; Fuhrmann, D.; Gherghel-Lascu, A.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Heck, D.; Hörandel, J. R.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.
2017-06-01
The KASCADE-Grande experiment has significantly contributed to the current knowledge about the energy spectrum and composition of cosmic rays for energies between the knee and the ankle. Meanwhile, post-LHC versions of the hadronic interaction models are available and are used to interpret the entire data set of KASCADE-Grande. In addition, a new, combined analysis of both arrays, KASCADE and Grande, was developed, significantly increasing the accuracy of the shower observables. First results of the new analysis with the entire data set of the KASCADE-Grande experiment will be the focus of this contribution.
Pfaff, Miles J; Steinbacher, Derek M
2016-03-01
Three-dimensional analysis and planning is a powerful tool in plastic and reconstructive surgery, enabling improved diagnosis, patient education and communication, and intraoperative transfer to achieve the best possible results. Three-dimensional planning can increase efficiency and accuracy, and entails five core components: (1) analysis, (2) planning, (3) virtual surgery, (4) three-dimensional printing, and (5) comparison of planned to actual results. The purpose of this article is to provide an overview of three-dimensional virtual planning and to provide a framework for applying these systems to clinical practice. Therapeutic, V.
NASA Technical Reports Server (NTRS)
Lebiedzik, Catherine
1995-01-01
Development of design tools to furnish optimal acoustic environments for lightweight aircraft demands the ability to simulate the acoustic system on a workstation. In order to form an effective mathematical model of the phenomena at hand, we have begun by studying the propagation of acoustic waves inside closed spherical shells. Using a fully-coupled fluid-structure interaction model based upon variational principles, we have written a finite element analysis program and are in the process of examining several test cases. Future investigations are planned to increase model accuracy by incorporating non-linear and viscous effects.
Analysis of radioactive strontium-90 in food by Čerenkov liquid scintillation counting.
Pan, Jingjing; Emanuele, Kathryn; Maher, Eileen; Lin, Zhichao; Healey, Stephanie; Regan, Patrick
2017-08-01
A simple liquid scintillation counting method using DGA/TRU resins for removal of matrix/radiometric interferences, Čerenkov counting for measuring (90)Y, and EDXRF for quantifying Y recovery was validated for analyzing (90)Sr in various foods. Analysis of samples containing energetic β emitters required using TRU resin to avoid false detection and positive bias. An additional 34% increase in Y recovery was obtained by stirring the resin while eluting Y with H2C2O4. The method showed acceptable accuracy (±10%), precision (10%), and detectability (~0.09 Bq/kg). Published by Elsevier Ltd.
Performance Evaluation and Analysis for Gravity Matching Aided Navigation.
Wu, Lin; Wang, Hubiao; Chai, Hua; Zhang, Lu; Hsu, Houtse; Wang, Yong
2017-04-05
Simulation tests were accomplished in this paper to evaluate the performance of gravity matching aided navigation (GMAN). Four essential factors were focused in this study to quantitatively evaluate the performance: gravity database (DB) resolution, fitting degree of gravity measurements, number of samples in matching, and gravity changes in the matching area. Marine gravity anomaly DB derived from satellite altimetry was employed. Actual dynamic gravimetry accuracy and operating conditions were referenced to design the simulation parameters. The results verified that the improvement of DB resolution, gravimetry accuracy, number of measurement samples, or gravity changes in the matching area generally led to higher positioning accuracies, while the effects of them were different and interrelated. Moreover, three typical positioning accuracy targets of GMAN were proposed, and the conditions to achieve these targets were concluded based on the analysis of several different system requirements. Finally, various approaches were provided to improve the positioning accuracy of GMAN.
[Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].
Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie
2013-11-01
To improve the accuracy of quantitative AES analysis, we combined XPS with AES and studied how to reduce the error of AES quantification. Pt-Co, Cu-Au and Cu-Ag binary alloy thin films were selected as samples, and XPS results were used to correct the AES quantification by adjusting the Auger relative sensitivity factors so that the two techniques gave consistent compositions. The accuracy of AES quantification with the revised sensitivity factors was then verified on additional samples with different composition ratios, and the results showed that the corrected relative sensitivity factors reduce the error of quantitative AES analysis to less than 10%. Peak definition is difficult in the integral form of the AES spectrum because choosing the starting and ending points of the characteristic Auger peak area is highly uncertain. To simplify the analysis, we therefore also processed the data in differential form, quantified on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and again verified the accuracy on samples with different composition ratios. In this case the analytical error of quantitative AES was reduced to less than 9%. These results show that the accuracy of quantitative AES analysis can be greatly improved by using XPS to correct the Auger sensitivity factors, since matrix effects are then taken into account. The good consistency obtained demonstrates the feasibility of this method.
Guild, Georgia E.; Stangoulis, James C. R.
2016-01-01
Within the HarvestPlus program there are many collaborators currently using X-Ray Fluorescence (XRF) spectroscopy to measure Fe and Zn in their target crops. In India, five HarvestPlus wheat collaborators have laboratories that conduct this analysis and their throughput has increased significantly. The benefits of using XRF are its ease of use, minimal sample preparation and high throughput analysis. The lack of commercially available calibration standards has led to a need for alternative calibration arrangements for many of the instruments. Consequently, the majority of instruments have either been installed with an electronic transfer of an original grain calibration set developed by a preferred lab, or a locally supplied calibration. Unfortunately, neither of these methods has been entirely successful. The electronic transfer is unable to account for small variations between the instruments, whereas the use of a locally provided calibration set is heavily reliant on the accuracy of the reference analysis method, which is particularly difficult to achieve when analyzing low levels of micronutrient. Consequently, we have developed a calibration method that uses non-matrix matched glass disks. Here we present the validation of this method and show this calibration approach can improve the reproducibility and accuracy of whole grain wheat analysis on 5 different XRF instruments across the HarvestPlus breeding program. PMID:27375644
Solianik, Rima; Satas, Andrius; Mickeviciene, Dalia; Cekanauskaite, Agne; Valanciene, Dovile; Majauskiene, Daiva; Skurvydas, Albertas
2018-06-01
This study aimed to explore the effect of a prolonged speed-accuracy motor task on indicators of psychological, cognitive, psychomotor and motor function. Ten young men aged 21.1 ± 1.0 years performed a fast and accurate reaching-movement task and a control task. Both tasks were performed for 2 h. Despite decreased motivation and increased perception of effort as well as subjective feelings of fatigue, speed-accuracy motor task performance improved during the whole period of task execution. After the motor task, increased working memory function and prefrontal cortex oxygenation at rest and during conflict detection, and decreased efficiency of incorrect response inhibition and visuomotor tracking, were observed. The speed-accuracy motor task increased the amplitude of motor-evoked potentials, while grip strength was not affected. These findings demonstrate that, to sustain performance of a 2-h speed-accuracy task under conditions of self-reported fatigue, task-relevant functions are maintained or even improved, whereas less critical functions are impaired.
NASA Astrophysics Data System (ADS)
Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang
2016-10-01
To achieve higher accuracy in routine resistance measurement without increasing the complexity and cost of the system circuit used in existing methods, this paper presents a novel method that exploits a shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by a sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and the accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can effectively enhance the measuring accuracy by almost one order of magnitude and reduce the root mean square error by a factor of 3.75 under the same measurement conditions. Experimental results show that the novel method significantly improves the measurement accuracy of resistance without increasing the system cost or circuit complexity, which makes it valuable for application in electronic instruments.
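A small simulation of the oversampling idea, assuming a hypothetical 8-bit quantizer and a sawtooth dither spanning one least-significant bit: averaging many samples recovers resolution below the quantization step. This is only a sketch of the principle, not the paper's measurement circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.40137                    # hypothetical normalized sensor voltage
lsb = 1.0 / 256                         # 8-bit quantizer step

def measure(n_samples):
    """Quantize n dithered samples, average, and remove the known dither mean."""
    dither = (np.arange(n_samples) % 16) / 16 * lsb   # sawtooth spanning one LSB
    noise = rng.normal(scale=lsb / 4, size=n_samples)
    quantized = np.round((true_value + dither + noise) / lsb) * lsb
    return quantized.mean() - dither.mean()

for n in (1, 16, 256):
    err = abs(measure(n) - true_value)
    print(f"N = {n:4d}: |error| = {err:.6f}")   # error shrinks as N grows
```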
Orbit Determination Accuracy for Comets on Earth-Impacting Trajectories
NASA Technical Reports Server (NTRS)
Kay-Bunnell, Linda
2004-01-01
The results presented show the level of orbit determination accuracy obtainable for long-period comets discovered approximately one year before collision with Earth. Preliminary orbits are determined from simulated observations using Gauss' method. Additional measurements are incorporated to improve the solution through the use of a Kalman filter, and include non-gravitational perturbations due to outgassing. Comparisons between observatories in several different circular heliocentric orbits show that observatories in orbits with radii less than 1 AU result in increased orbit determination accuracy for short tracking durations due to increased parallax per unit time. However, an observatory at 1 AU will perform similarly if the tracking duration is increased, and accuracy is significantly improved if additional observatories are positioned at the Sun-Earth Lagrange points L3, L4, or L5. A single observatory at 1 AU capable of both optical and range measurements yields the highest orbit determination accuracy in the shortest amount of time when compared to other systems of observatories.
Evaluation of the accuracy of GPS as a method of locating traffic collisions.
DOT National Transportation Integrated Search
2004-06-01
The objectives of this study were to determine the accuracy of GPS units as a traffic crash location tool, evaluate the accuracy of the location data obtained using the GPS units, and determine the largest sources of any errors found. The analysis s...
AVHRR channel selection for land cover classification
Maxwell, S.K.; Hoffer, R.M.; Chapman, P.L.
2002-01-01
Mapping land cover of large regions often requires processing of satellite images collected from several time periods at many spectral wavelength channels. However, manipulating and processing large amounts of image data increases the complexity and time, and hence the cost, that it takes to produce a land cover map. Very few studies have evaluated the importance of individual Advanced Very High Resolution Radiometer (AVHRR) channels for discriminating cover types, especially the thermal channels (channels 3, 4 and 5). Studies rarely perform a multi-year analysis to determine the impact of inter-annual variability on the classification results. We evaluated 5 years of AVHRR data using combinations of the original AVHRR spectral channels (1-5) to determine which channels are most important for cover type discrimination, yet stabilize inter-annual variability. Particular attention was placed on the channels in the thermal portion of the spectrum. Fourteen cover types over the entire state of Colorado were evaluated using a supervised classification approach on all two-, three-, four- and five-channel combinations for seven AVHRR biweekly composite datasets covering the entire growing season for each of 5 years. Results show that all three of the major portions of the electromagnetic spectrum represented by the AVHRR sensor are required to discriminate cover types effectively and stabilize inter-annual variability. Of the two-channel combinations, channels 1 (red visible) and 2 (near-infrared) had, by far, the highest average overall accuracy (72.2%), yet the inter-annual classification accuracies were highly variable. Including a thermal channel (channel 4) significantly increased the average overall classification accuracy by 5.5% and stabilized interannual variability. Each of the thermal channels gave similar classification accuracies; however, because of the problems in consistently interpreting channel 3 data, either channel 4 or 5 was found to be a more appropriate choice. Substituting the thermal channel with a single elevation layer resulted in equivalent classification accuracies and inter-annual variability.
Leveraging 3D-HST Grism Redshifts to Quantify Photometric Redshift Performance
NASA Astrophysics Data System (ADS)
Bezanson, Rachel; Wake, David A.; Brammer, Gabriel B.; van Dokkum, Pieter G.; Franx, Marijn; Labbé, Ivo; Leja, Joel; Momcheva, Ivelina G.; Nelson, Erica J.; Quadri, Ryan F.; Skelton, Rosalind E.; Weiner, Benjamin J.; Whitaker, Katherine E.
2016-05-01
We present a study of photometric redshift accuracy in the 3D-HST photometric catalogs, using 3D-HST grism redshifts to quantify and dissect trends in redshift accuracy for galaxies brighter than JH_IR = 24 with an unprecedented and representative high-redshift galaxy sample. We find an average scatter of 0.0197 ± 0.0003(1 + z) in the Skelton et al. photometric redshifts. Photometric redshift accuracy decreases with magnitude and redshift, but does not vary monotonically with color or stellar mass. The 1σ scatter lies between 0.01 and 0.03(1 + z) for galaxies of all masses and colors below z < 2.5 (for JH_IR < 24), with the exception of a population of very red (U - V > 2), dusty star-forming galaxies for which the scatter increases to ~0.1(1 + z). We find that photometric redshifts depend significantly on galaxy size; the largest galaxies at fixed magnitude have photo-zs with up to ~30% more scatter and ~5 times the outlier rate. Although the overall photometric redshift accuracy for quiescent galaxies is better than that for star-forming galaxies, scatter depends more strongly on magnitude and redshift than on galaxy type. We verify these trends using the redshift distributions of close pairs and extend the analysis to fainter objects, where photometric redshift errors further increase to ~0.046(1 + z) at H_F160W = 26. We demonstrate that photometric redshift accuracy is strongly filter dependent and quantify the contribution of multiple filter combinations. We evaluate the widths of redshift probability distribution functions and find that error estimates are underestimated by a factor of ~1.1-1.6, but that uniformly broadening the distribution does not adequately account for fitting outliers. Finally, we suggest possible applications of these data in planning for current and future surveys and simulate photometric redshift performance in the Large Synoptic Survey Telescope, Dark Energy Survey (DES), and combined DES and Vista Hemisphere surveys.
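For reference, scatter figures like those quoted above are commonly summarized with the normalized median absolute deviation of (z_phot - z_spec)/(1 + z_spec) and an outlier fraction; the sketch below uses simulated redshifts, not 3D-HST data.

```python
import numpy as np

def photoz_stats(z_phot, z_spec, outlier_cut=0.1):
    """Scatter (sigma_NMAD) and outlier fraction of photometric redshifts."""
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    sigma_nmad = 1.48 * np.median(np.abs(dz - np.median(dz)))
    outlier_frac = np.mean(np.abs(dz) > outlier_cut)
    return sigma_nmad, outlier_frac

rng = np.random.default_rng(2)
z_spec = rng.uniform(0.5, 2.5, size=2000)                       # hypothetical reference redshifts
z_phot = z_spec + 0.02 * (1 + z_spec) * rng.standard_normal(2000)  # 2% (1+z) noise

sigma, f_out = photoz_stats(z_phot, z_spec)
print(f"sigma_NMAD = {sigma:.4f} (1+z), outlier fraction = {f_out:.3f}")
```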
[Development and application of morphological analysis method in Aspergillus niger fermentation].
Tang, Wenjun; Xia, Jianye; Chu, Ju; Zhuang, Yingping; Zhang, Siliang
2015-02-01
Filamentous fungi are widely used in industrial fermentation. Particular fungal morphology acts as a critical index for a successful fermentation. To break the bottleneck of morphological analysis, we have developed a reliable method for fungal morphological analysis. By this method, we can prepare hundreds of pellet samples simultaneously and obtain quantitative morphological information at large scale quickly. This method can largely increase the accuracy and reliability of morphological analysis result. Based on that, the studies of Aspergillus niger morphology under different oxygen supply conditions and shear rate conditions were carried out. As a result, the morphological responding patterns of A. niger morphology to these conditions were quantitatively demonstrated, which laid a solid foundation for the further scale-up.
Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment
NASA Technical Reports Server (NTRS)
Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.
2012-01-01
Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The accuracy of ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface was found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.
Diagnostic accuracy of physical examination for anterior knee instability: a systematic review.
Leblanc, Marie-Claude; Kowalczuk, Marcin; Andruszkiewicz, Nicole; Simunovic, Nicole; Farrokhyar, Forough; Turnbull, Travis Lee; Debski, Richard E; Ayeni, Olufemi R
2015-10-01
Determining diagnostic accuracy of Lachman, pivot shift and anterior drawer tests versus gold standard diagnosis (magnetic resonance imaging or arthroscopy) for anterior cruciate ligament (ACL) insufficiency cases. Secondarily, evaluating effects of: chronicity, partial rupture, awake versus anaesthetized evaluation. Searching MEDLINE, EMBASE and PubMed identified studies on diagnostic accuracy for ACL insufficiency. Study identification and data extraction were performed in duplicate. Quality assessment used the QUADAS tool, and statistical analyses were completed for pooled sensitivity and specificity. Eight studies were included. Given insufficient data, pooled analysis was only possible for sensitivity of the Lachman and pivot shift tests. During awake evaluation, sensitivity for the Lachman test was 89 % (95 % CI 0.76, 0.98) for all rupture types, 96 % (95 % CI 0.90, 1.00) for complete ruptures and 68 % (95 % CI 0.25, 0.98) for partial ruptures. For pivot shift in awake evaluation, results were 79 % (95 % CI 0.63, 0.91) for all rupture types, 86 % (95 % CI 0.68, 0.99) for complete ruptures and 67 % (95 % CI 0.47, 0.83) for partial ruptures. Decreased sensitivity of Lachman and pivot shift tests for partial rupture cases and for awake patients raised suspicions regarding the accuracy of these tests for diagnosis of ACL insufficiency. This may lead to further research aiming to improve the understanding of the true accuracy of these physical diagnostic tests and increase the reliability of clinical investigation for this pathology. IV.
NASA Astrophysics Data System (ADS)
Tseng, Chien-Hsun
2018-06-01
This paper aims to develop a multidimensional wave digital filtering network for predicting the static and dynamic behavior of composite laminates based on the first-order shear deformation theory (FSDT). The resulting network is thus an integrated platform that can compute not only the free vibration but also the bending deflection of moderately thick symmetric laminated plates with low side-to-thickness ratios (≤20). Safeguarded by the Courant-Friedrichs-Lewy stability condition, made least restrictive by means of an optimization technique, the present method offers high numerical accuracy, stability and efficiency over a wide range of modulus ratios for FSDT laminated plates. Instead of using a constant shear correction factor (SCF), which limits the numerical accuracy of the bending deflection, an optimum SCF is sought by minimizing the ratio of change in the transverse shear energy. In this way, the method predicts comparably accurate results for certain bending-deflection cases. Extensive simulation results for the prediction of maximum bending deflection demonstrate that the present method outperforms those based on the higher-order shear deformation and layerwise plate theories. To the best of our knowledge, this is the first work to show that an optimal selection of the SCF can significantly increase the accuracy of FSDT-based laminate analysis, especially in comparison with the higher-order theory, which requires no such correction. The most accurate overall solutions are compared with those of the 3D elasticity equilibrium formulation.
The Quality and Accuracy of Mobile Apps to Prevent Driving After Drinking Alcohol.
Wilson, Hollie; Stoyanov, Stoyan R; Gandabhai, Shailen; Baldwin, Alexander
2016-08-08
Driving after the consumption of alcohol represents a significant problem globally. Individual prevention countermeasures such as personalized mobile apps aimed at preventing such behavior are widespread, but there is little research on their accuracy and evidence base. There has been no known assessment investigating the quality of such apps. This study aimed to determine the quality and accuracy of apps for drink driving prevention by conducting a review and evaluation of relevant mobile apps. A systematic app search was conducted following PRISMA guidelines. App quality was assessed using the Mobile App Rating Scale (MARS). Apps providing blood alcohol calculators (hereafter "calculators") were reviewed against current alcohol advice for accuracy. A total of 58 apps (30 iOS and 28 Android) met inclusion criteria and were included in the final analysis. Drink driving prevention apps had significantly lower engagement and overall quality scores than alcohol management apps. Most calculators provided conservative blood alcohol content (BAC) time-until-sober calculations. None of the apps had been evaluated to determine their efficacy in changing either drinking or driving behaviors. This novel study demonstrates that most drink driving prevention apps are not engaging and lack accuracy. They could be improved by increasing engagement features, such as gamification. Further research should examine the context and motivations for using apps to prevent driving after drinking in at-risk populations. Development of drink driving prevention apps should incorporate evidence-based information and guidance, lacking in current apps.
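As an illustration of what a blood alcohol "calculator" typically does, the sketch below applies the classic Widmark formula with population-average constants; it is not the algorithm of any specific app reviewed here, and real BAC varies substantially between individuals.

```python
def estimate_bac(alcohol_grams, weight_kg, hours_since_first_drink, sex="male"):
    """Widmark-style BAC estimate in % (g per 100 mL); illustrative only."""
    r = 0.68 if sex == "male" else 0.55          # Widmark body-water distribution factor
    beta = 0.015                                 # assumed elimination rate, % per hour
    peak = alcohol_grams / (r * weight_kg * 10)  # convert g/kg (per mille) to %
    return max(0.0, peak - beta * hours_since_first_drink)

# e.g. two 14 g standard drinks, 80 kg male, 2 h after starting to drink
print(f"estimated BAC: {estimate_bac(28, 80, 2):.3f}%")
```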
Information extraction with object based support vector machines and vegetation indices
NASA Astrophysics Data System (ADS)
Ustuner, Mustafa; Abdikan, Saygin; Balik Sanli, Fusun
2016-07-01
Information extraction from remote sensing data is important for policy and decision makers, as the extracted information provides base layers for many real-world applications. Classification of remotely sensed data is one of the most common methods of extracting information; however, it remains challenging because several factors affect classification accuracy. Resolution of the imagery, the number and homogeneity of land cover classes, the purity of training data and the characteristics of the adopted classifiers are just some of these factors. Object-based image classification has some advantages over pixel-based classification for high-resolution images, since it uses geometry and structure information in addition to spectral information. Vegetation indices are also commonly used in the classification process since they provide additional spectral information for vegetation, forestry and agricultural areas. In this study, the impacts of the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Red Edge Index (NDRE) on the classification accuracy of RapidEye imagery were investigated. Object-based Support Vector Machines were implemented for the classification of crop types in the study area, located in the Aegean region of Turkey. Results demonstrated that incorporating NDRE increased the overall classification accuracy from 79.96% to 86.80%, whereas NDVI decreased it from 79.96% to 78.90%. Moreover, the results show that object-based classification with RapidEye data gives promising results for crop-type mapping and analysis.
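A minimal sketch of the two vegetation indices used as additional classification features, computed per pixel from reflectance bands; the band values are toy numbers, not RapidEye data, and the stacking step merely illustrates how index layers can be appended to the classifier input.

```python
import numpy as np

def normalized_difference(band_a, band_b, eps=1e-9):
    """Generic normalized difference index, e.g. NDVI or NDRE."""
    return (band_a - band_b) / (band_a + band_b + eps)

# Toy 2x2-pixel reflectance bands
nir = np.array([[0.45, 0.50], [0.40, 0.55]])
red = np.array([[0.08, 0.10], [0.12, 0.07]])
red_edge = np.array([[0.20, 0.22], [0.25, 0.18]])

ndvi = normalized_difference(nir, red)         # (NIR - red) / (NIR + red)
ndre = normalized_difference(nir, red_edge)    # (NIR - red edge) / (NIR + red edge)
features = np.dstack([nir, red, red_edge, ndvi, ndre])  # stack as classifier inputs
print(ndvi.round(2), ndre.round(2), features.shape, sep="\n")
```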
Van Laere, Koen; Clerinx, Kristien; D'Hondt, Eduard; de Groot, Tjibbe; Vandenberghe, Wim
2010-04-01
Striatal dopamine D(2) receptor (D2R) PET has been proposed to differentiate between Parkinson disease (PD) and multiple-system atrophy with predominant parkinsonism (MSA-P). However, considerable overlap in striatal D(2) binding may exist between PD and MSA-P. It has been shown that imaging of neuronal activity, as determined by metabolism or perfusion, can also help distinguish PD from MSA-P. We investigated whether the differential diagnostic value of (11)C-raclopride PET could be improved by dynamic scan analysis combining D2R binding and regional tracer influx. (11)C-raclopride PET was performed in 9 MSA-P patients (mean age +/- SD, 56.2 +/- 10.2 y; disease duration, 2.9 +/- 0.8 y; median Hoehn-Yahr score, 3), 10 PD patients (mean age +/- SD, 65.7 +/- 8.1 y; disease duration, 3.3 +/- 1.5 y; median Hoehn-Yahr score, 1.5), and 10 healthy controls (mean age +/- SD, 61.6 +/- 6.5 y). Diagnosis was obtained after prolonged follow-up (MSA-P, 5.5 +/- 2.0 y; PD, 6.0 +/- 2.3 y) using validated clinical criteria. Spatially normalized parametric images of binding potential (BP) and local influx ratio (R(1) = K(1)/K'(1)) of (11)C-raclopride were obtained using a voxelwise reference tissue model with occipital cortex as reference region. Stepwise forward discriminant analysis with cross-validation, with and without the inclusion of regional R(1) values, was performed using a predefined volume-of-interest template. Using conventional BP values, we correctly classified 65.5% (all values given with cross-validation) of 29 cases only. The combination of BP and R(1) information increased discrimination accuracy to 79.3%. When healthy controls were not included and patients only were considered, BP information alone discriminated PD and MSA-P in 84.2% of cases, but the combination with R(1) data increased accuracy to 100%. Discriminant analysis using combined striatal D2R BP and cerebral influx ratio information of a single dynamic (11)C-raclopride PET scan distinguishes MSA-P and PD patients with high accuracy and is superior to conventional methods of striatal D2R binding analysis.
Mechanisms of cognitive control in cadet pilots.
Gordon, Shirley; Getter, Nir; Oz, Idit; Garbi, Dror; Todder, Doron
2016-01-01
Optimizing the performance of aviators while minimizing the risks arising from exposure to extreme environments, both external and internal, is one of the principles guiding the Israeli Air Force. Young cadets in particular are considered an "at risk" population because they have no flight experience in the first stages of training and are therefore subjects for investigation. In this study, we investigated the cognitive performance of young cadet pilots at different hours of the day. Thirty-nine cadets were randomly divided into 3 groups (morning, late afternoon, and late evening) and tested on a cognitive battery that contained both simple performance measures and complex measures such as dual-tasking and a mental rotation test. The analysis indicated a significant effect of time of day on the participants' accuracy [F(2, 32) = 3.4, p < 0.05]. In post hoc pairwise t-tests, we found a near-significant (p = 0.52) increase in participants' accuracy and a significant increase [F(2, 32) = 4.5, p < 0.05] in participants' reaction time in the late evening group compared with the morning group. We also found a differential effect of dual tasking on accuracy at the different times of day [F(2, 33) = 5.6, p < 0.01]. In a post hoc analysis, accuracy in the 1-back task deteriorated from the single-task to the dual-task condition only in the morning group (p < 0.05), not in the late evening or late afternoon group. This 'trade-off' behavior in the late evening group, slowing down in order to perform better, may result from a voluntary top-down control mechanism activated at night. The combination of feeling fatigued and the understanding that complex tasks are more resource consuming caused the cadets to check and double-check before answering, whereas the morning group felt alert and vital and acted more reactively, in an impulsive manner that led to inaccurate performance.
Südmeyer, Martin; Antke, Christina; Zizek, Tanja; Beu, Markus; Nikolaus, Susanne; Wojtecki, Lars; Schnitzler, Alfons; Müller, Hans-Wilhelm
2011-05-01
In vivo molecular imaging of pre- and postsynaptic nigrostriatal neuronal degeneration and sympathetic cardiac innervation with SPECT is used to distinguish idiopathic Parkinson disease (PD) from atypical parkinsonian disorder (APD). However, the diagnostic accuracy of these imaging approaches as stand-alone procedures is often unsatisfying. The aim of this study was therefore to evaluate to which extent diagnostic accuracy can be increased by their combined use together with a multidimensional statistical algorithm. The SPECT radiotracers (123)I-(S)-2-hydroxy-3-iodo-6-methoxy-N-[1-ethyl-2-pyrrodinyl)-methyl]benzamide (IBZM), (123)I-N-ω-fluoropropyl-2β-carbomethoxy-3β-(4-iodophenyl)nortropan (FP-CIT), and meta-(123)I-iodobenzylguanidine (MIBG) were used to assess striatal postsynaptic D(2) receptor binding, striatal presynaptic dopamine transporter binding, and myocardial adrenergic innervation, respectively. Thirty-one PD and 17 APD patients were prospectively investigated. PD and APD diagnoses were established using consensus criteria and reevaluated after 37.4 ± 12.4 and 26 ± 11.6 mo in PD and APD, respectively. Test accuracy (TA) for PD-APD differentiation was computed for all logical (Boolean) combinations of imaging modalities by receiver-operating-characteristic analysis--that is, after multidimensional optimization of cutoff values. Analysis showed moderate TA for PD-APD differentiation using each molecular approach alone (IBZM, 79%; MIBG, 73%; and FP-CIT, 73%). For combined use, the highest TA resulted under the assumption that at least 2 of the 3 biologic markers had to be positive for APD using the following cutoff values: 1.46 or less for IBZM, less than 2.10 for FP-CIT, and greater than 1.43 for MIBG. This algorithm distinguished APD from PD with a sensitivity of 94%, specificity of 94% (TA, 94%), positive predictive value of 89%, and negative predictive value of 97%. Results suggest that the multidimensional combination of FP-CIT, IBZM, and MIBG scintigraphy is likely to significantly increase TA in differentiating PD from APD. The differential diagnosis of degenerative parkinsonism may thus be facilitated.
The science of visual analysis at extreme scale
NASA Astrophysics Data System (ADS)
Nowell, Lucy T.
2011-01-01
Driven by market forces and spanning the full spectrum of computational devices, computer architectures are changing in ways that present tremendous opportunities and challenges for data analysis and visual analytic technologies. Leadership-class high-performance computing systems will have as many as a million cores by 2020 and support 10-billion-way concurrency, while laptop computers are expected to have as many as 1,000 cores by 2015. At the same time, data of all types are increasing exponentially, and automated analytic methods are essential for all disciplines. Many existing analytic technologies do not scale to make full use of current platforms, and fewer still are likely to scale to the systems that will be operational by the end of this decade. Furthermore, on the new architectures and for data at extreme scales, validating the accuracy and effectiveness of analytic methods, including visual analysis, will be increasingly important.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar
With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning-based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that including, in addition to the parameters to be predicted (such as solar irradiance and power), further atmospheric state parameters that collectively define the weather situation as machine-learning input provides additional accuracy for the blended result. Functional analysis of variance shows that the error of each individual model has a substantial dependence on the weather situation. The machine-learning approach effectively reduces this situation-dependent error and thus produces more accurate results than conventional multi-model ensemble approaches based on simple equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
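As a rough illustration of the blending idea, the sketch below trains a regression model to combine several synthetic irradiance forecasts, with weather-state variables included as additional inputs so the learned combination can vary with the situation. This is a generic sketch on made-up data, not the authors' implementation; all names and values are hypothetical.

```python
# Hedged sketch of situation-dependent model blending: a regression model
# learns to combine several forecasts of irradiance, with extra
# atmospheric-state features as inputs. Synthetic data for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
truth = rng.uniform(0, 1000, n)                                   # "observed" irradiance (W/m^2)
forecasts = np.column_stack([truth + rng.normal(0, s, n) for s in (60, 80, 100)])  # three model forecasts
weather_state = rng.uniform(0, 1, (n, 2))                          # placeholder weather-state features
X = np.hstack([forecasts, weather_state])

# Train the blender on the first 800 samples, evaluate on the rest.
blender = GradientBoostingRegressor().fit(X[:800], truth[:800])
blend_err = np.abs(blender.predict(X[800:]) - truth[800:]).mean()
simple_avg_err = np.abs(forecasts[800:].mean(axis=1) - truth[800:]).mean()
print(f"blended MAE: {blend_err:.1f}, equal-weight MAE: {simple_avg_err:.1f}")
```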
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification
Rajagopal, Gayathri; Palaniswamy, Ramamoorthy
2015-01-01
This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal, multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion. The features used in feature-level fusion are derived from raw biometric data, which contain richer information than that available at the decision or matching-score level. Hence, information fused at the feature level is expected to yield improved recognition accuracy. However, feature-level fusion suffers from the curse of dimensionality; here, PCA (principal component analysis) is used to reduce the dimensionality of the high-dimensional feature sets. The proposed multimodal results were compared with other multimodal and monomodal approaches. In these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the proposed multimodal biometric system. The proposed algorithm is tested on a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database. PMID:26640813
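For orientation, the sketch below shows the general pattern of feature-level fusion followed by PCA and KNN classification: per-trait feature vectors are concatenated, reduced in dimensionality, and classified. The feature arrays are random placeholders standing in for extracted palmprint and iris features; this is not the paper's pipeline or parameter settings.

```python
# Hedged sketch of feature-level fusion + PCA + KNN. Feature extraction
# from palmprint/iris images is omitted; arrays below are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_samples_per_subject = 20, 10
labels = np.repeat(np.arange(n_subjects), n_samples_per_subject)
palm_features = rng.normal(labels[:, None], 1.0, (labels.size, 300))  # placeholder palmprint features
iris_features = rng.normal(labels[:, None], 1.0, (labels.size, 200))  # placeholder iris features

# Feature-level fusion: concatenate the two feature vectors per sample.
fused = np.hstack([palm_features, iris_features])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0, stratify=labels)
model = make_pipeline(PCA(n_components=50), KNeighborsClassifier(n_neighbors=3))
model.fit(X_train, y_train)
print("recognition accuracy:", model.score(X_test, y_test))
```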
Zhang, Wei; Peng, Gaoliang; Li, Chuanhao; Chen, Yuanhang; Zhang, Zhujun
2017-01-01
Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs) and uses wide kernels in the first convolutional layer to extract features and suppress high-frequency noise. Small convolutional kernels in the subsequent layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that the accuracy of CNNs currently applied to fault diagnosis is not very high. WDCNN not only achieves 100% classification accuracy on normal signals but also outperforms the state-of-the-art DNN model based on frequency features under different working loads and noisy environment conditions. PMID:28241451
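A minimal sketch of the wide-first-kernel idea is given below: a 1-D CNN whose first convolutional layer uses a long kernel with a large stride, followed by small-kernel layers. Layer counts, channel widths, and kernel sizes are illustrative assumptions rather than the paper's exact architecture, and the AdaBN domain-adaptation step is omitted.

```python
# Hedged sketch of a WDCNN-style 1-D CNN: a wide first-layer kernel followed
# by small-kernel layers. Hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class WDCNNSketch(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=16, padding=24),  # wide first-layer kernel
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),              # small kernels thereafter
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm1d(64), nn.ReLU(), nn.AdaptiveMaxPool1d(4),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(64 * 4, n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of 8 raw vibration segments of length 2048, one channel.
model = WDCNNSketch()
logits = model(torch.randn(8, 1, 2048))
print(logits.shape)  # torch.Size([8, 10])
```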
Feliciano, Rodrigo P; Shea, Michael P; Shanmuganayagam, Dhanansayan; Krueger, Christian G; Howell, Amy B; Reed, Jess D
2012-05-09
The 4-(dimethylamino)cinnamaldehyde (DMAC) assay is currently used to quantify proanthocyanidin (PAC) content in cranberry products. However, this method suffers from issues of accuracy and precision in the analysis and comparison of PAC levels across a broad range of cranberry products. Current use of procyanidin A2 as a standard leads to an underestimation of PAC content in certain cranberry products, especially those containing higher-molecular-weight PACs. To begin to address the issue of accuracy, a method for producing a cranberry PAC standard (c-PAC), derived from an extraction of cranberry press cake, was developed and evaluated. Use of the c-PAC standard to quantify PAC content in cranberry samples resulted in values that were 2.2 times higher than those determined with procyanidin A2. Increased accuracy is critical for estimating PAC content in relation to research on authenticity, efficacy, and bioactivity, especially in designing clinical trials to determine putative health benefits.
Braun, Niclas; Debener, Stefan; Sölle, Ariane; Kranczioch, Cornelia; Hildebrandt, Helmut
2015-01-01
Deficits in sustaining attention are common in various organic brain diseases. A recent study proposed self-alert training (SAT) as a technique to improve sustained attention. In the SAT, individuals learn to gain volitional control over their own state of arousal by means of electrodermal biofeedback. In this study, we investigated the behavioral, electrodermal, and electroencephalographic correlates of the SAT with a blinded, randomized, active-controlled pre-post study design. Sustained attention capacity was assessed with the Sustained Attention to Response Task (SART). The SAT resulted in strong phasic increases in skin conductance response (SCR), but endogenous control of SCR without feedback was problematic. Electroencephalogram analysis revealed a stronger alpha reduction during the SART for the SAT group than for the control group. Behaviorally, the SAT group performed more accurately and more slowly after the intervention than the control group. The study provides further evidence that SAT helps to maintain SART accuracy over prolonged periods of time. Whether this accuracy benefit is more related to sustained attention or to response inhibition is discussed.
Geoscience laser altimeter system-stellar reference system
NASA Astrophysics Data System (ADS)
Millar, Pamela S.; Sirota, J. Marcos
1998-01-01
GLAS is an EOS space-based laser altimeter being developed to profile the height of the Earth's ice sheets with ~15 cm single-shot accuracy from space under NASA's Mission to Planet Earth (MTPE). The primary science goal of GLAS is to determine whether the ice sheets are growing or diminishing, for climate change modeling. This is achieved by measuring ice sheet heights over Greenland and Antarctica to 1.5 cm/yr over 100 km × 100 km areas by crossover analysis (Zwally 1994). This measurement performance requires the instrument to determine the pointing of the laser beam to ~5 µrad (1 arcsecond), 1-sigma, with respect to the inertial reference frame. The GLAS design incorporates a stellar reference system (SRS) to relate the laser beam pointing angle to the star field with this accuracy. This is the first time a spaceborne laser altimeter has measured pointing to such high accuracy. The design for the stellar reference system combines an attitude determination system (ADS) with a laser reference system (LRS) to meet this requirement. The SRS approach and expected performance are described in this paper.
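A back-of-envelope calculation illustrates why arcsecond-level pointing knowledge matters for centimeter-level ice-sheet altimetry: a pointing-knowledge error displaces the laser footprint horizontally, and over a sloped surface that displacement maps into a height error. The orbit altitude and surface slope below are assumptions for illustration, not values from the abstract.

```python
# Back-of-envelope sketch of pointing error -> height error in laser altimetry.
# Assumptions (not from the abstract): a nominal ~600 km orbit altitude and a
# gently sloped ice-sheet surface.
import math

altitude_m = 600e3          # assumed nominal orbit altitude
pointing_err_rad = 5e-6     # ~1 arcsecond pointing knowledge error (quoted above)
surface_slope_deg = 1.0     # assumed ice-sheet surface slope

horizontal_shift_m = altitude_m * pointing_err_rad             # footprint mislocation
height_err_m = horizontal_shift_m * math.tan(math.radians(surface_slope_deg))

print(f"footprint shift ~{horizontal_shift_m:.1f} m, "
      f"height error over a {surface_slope_deg} deg slope ~{100 * height_err_m:.1f} cm")
# -> roughly a 3 m footprint shift and ~5 cm height error, comparable in size
#    to the accuracy goals quoted above.
```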
The advantages of the surface Laplacian in brain-computer interface research.
McFarland, Dennis J
2015-09-01
Brain-computer interface (BCI) systems frequently use signal processing methods, such as spatial filtering, to enhance performance. The surface Laplacian can reduce spatial noise and aid in the identification of sources. In BCI research, these two functions of the surface Laplacian correspond to prediction accuracy and signal orthogonality. In the present study, an off-line analysis of data from a sensorimotor-rhythm-based BCI task dissociated these functions of the surface Laplacian by comparing nearest-neighbor and next-nearest-neighbor Laplacian algorithms. The nearest-neighbor Laplacian produced signals that were more orthogonal, while the next-nearest-neighbor Laplacian produced signals that resulted in better accuracy. Both prediction and signal identification are important for BCI research. Better prediction of users' intent produces increased speed and accuracy of communication and control. Signal identification is important for ruling out the possibility of control by artifacts. Identifying the nature of the control signal is relevant both to understanding exactly what is being studied and in terms of usability for individuals with limited motor control. Copyright © 2014 Elsevier B.V. All rights reserved.
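For reference, a nearest-neighbor (Hjorth-style) surface Laplacian can be sketched as follows: each channel is re-referenced by subtracting the mean of its neighboring channels, which attenuates spatially broad signals. The channel layout and neighbor map below are hypothetical placeholders, not the montage used in the study.

```python
# Hedged sketch of a nearest-neighbor surface Laplacian for EEG.
# Channel layout and neighbor map are hypothetical placeholders.
import numpy as np

def surface_laplacian(eeg: np.ndarray, neighbours: dict[int, list[int]]) -> np.ndarray:
    """eeg: (n_channels, n_samples); neighbours: channel index -> neighbor indices."""
    filtered = eeg.copy()
    for ch, nbrs in neighbours.items():
        filtered[ch] = eeg[ch] - eeg[nbrs].mean(axis=0)  # subtract mean of neighbors
    return filtered

# Example with a hypothetical 5-channel layout centered on C3.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(5, 1000))                 # channels: [C3, FC3, CP3, C1, C5]
neighbours = {0: [1, 2, 3, 4]}                   # Laplacian computed for C3 only
laplacian_c3 = surface_laplacian(eeg, neighbours)[0]
print(laplacian_c3.shape)                        # (1000,)
```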