The Effects of High- and Low-Anxiety Training on the Anticipation Judgments of Elite Performers.
Alder, David; Ford, Paul R; Causer, Joe; Williams, A Mark
2016-02-01
We examined the effects of high- versus low-anxiety conditions during video-based training of anticipation judgments using international-level badminton players facing serves and the transfer to high-anxiety and field-based conditions. Players were assigned to a high-anxiety training (HA), low-anxiety training (LA) or control group (CON) in a pretraining-posttest design. In the pre- and posttest, players anticipated serves from video and on court under high- and low-anxiety conditions. In the video-based high-anxiety pretest, anticipation response accuracy was lower and final fixations shorter when compared with the low-anxiety pretest. In the low-anxiety posttest, HA and LA demonstrated greater accuracy of judgments and longer final fixations compared with pretest and CON. In the high-anxiety posttest, HA maintained accuracy when compared with the low-anxiety posttest, whereas LA had lower accuracy. In the on-court posttest, the training groups demonstrated greater accuracy of judgments compared with the pretest and CON.
Donegan, Ryan J; Stauffer, Anthony; Heaslet, Michael; Poliskie, Michael
Plantar plate pathology has gained noticeable attention in recent years as an etiology of lesser metatarsophalangeal joint pain. The heightened clinical awareness has led to the need for greater diagnostic imaging accuracy. Numerous reports have established the accuracy of both magnetic resonance imaging and ultrasonography for the diagnosis of plantar plate pathology. However, no conclusions have been made regarding which is the superior imaging modality. The present study reports a case series directly comparing high-resolution dynamic ultrasonography and magnetic resonance imaging. A multicenter retrospective comparison of magnetic resonance imaging versus high-resolution dynamic ultrasonography in evaluating plantar plate pathology with surgical confirmation was conducted. The sensitivity, specificity, and positive and negative predictive values for magnetic resonance imaging were 60%, 100%, 100%, and 33%, respectively. The overall diagnostic accuracy compared with the intraoperative findings was 66%. The sensitivity, specificity, and positive and negative predictive values for high-resolution dynamic ultrasound imaging were 100%, 100%, 100%, and 100%, respectively. The overall diagnostic accuracy compared with the intraoperative findings was 100%. Fisher's exact test comparing magnetic resonance imaging with high-resolution dynamic ultrasonography yielded p = .45; the difference was not statistically significant. High-resolution dynamic ultrasonography had greater accuracy than magnetic resonance imaging in diagnosing lesser metatarsophalangeal joint plantar plate pathology, although the difference was not statistically significant. The present case series suggests that high-resolution dynamic ultrasonography can be considered an equally accurate imaging modality for plantar plate pathology at a potential cost savings compared with magnetic resonance imaging.
Therefore, high-resolution dynamic ultrasonography warrants further investigation in a prospective study. Copyright © 2016 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
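The four diagnostic figures reported above follow directly from a 2×2 confusion table. A minimal sketch in Python, with hypothetical counts chosen only to reproduce the reported MRI percentages (they are not the study's raw data):

```python
# Diagnostic test metrics from a 2x2 confusion table. The counts below are
# illustrative values consistent with the reported MRI figures (60% sens,
# 100% spec, 100% PPV, ~33% NPV, ~66% accuracy), not the study's raw data.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV, NPV, accuracy) as fractions."""
    sens = tp / (tp + fn)                  # true positives among diseased
    spec = tn / (tn + fp)                  # true negatives among healthy
    ppv = tp / (tp + fp)                   # reliability of a positive call
    npv = tn / (tn + fn)                   # reliability of a negative call
    acc = (tp + tn) / (tp + fp + fn + tn)  # overall agreement with surgery
    return sens, spec, ppv, npv, acc

sens, spec, ppv, npv, acc = diagnostic_metrics(tp=6, fp=0, fn=4, tn=2)
print(f"sens={sens:.0%} spec={spec:.0%} ppv={ppv:.0%} npv={npv:.0%} acc={acc:.0%}")
```

The same function applied to the ultrasound column of the table would return 100% for every metric, since that modality produced no false calls against the intraoperative gold standard.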
ERIC Educational Resources Information Center
Zielinski, Katie; McLaughlin, T. F.; Derby, K. Mark
2012-01-01
The purpose of this study was to evaluate the effectiveness of cover, copy, and compare (CCC) on spelling accuracy for three high school aged students with learning disabilities. CCC is a student-managed procedure that teaches discrete skills through self-tutoring and error correction. The effectiveness of CCC was evaluated using a multiple…
NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Hu, Hai-xiang; Zhang, Zhi-yu; Zhang, Feng; Zheng, Li-gong; Zhang, Xue-jun
2017-08-01
A feasible way to improve the manufacturing efficiency of large reaction-bonded silicon carbide optics is to increase the processing accuracy in the grinding stage before polishing, which requires high-accuracy metrology. A swing arm profilometer (SAP) has been used to measure large optics during the grinding stage. A method has been developed for improving the measurement accuracy of SAP by using a capacitive probe and implementing calibrations. The experimental result, compared with an interferometer test, shows an accuracy of 0.068 μm root-mean-square (RMS), and maps reconstructed from 37 low-order Zernike terms show an accuracy of 0.048 μm RMS, demonstrating a strong capability to provide a major input for high-precision grinding.
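The headline RMS figure is the root-mean-square of the pointwise difference between the SAP map and the interferometer reference. A minimal sketch with synthetic surface maps (the array contents are made up; real maps would come from the two instruments):

```python
# Cross-checking one surface map against a reference by the RMS of their
# pointwise difference. Synthetic data: an interferometer "reference" map
# plus an SAP "measurement" carrying ~0.07 um of random disagreement.
import numpy as np

rng = np.random.default_rng(0)
interferometer = rng.normal(0.0, 0.5, size=(64, 64))          # reference map, um
sap = interferometer + rng.normal(0.0, 0.07, size=(64, 64))   # SAP map with error

residual = sap - interferometer
rms_error = np.sqrt(np.mean(residual ** 2))   # root-mean-square disagreement, um
print(f"RMS difference: {rms_error:.3f} um")
```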
NASA Astrophysics Data System (ADS)
Bai, Ting; Sun, Kaimin; Deng, Shiquan; Chen, Yan
2018-03-01
High-resolution image change detection is one of the key technologies of remote sensing application, and is of great significance for resource survey, environmental monitoring, precision agriculture, military mapping and battlefield environment detection. In this paper, for high-resolution satellite imagery, Random Forest (RF), Support Vector Machine (SVM), Deep Belief Network (DBN), and Adaboost models were established to verify the feasibility of different machine learning methods for change detection. To compare the detection accuracy of the four machine learning methods, we applied them to two high-resolution images. The results show that SVM has higher overall accuracy with small samples than RF, Adaboost, and DBN for both binary and from-to change detection. As the number of samples increases, RF achieves higher overall accuracy than Adaboost, SVM and DBN.
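The overall-accuracy figure used to compare the four classifiers is the fraction of pixels whose predicted change label matches the reference map. A toy sketch (both label maps are invented for illustration):

```python
# Overall accuracy of a binary change map against a reference map:
# the fraction of pixels where the two labels agree.
import numpy as np

reference = np.array([[0, 0, 1, 1],
                      [0, 1, 1, 1],
                      [0, 0, 0, 1]])   # 1 = changed pixel (ground truth)
predicted = np.array([[0, 0, 1, 1],
                      [0, 1, 0, 1],
                      [0, 0, 0, 1]])   # classifier output, one pixel wrong

overall_accuracy = np.mean(predicted == reference)   # 11 of 12 pixels agree
print(f"overall accuracy = {overall_accuracy:.1%}")
```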
NASA Technical Reports Server (NTRS)
Kranz, David William
2010-01-01
The goal of this research project was to compare and contrast the selected materials used in step measurements during pre-fits of thermal protection system tiles, and to compare and contrast the accuracy of measurements made using these selected materials. The reasoning for conducting this test was to obtain a clearer understanding of which of these materials may yield the highest rate of accuracy in comparison to the completed tile bond. These results in turn will be presented to United Space Alliance and Boeing North America for their own analysis and determination. Aerospace structures operate under extreme thermal environments. Hot external aerothermal environments in high Mach number flights lead to high structural temperatures. The differences in tile heights from one tile to another are very critical during these high Mach reentries. The Space Shuttle Thermal Protection System is a very delicate and highly calculated system. The thermal tiles on the ship are measured to within an accuracy of 0.001 of an inch. The accuracy of these tile measurements is critical to a successful reentry of an orbiter. This is why it is necessary to find the most accurate method for measuring the height of each tile in comparison to each of the other tiles. The test results indicated that there were indeed differences among the selected materials used in step measurements during pre-fits of Thermal Protection System tiles, and that bees' wax yielded a higher rate of accuracy when compared to the baseline test. In addition, testing for the effect of experience level on accuracy yielded no evidence of a difference. Lastly, the use of the Trammel tool rather than the Shim Pack yielded variable differences in those tests.
Chan, Johanna L; Lin, Li; Feiler, Michael; Wolf, Andrew I; Cardona, Diana M; Gellad, Ziad F
2012-11-07
To evaluate the accuracy of in vivo diagnosis of adenomatous vs non-adenomatous polyps using i-SCAN digital chromoendoscopy compared with high-definition white light. This is a single-center comparative effectiveness pilot study. Polyps (n = 103) from 75 average-risk adult outpatients undergoing screening or surveillance colonoscopy between December 1, 2010 and April 1, 2011 were evaluated by two participating endoscopists in an academic outpatient endoscopy center. Polyps were evaluated both with high-definition white light and with i-SCAN to make an in vivo prediction of adenomatous vs non-adenomatous pathology. We determined the diagnostic characteristics of i-SCAN and high-definition white light, including sensitivity, specificity, and accuracy, with regard to identifying adenomatous vs non-adenomatous polyps. Histopathologic diagnosis was the gold standard comparison. One hundred and three small polyps, detected from forty-three patients, were included in the analysis. The average size of the polyps evaluated in the analysis was 3.7 mm (SD 1.3 mm, range 2 mm to 8 mm). Formal histopathology revealed that 54/103 (52.4%) were adenomas, 26/103 (25.2%) were hyperplastic, and 23/103 (22.3%) were other diagnoses, including "lymphoid aggregates," "non-specific colitis," and "no pathologic diagnosis." Overall, the combined accuracy of endoscopists for predicting adenomas was identical between i-SCAN (71.8%, 95%CI: 62.1%-80.3%) and high-definition white light (71.8%, 95%CI: 62.1%-80.3%). However, the accuracy of each endoscopist differed substantially: endoscopist A demonstrated 63.0% overall accuracy (95%CI: 50.9%-74.0%) as compared with endoscopist B's 93.3% overall accuracy (95%CI: 77.9%-99.2%), irrespective of imaging modality. Neither endoscopist demonstrated a significant learning effect with i-SCAN during the study.
Although endoscopist A's accuracy using i-SCAN increased from 59% (95%CI: 42.1%-74.4%) in the first half to 67.6% (95%CI: 49.5%-82.6%) in the second half, and endoscopist B's accuracy using i-SCAN decreased from 100% (95%CI: 80.5%-100.0%) in the first half to 84.6% (95%CI: 54.6%-98.1%) in the second half, neither of these differences was statistically significant. i-SCAN and high-definition white light had similar efficacy in predicting polyp histology. Endoscopist training likely plays a critical role in diagnostic test characteristics and deserves further study.
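Interval estimates like the ones quoted above (e.g., 71.8% with 95%CI 62.1%-80.3% corresponds to roughly 74/103 correct calls) can be approximated with a binomial confidence interval. The sketch below uses the Wilson score interval; the paper does not state which interval method it used, so the bounds will differ slightly from the published ones:

```python
# Wilson score interval for a binomial proportion: a common way to attach a
# 95% CI to an accuracy figure such as "74 of 103 polyps correctly called".
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_interval(74, 103)   # ~71.8% observed accuracy
print(f"accuracy 74/103 = {74/103:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```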
A neural network approach to cloud classification
NASA Technical Reports Server (NTRS)
Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.
1990-01-01
It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy. Rather, its main effect is in the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A significant finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.
Research on High Accuracy Detection of Red Tide Hyperspectral Based on Deep Learning CNN
NASA Astrophysics Data System (ADS)
Hu, Y.; Ma, Y.; An, J.
2018-04-01
Increasing frequency of red tide outbreaks has been reported around the world. This is of great concern due not only to their adverse effects on human health and marine organisms, but also to their impacts on the economy of the affected areas. This paper puts forward a high-accuracy detection method based on a fully-connected deep CNN detection model with 8 layers to monitor red tide in hyperspectral remote sensing images, and then discusses a glint suppression method for improving the accuracy of red tide detection. The results show that the proposed CNN hyperspectral detection model can detect red tide accurately and effectively. The red tide detection accuracy of the proposed CNN model based on the original image and the filtered image is 95.58% and 97.45%, respectively; compared with the SVM method, the CNN detection accuracy is increased by 7.52% and 2.25%. Compared with the SVM method based on the original image, the red tide CNN detection accuracy based on the filtered image increased by 8.62% and 6.37%. The results also indicate that image glint seriously affects the accuracy of red tide detection.
Systematic review of discharge coding accuracy
Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.
2012-01-01
Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects; for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302
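The pooled figures above are medians with interquartile ranges over per-study accuracy rates. A sketch of that computation (the list of rates below is invented for illustration; the review's per-study values are in the original paper):

```python
# Median accuracy with interquartile range across studies, as used to pool
# coding-accuracy results. The per-study rates here are hypothetical.
import numpy as np

study_accuracies = np.array([50.5, 63.3, 67.3, 80.3, 83.2, 88.7, 92.1, 97.8])  # %

median = np.median(study_accuracies)
q1, q3 = np.percentile(study_accuracies, [25, 75])   # interquartile range
print(f"median {median:.1f}% (IQR: {q1:.1f}-{q3:.1f}%)")
```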
Lee, Jeong Sub; Kim, Jung Hoon; Kim, Yong Jae; Ryu, Ji Kon; Kim, Yong-Tae; Lee, Jae Young; Han, Joon Koo
2017-07-01
To compare the diagnostic accuracy of transabdominal high-resolution ultrasound (HRUS) for staging gallbladder cancer and for the differential diagnosis of neoplastic polyps with that of endoscopic ultrasound (EUS) and pathology. Among 125 patients who underwent both HRUS and EUS, we included 29 pathologically proven cancers (T1 = 7, T2 = 19, T3 = 3), including 15 polypoid cancers, and 50 surgically proven polyps (neoplastic = 30, non-neoplastic = 20). We reviewed formal reports and assessed the accuracy of HRUS and EUS for diagnosing cancer as well as for the differential diagnosis of neoplastic polyps. Statistical analyses were performed using chi-square tests. The sensitivity, specificity, PPV, and NPV for gallbladder cancer were 82.7%, 44.4%, 82.7%, and 44% using HRUS and 86.2%, 22.2%, 78.1%, and 33.3% using EUS. HRUS and EUS correctly diagnosed the stage in 13 and 12 patients, respectively. The sensitivity, specificity, PPV, and NPV for neoplastic polyps were 80%, 80%, 86%, and 73% using HRUS and 73%, 85%, 88%, and 69% using EUS. Single polyps (8/20 vs. 21/30), larger polyps (1.0 ± 0.28 cm vs. 1.9 ± 0.85 cm), and older age (52.5 ± 13.2 vs. 66.1 ± 10.3 years) were more common in neoplastic polyps (p < 0.05). Transabdominal HRUS showed accuracy comparable to EUS for diagnosing gallbladder cancer and differentiating neoplastic polyps. HRUS is also easy to use during routine ultrasound examinations. • HRUS showed diagnostic accuracy for GB cancer comparable to EUS. • HRUS and EUS showed similar diagnostic accuracy for differentiating neoplastic polyps. • Single, larger polyps and older age were more common in neoplastic polyps. • HRUS is less invasive than EUS.
Schoch, Ashlee H; Raynor, Hollie A
2012-01-01
Underreporting in self-reported dietary intake has been linked to dietary restraint (DR) and social desirability (SD); however, few investigations have examined the influence of both DR and SD on reporting accuracy and used objective, rather than estimated, measures to determine dietary reporting accuracy. This study investigated the accuracy of reporting consumption of a laboratory meal during a 24-hour dietary recall (24HR) in 38 healthy, college-aged, normal-weight women, categorized as high or low in DR and SD. Participants consumed a lunch of four foods (sandwich wrap, chips, fruit, and ice cream) in a laboratory and completed a telephone 24HR the following day. Accuracy of reported energy intake of the meal = ((reported energy intake - measured energy intake)/measured energy intake) × 100 [positive numbers = overreporting]. Overreporting of energy intake occurred in all groups (overall accuracy rate = 43.1 ± 49.9%). SD-high participants, as compared to SD-low, more accurately reported energy intake of chips (19.8 ± 56.2% vs. 117.1 ± 141.3%, p < 0.05) and ice cream (17.2 ± 78.2% vs. 71.6 ± 82.7%, p < 0.05). SD-high participants also more accurately reported overall energy intake (29.8 ± 48.2% vs. 58.0 ± 48.8%, p < 0.05). To improve the accuracy of dietary assessment, future research should investigate factors contributing to inaccuracies in dietary reporting and the best methodology for determining dietary reporting accuracy. Copyright © 2011 Elsevier Ltd. All rights reserved.
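The abstract defines reporting accuracy explicitly, so it transcribes directly into code (the intake values below are made up for illustration):

```python
# Reporting accuracy as defined in the abstract:
# ((reported - measured) / measured) x 100, positive = overreporting.

def reporting_accuracy(reported_kcal, measured_kcal):
    """Percent misreporting; positive = overreporting, negative = underreporting."""
    return (reported_kcal - measured_kcal) / measured_kcal * 100

# Hypothetical example: participant reports 715 kcal of a 500 kcal meal,
# i.e. overreporting by about 43%.
print(reporting_accuracy(715, 500))
```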
Accuracy of patient-specific guided glenoid baseplate positioning for reverse shoulder arthroplasty.
Levy, Jonathan C; Everding, Nathan G; Frankle, Mark A; Keppler, Louis J
2014-10-01
The accuracy of reproducing a surgical plan during shoulder arthroplasty is improved by computer assistance. Intraoperative navigation, however, is challenged by increased surgical time and additional technically difficult steps. Patient-matched instrumentation has the potential to reproduce a similar degree of accuracy without the need for additional surgical steps. The purpose of this study was to examine the accuracy of patient-specific planning and a patient-specific drill guide for glenoid baseplate placement in reverse shoulder arthroplasty. A patient-specific glenoid baseplate drill guide for reverse shoulder arthroplasty was produced for 14 cadaveric shoulders based on a plan developed by a virtual preoperative 3-dimensional planning system using thin-cut computed tomography images. Using this patient-specific guide, high-volume shoulder surgeons exposed the glenoid through a deltopectoral approach and drilled the bicortical pathway defined by the guide. The trajectory of the drill path was compared with the virtual preoperative planned position using similar thin-cut computed tomography images to define accuracy. The drill pathway defined by the patient-matched guide was found to be highly accurate when compared with the preoperative surgical plan. The translational accuracy was 1.2 ± 0.7 mm. The accuracy of inferior tilt was 1.2° ± 1.2°. The accuracy of glenoid version was 2.6° ± 1.7°. The use of patient-specific glenoid baseplate guides is highly accurate in reproducing a virtual 3-dimensional preoperative plan. This technique delivers the accuracy observed using computerized navigation without any additional surgical steps or technical challenges. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
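The angular accuracy figures above come from comparing the planned and achieved drill trajectories on CT. A sketch of the underlying computation, the angle between two 3-D axis vectors (the vectors are invented for illustration; real ones would come from the pre- and post-drilling CT registrations):

```python
# Angular deviation between a planned and an achieved drill axis, computed
# as the angle between two unit vectors. Vectors are hypothetical.
import numpy as np

planned = np.array([0.0, 0.0, 1.0])      # planned drill axis (unit vector)
achieved = np.array([0.02, 0.03, 1.0])   # slightly deviated achieved axis
achieved = achieved / np.linalg.norm(achieved)

cos_angle = np.clip(planned @ achieved, -1.0, 1.0)   # clamp for arccos safety
deviation_deg = np.degrees(np.arccos(cos_angle))
print(f"angular deviation: {deviation_deg:.2f} deg")
```

Repeating this per specimen and taking the mean ± SD yields summary figures of the kind reported (e.g., 1.2° ± 1.2° for tilt).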
Belgiu, Mariana; Drăguţ, Lucian
2014-10-01
Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. 
The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea that classification is dependent on segmentation is challenged by our unexpected results, casting doubt on the value of pursuing 'optimal segmentation'. Our results rather suggest that, as long as under-segmentation remains at acceptable levels, imperfections in segmentation can be tolerated, so that a high level of classification accuracy can still be achieved.
Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis
Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.
2015-01-01
Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision varied greatly between different media. The high viscosity media tested showed very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) were the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505
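The accuracy/precision distinction used above can be made concrete: accuracy is how close the mean of repeated replica measurements sits to the true surface value (bias), while precision is how tightly the replicas agree (spread). A sketch with synthetic repeated measurements of one texture parameter:

```python
# Accuracy vs precision for repeated replica measurements of one
# areal-texture parameter. All values are synthetic (arbitrary units).
import numpy as np

true_value = 10.0
replicas = np.array([10.1, 9.9, 10.05, 9.95, 10.0])   # five replica measurements

accuracy_error = abs(replicas.mean() - true_value)    # bias: low = accurate medium
precision_spread = replicas.std(ddof=1)               # repeatability: low = precise
print(accuracy_error, precision_spread)
```

A high-viscosity medium in the study's terms would show a large `accuracy_error` and a large `precision_spread`; the best-performing media keep both small.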
Accuracy Assessment of Coastal Topography Derived from Uav Images
NASA Astrophysics Data System (ADS)
Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.
2016-06-01
To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution that enables data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and a photogrammetric process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy was estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs; accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm), and the required accuracy should depend on the research question. Lastly, in this particular environment, the presence of very small water surfaces on the sand bank meant that accuracy did not improve when the spatial resolution of the images was decreased from 4.6 cm to 2 cm.
Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.
Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang
2016-06-22
An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future, with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and high-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
Anxiety, anticipation and contextual information: A test of attentional control theory.
Cocks, Adam J; Jackson, Robin C; Bishop, Daniel T; Williams, A Mark
2016-09-01
We tested the assumptions of Attentional Control Theory (ACT) by examining the impact of anxiety on anticipation using a dynamic, time-constrained task. Moreover, we examined the involvement of high- and low-level cognitive processes in anticipation and how their importance may interact with anxiety. Skilled and less-skilled tennis players anticipated the shots of opponents under low- and high-anxiety conditions. Participants viewed three types of video stimuli, each depicting different levels of contextual information. Performance effectiveness (response accuracy) and processing efficiency (response accuracy divided by corresponding mental effort) were measured. Skilled players recorded higher levels of response accuracy and processing efficiency compared to less-skilled counterparts. Processing efficiency significantly decreased under high- compared to low-anxiety conditions. No difference in response accuracy was observed. When reviewing directional errors, anxiety was most detrimental to performance in the condition conveying only contextual information, suggesting that anxiety may have a greater impact on high-level (top-down) cognitive processes, potentially due to a shift in attentional control. Our findings provide partial support for ACT; anxiety elicited greater decrements in processing efficiency than performance effectiveness, possibly due to predominance of the stimulus-driven attentional system.
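Processing efficiency as operationalized above is response accuracy divided by the corresponding mental-effort rating; a direct transcription with hypothetical accuracy and effort values:

```python
# Processing efficiency per Attentional Control Theory studies:
# effectiveness (response accuracy) divided by invested mental effort.

def processing_efficiency(response_accuracy, mental_effort):
    """Higher = same accuracy achieved with less invested effort."""
    return response_accuracy / mental_effort

# Hypothetical pattern from the abstract: accuracy barely moves under
# anxiety, but effort rises, so efficiency drops.
low_anxiety = processing_efficiency(response_accuracy=70.0, mental_effort=50.0)
high_anxiety = processing_efficiency(response_accuracy=68.0, mental_effort=80.0)
print(low_anxiety, high_anxiety)
```

This is why ACT predicts (and the study found) efficiency decrements without matching effectiveness decrements: the denominator, not the numerator, carries the anxiety effect.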
Relative Navigation of Formation-Flying Satellites
NASA Technical Reports Server (NTRS)
Long, Anne; Kelbel, David; Lee, Taesul; Leung, Dominic; Carpenter, J. Russell; Grambling, Cheryl
2002-01-01
This paper compares autonomous relative navigation performance for formations in eccentric, medium and high-altitude Earth orbits using Global Positioning System (GPS) Standard Positioning Service (SPS), crosslink, and celestial object measurements. For close formations, the relative navigation accuracy is highly dependent on the magnitude of the uncorrelated measurement errors. A relative navigation position accuracy of better than 10 centimeters root-mean-square (RMS) can be achieved for medium-altitude formations that can continuously track at least one GPS signal. A relative navigation position accuracy of better than 15 meters RMS can be achieved for high-altitude formations that have sparse tracking of the GPS signals. The addition of crosslink measurements can significantly improve relative navigation accuracy for formations that use sparse GPS tracking or celestial object measurements for absolute navigation.
Effect of recent popularity on heat-conduction based recommendation models
NASA Astrophysics Data System (ADS)
Li, Wen-Jun; Dong, Qiang; Shi, Yang-Bo; Fu, Yan; He, Jia-Lin
2017-05-01
Accuracy and diversity are two important measures in evaluating the performance of recommender systems. It has been demonstrated that the recommendation model inspired by the heat conduction process has high diversity yet low accuracy. Many variants have been introduced to improve the accuracy while keeping high diversity, most of which regard the current node degree of an item as its popularity. However, in this way, a few outdated items of large degree may be recommended to an enormous number of users. In this paper, we take recent popularity (recently increased item degrees) into account in the heat-conduction based methods, and accordingly propose improved recommendation models. Experimental results on two benchmark data sets show that, compared with the original models, accuracy can be substantially improved while high diversity is maintained.
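The baseline these variants modify is the heat-conduction score on the user-item bipartite graph. A minimal sketch on a toy adjacency matrix; the paper's recent-popularity variant would replace the total item degree with the recently increased degree (noted in a comment below, since the paper's exact weighting is not given here):

```python
# Heat-conduction recommendation on a toy user-item bipartite graph.
# Rows = users, columns = items; A[u, i] = 1 if user u collected item i.
import numpy as np

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

k_user = A.sum(axis=1)   # user degrees
k_item = A.sum(axis=0)   # total item degrees; the recent-popularity variant
                         # would use recently increased degrees here instead

target = 0               # recommend for user 0
f = A[target]            # initial resource: items user 0 has collected

# Heat conduction averages (rather than sums) resource at each step,
# which is what gives the method its high diversity:
user_temp = (A @ f) / k_user          # users take the mean of their items
item_score = (A.T @ user_temp) / k_item   # items take the mean of their users

item_score[A[target] == 1] = -np.inf  # never re-recommend collected items
print("recommended item:", int(np.argmax(item_score)))
```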
Junod, Olivier; de Roten, Yves; Martinez, Elena; Drapeau, Martin; Despland, Jean-Nicolas
2005-12-01
This pilot study examined the accuracy of therapist defence interpretations (TAD) in high-alliance patients (N = 7) and low-alliance patients (N = 8). TAD accuracy was assessed in the two subgroups by comparing, for each case, the patient's most frequent defensive level with the most frequent defensive level addressed by the therapist when making defence interpretations. Results show that in high-alliance patient-therapist dyads, therapists tend to address an accurate or higher (more mature) defensive level than the patient's most frequent level. In low-alliance dyads, on the other hand, therapists address a lower (more immature) defensive level. These results are discussed along with possible ways to better assess TAD accuracy.
Frouzan, Arash; Masoumi, Kambiz; Delirroyfard, Ali; Mazdaie, Behnaz; Bagherzadegan, Elnaz
2017-08-01
Long bone fractures are common injuries caused by trauma. Some studies have demonstrated that ultrasound has high sensitivity and specificity in the diagnosis of upper and lower extremity long bone fractures. The aim of this study was to determine the accuracy of ultrasound compared with plain radiography in the diagnosis of upper and lower extremity long bone fractures in trauma patients. This cross-sectional study assessed 100 patients admitted to the emergency department of Imam Khomeini Hospital, Ahvaz, Iran with trauma to the upper and lower extremities, from September 2014 through October 2015. In all patients, ultrasound was performed first, followed by standard plain radiography of the upper and lower limb. Data were analyzed with SPSS version 21 to determine specificity and sensitivity. The mean ages of patients with upper and lower limb trauma were 31.43±12.32 years and 29.63±5.89 years, respectively. Radius fractures were the most frequent upper limb fractures (27%). Sensitivity, specificity, positive predictive value, and negative predictive value of ultrasound compared with plain radiography in the diagnosis of upper extremity long bone fractures were 95.3%, 87.7%, 87.2% and 96.2%, respectively, and the highest accuracy was observed in left arm fractures (100%). Tibia and fibula fractures were the most frequent lower limb fractures (89.2%). Sensitivity, specificity, PPV and NPV of ultrasound compared with plain radiography in the diagnosis of lower extremity long bone fractures were 98.6%, 83%, 65.4% and 87.1%, respectively, and the highest accuracy was observed in men, younger patients and femoral fractures. The results of this study showed that ultrasound has high accuracy compared with plain radiography in the diagnosis of upper and lower extremity long bone fractures.
Accurate time delay technology in simulated test for high precision laser range finder
NASA Astrophysics Data System (ADS)
Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi
2015-10-01
With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps increasing, and the demands on LRF maintenance rise accordingly. According to the dominant ideology of "time analog spatial distance" in simulated tests for pulsed range finders, the key to distance simulation precision lies in an adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method is proposed to improve the accuracy of the circuit delay without increasing the counting clock frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the proposed circuit delay method was measured with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay improves from ±0.75 m to ±0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated tests for high-precision pulsed range finders.
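The "time analog spatial distance" idea reduces to the round-trip relation d = c·τ/2, so a delay error maps directly to a simulated-distance error. A minimal sketch of that arithmetic (the function names are ours, for illustration):

```python
# Relation between electrical delay and simulated target range in an LRF test:
# a delay tau presented to the receiver emulates a target at range d = c*tau/2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def delay_for_range(d_m):
    """Round-trip delay (s) that simulates a target at range d_m (m)."""
    return 2.0 * d_m / C

def range_error(delay_error_s):
    """Simulated-distance error (m) caused by a delay error (s)."""
    return C * delay_error_s / 2.0

print(delay_for_range(1500.0))  # delay for a 1.5 km target, ~10 microseconds
print(range_error(1e-9))        # ~0.15 m of range error per nanosecond of delay error
```

This is why the reported ±0.15 m figure corresponds to controlling the delay to roughly the nanosecond level.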
NASA Astrophysics Data System (ADS)
Hyun, Jae-Sang; Li, Beiwen; Zhang, Song
2017-07-01
This paper presents our research findings on high-speed high-accuracy three-dimensional shape measurement using digital light processing (DLP) technologies. In particular, we compare two different sinusoidal fringe generation techniques using the DLP projection devices: direct projection of computer-generated 8-bit sinusoidal patterns (a.k.a., the sinusoidal method), and the creation of sinusoidal patterns by defocusing binary patterns (a.k.a., the binary defocusing method). This paper mainly examines their performance on high-accuracy measurement applications under precisely controlled settings. Two different projection systems were tested in this study: a commercially available inexpensive projector and the DLP development kit. Experimental results demonstrated that the binary defocusing method always outperforms the sinusoidal method if a sufficient number of phase-shifted fringe patterns can be used.
Ippolito, Davide; Drago, Silvia Girolama; Franzesi, Cammillo Talei; Fior, Davide; Sironi, Sandro
2016-01-01
AIM: To assess the diagnostic accuracy of multidetector-row computed tomography (MDCT), as compared with conventional magnetic resonance imaging (MRI), in identifying mesorectal fascia (MRF) invasion in rectal cancer patients. METHODS: Ninety-one patients with biopsy-proven rectal adenocarcinoma referred for thoracic and abdominal CT staging were enrolled in this study. The contrast-enhanced MDCT scans were performed on a 256-row scanner (iCT, Philips) with the following acquisition parameters: tube voltage 120 kV, tube current 150-300 mAs. Imaging data were reviewed as axial images and as multiplanar reconstructions (MPRs) along the rectal tumor axis. The MRI study, performed at 1.5 T with a dedicated phased-array multicoil, included multiplanar T2 and axial T1 sequences and diffusion-weighted images (DWI). Axial and MPR CT images were independently compared with MRI, and MRF involvement was determined. The diagnostic accuracy of both modalities was compared and statistically analyzed. RESULTS: According to MRI, the MRF was involved in 51 patients and not involved in 40 patients. DWI allowed recognition of the tumor as a focal mass with high signal intensity on high-b-value images, compared with the signal of the normal adjacent rectal wall or with the lower-intensity tissue background. The number of patients correctly staged by the native axial CT images was 71 out of 91 (41 with involved MRF; 30 with uninvolved MRF), while with MPR 80 patients were correctly staged (45 with involved MRF; 35 with uninvolved MRF). Local tumor staging by MDCT agreed with that of MRI, with CT axial images achieving sensitivity and specificity of 80.4% and 75%, positive predictive value (PPV) 80.4%, negative predictive value (NPV) 75%, and accuracy 78%; with MPR, sensitivity and specificity increased to 88% and 87.5%, PPV was 90%, NPV 85.36%, and accuracy 88%.
MPR images showed higher diagnostic accuracy, in terms of MRF involvement, than native axial images, as compared to the reference magnetic resonance images. The difference in accuracy was statistically significant (P = 0.02). CONCLUSION: New generation CT scanner, using high resolution MPR images, represents a reliable diagnostic tool in assessment of loco-regional and whole body staging of advanced rectal cancer, especially in patients with MRI contraindications. PMID:27239115
Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan
2013-01-01
Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate- and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and players' achievement motivation characteristics. Thirteen expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test, with moderate (70%) and high (90%) intensities set as percentages of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to heart rate monitoring. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport, in an effort to examine whether this personality characteristic provides insight into how players perform under moderate- and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared with the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared with performance at rest and while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue-by-expertise or fatigue-by-gender interactions were found. Fatigue effects were also equivalent regardless of players' achievement goal indicators.
Future research is required to explore the effects of fatigue on performance in tennis using ecologically valid designs that mimic more closely the demands of match play. Key Points: Groundstroke accuracy under moderate-intensity fatigue is equivalent to performance at rest. Groundstroke accuracy declines significantly in both expert (40.3% decline) and non-expert (49.6% decline) tennis players following high-intensity fatigue. Expert players are more consistent, hitting more accurate shots and fewer out shots across all fatigue intensities. The effects of fatigue on groundstroke accuracy are the same regardless of gender and players' achievement goal indicators. PMID:24149809
Misawa, Masashi; Kudo, Shin-Ei; Mori, Yuichi; Takeda, Kenichi; Maeda, Yasuharu; Kataoka, Shinichi; Nakamura, Hiroki; Kudo, Toyoki; Wakamura, Kunihiko; Hayashi, Takemasa; Katagiri, Atsushi; Baba, Toshiyuki; Ishida, Fumio; Inoue, Haruhiro; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku
2017-05-01
Real-time characterization of colorectal lesions during colonoscopy is important for reducing medical costs, given that a pathological diagnosis can be omitted if the accuracy of the diagnostic modality is sufficiently high. However, it is sometimes difficult for community-based gastroenterologists to achieve the required level of diagnostic accuracy. In this regard, we developed a computer-aided diagnosis (CAD) system based on endocytoscopy (EC) to evaluate cellular, glandular, and vessel-structure atypia in vivo. The purpose of this study was to compare the diagnostic ability and efficacy of this CAD system with the performances of expert and trainee endoscopists. We developed a CAD system based on EC with narrow-band imaging that allowed microvascular evaluation without dye (ECV-CAD). The CAD algorithm was programmed based on texture analysis and provided a two-class diagnosis of neoplastic or non-neoplastic, with probabilities. We validated the diagnostic ability of the ECV-CAD system using 173 randomly selected EC images (49 non-neoplasms, 124 neoplasms). The images were evaluated by the CAD system and by four expert endoscopists and three trainees. The diagnostic accuracies for distinguishing between neoplasms and non-neoplasms were calculated. ECV-CAD had higher overall diagnostic accuracy than the trainees (87.8 vs 63.4%; [Formula: see text]) and accuracy similar to that of the experts (87.8 vs 84.2%; [Formula: see text]). With regard to high-confidence cases, the overall accuracy of ECV-CAD was also higher than that of the trainees (93.5 vs 71.7%; [Formula: see text]) and comparable to that of the experts (93.5 vs 90.8%; [Formula: see text]). ECV-CAD showed better diagnostic accuracy than trainee endoscopists and accuracy comparable to that of experts. ECV-CAD could thus be a powerful decision-making tool for less experienced endoscopists.
Existing methods for improving the accuracy of digital-to-analog converters
NASA Astrophysics Data System (ADS)
Eielsen, Arnfinn A.; Fleming, Andrew J.
2017-09-01
The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
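Integral non-linearity, the error figure named above, is the deviation of each measured output level from a straight line fit through the levels; the line fit removes the gain and offset errors so that only element mismatch remains. A minimal sketch of that computation (the 3-bit level values are hypothetical, not from the article):

```python
import numpy as np

# Minimal sketch: integral non-linearity (INL) of a DAC computed from
# measured output levels, expressed in least-significant-bit (LSB) units.
def inl_lsb(measured_levels):
    """INL per code: deviation of each measured level from the best-fit
    straight line through the levels, in LSB."""
    codes = np.arange(len(measured_levels))
    # The best-fit line removes gain and offset error ...
    gain, offset = np.polyfit(codes, measured_levels, 1)
    residual = measured_levels - (gain * codes + offset)
    return residual / gain  # ... leaving the element mismatch, in LSB

# 3-bit example with small hypothetical level errors (volts):
levels = np.array([0.00, 1.02, 1.97, 3.05, 3.98, 5.01, 6.03, 6.96])
print(np.round(inl_lsb(levels), 3))
```

A perfectly linear converter would have zero INL at every code regardless of its gain or offset; the dithering and calibration methods compared in the article all aim to suppress the distortion that nonzero INL produces.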
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellens, N; Farahani, K
2015-06-15
Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications, including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high-field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small-bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly spaced grids indicated that the precision was 0.11 +/- 0.05 mm. When image guidance was included by targeting random locations, the accuracy was 0.5 +/- 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 +/- 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error.
Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many preclinical applications including focused drug delivery and thermal therapy. Funding support provided by Philips Healthcare.
Weitz, Jochen; Deppe, Herbert; Stopp, Sebastian; Lueth, Tim; Mueller, Steffen; Hohlweg-Majert, Bettina
2011-12-01
The aim of this study was to evaluate the accuracy of surgical template-aided implant placement produced by rapid prototyping using a DICOM dataset from cone beam computed tomography (CBCT). On the basis of CBCT scans (Sirona® Galileos), a total of ten models were produced using a rapid-prototyping three-dimensional printer. Impressions were taken from the same patients to compare the fitting accuracy of both methods. From the models made by impression, templates were produced, and their accuracy was compared with and analyzed against the rapid-prototyping models. Whereas templates made by the conventional procedure had excellent accuracy, the fitting accuracy of those produced from DICOM datasets was not sufficient. Deviations ranged between 2.0 and 3.5 mm, and between 1.4 and 3.1 mm after modification of the models. The findings of this study suggest that the low-dose Sirona Galileos® DICOM dataset shows a deviation too high to be usable for accurate surgical transfer, for example in implant surgery.
Rational calculation accuracy in acousto-optical matrix-vector processor
NASA Astrophysics Data System (ADS)
Oparin, V. V.; Tigin, Dmitry V.
1994-01-01
The high speed of parallel computations for a comparatively small-size processor and acceptable power consumption makes the usage of acousto-optic matrix-vector multiplier (AOMVM) attractive for processing of large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and the reduction of the demands on its components.
McRobert, Allistair Paul; Causer, Joe; Vassiliadis, John; Watterson, Leonie; Kwan, James; Williams, Mark A
2013-06-01
It is well documented that adaptations in cognitive processes with increasing skill levels support decision making in multiple domains. We examined skill-based differences in cognitive processes in emergency medicine physicians, and whether performance was significantly influenced by the removal of contextual information related to a patient's medical history. Skilled (n=9) and less skilled (n=9) emergency medicine physicians responded to high-fidelity simulated scenarios under high- and low-context information conditions. Skilled physicians demonstrated higher diagnostic accuracy irrespective of condition, and were less affected by the removal of context-specific information compared with less skilled physicians. The skilled physicians generated more options, and selected better quality options during diagnostic reasoning compared with less skilled counterparts. These cognitive processes were active irrespective of the level of context-specific information presented, although high-context information enhanced understanding of the patients' symptoms resulting in higher diagnostic accuracy. Our findings have implications for scenario design and the manipulation of contextual information during simulation training.
NASA Astrophysics Data System (ADS)
Peterson, E. R.; Stanton, T. P.
2016-12-01
Determining ice concentration in the Arctic is necessary to track significant changes in sea ice edge extent. Sea ice concentrations are also needed to interpret data collected by in-situ instruments like buoys, as the amount of ice versus water in a given area determines local solar heating. Ice concentration products are now routinely derived from satellite radiometers including the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), the Advanced Microwave Scanning Radiometer 2 (AMSR2), the Special Sensor Microwave Imager (SSMI), and the Special Sensor Microwave Imager/Sounder (SSMIS). While these radiometers are viewed as reliable for monitoring long-term changes in sea ice extent, their accuracy should be analyzed and compared to determine which radiometer performs best over smaller features such as melt ponds, and how seasonal conditions affect accuracy. Knowledge of the accuracy of radiometers at high resolution can help future researchers determine which radiometer to use, and make them aware of radiometer shortcomings in different ice conditions. This will be especially useful when interpreting data from in-situ instruments that deal with small-scale measurements. In order to compare these passive microwave radiometers, selected one-meter-resolution Medea images, archived at the United States Geological Survey (USGS), are used for ground-truth comparison. Sea ice concentrations are derived from these images in an interactive process, although the estimates are not perfect ground truth due to image exposure, shadowing, and cloud cover. 68 images are retrieved from the USGS website and compared with 9 usable, collocated SSMI, 33 SSMIS, 36 AMSR-E, and 14 AMSR2 ice concentrations in the Arctic Ocean. We analyze and compare the accuracy of the radiometer instrumentation in differing ice conditions.
Sys, Gwen; Eykens, Hannelore; Lenaerts, Gerlinde; Shumelinsky, Felix; Robbrecht, Cedric; Poffyn, Bart
2017-06-01
This study analyses the accuracy of three-dimensional pre-operative planning and patient-specific guides for orthopaedic osteotomies. To this end, patient-specific guides were compared to the classical freehand method in an experimental setup with saw bones in two phases. In the first phase, the effect of guide design and oscillating versus reciprocating saws was analysed. The difference between target and performed cuts was quantified by the average distance deviation and the average angular deviations in the sagittal and coronal planes for the different osteotomies. The results indicated that for one model osteotomy, the use of guides resulted in a more accurate cut when compared to the freehand technique. Reciprocating saws and slot guides improved accuracy in all planes, while oscillating saws and open guides led to larger deviations from the planned cut. In the second phase, the accuracy of transfer of the planning to the surgical field with slot guides and a reciprocating saw was assessed and compared to the classical planning and freehand cutting method. The pre-operative plan was transferred with high accuracy. Three-dimensional-printed patient-specific guides improve the accuracy of osteotomies and bony resections in an experimental setup compared to conventional freehand methods. The improved accuracy is related to (1) a detailed and qualitative pre-operative plan and (2) an accurate transfer of the planning to the operating room with patient-specific guides through accurate guidance of the surgical tools performing the desired cuts.
Navigating highly elliptical earth orbiters with simultaneous VLBI from orthogonal baseline pairs
NASA Technical Reports Server (NTRS)
Frauenholz, Raymond B.
1986-01-01
Navigation strategies for determining highly elliptical orbits with VLBI are described. The predicted performance of wideband VLBI and Delta VLBI measurements obtained by orthogonal baseline pairs are compared for a 16-hr equatorial orbit. It is observed that the one-sigma apogee position accuracy improves two orders of magnitude to the meter level when Delta VLBI measurements are added to coherent Doppler and range, and the simpler VLBI strategy provides nearly the same orbit accuracy. The effects of differential measurement noise and acquisition geometry on orbit accuracy are investigated. The data reveal that quasar position uncertainty limits the accuracy of wideband Delta VLBI measurements, and that polar motion and baseline uncertainties and offsets between station clocks affect the wideband VLBI data. It is noted that differential one-way range (DOR) has performance nearly equal to that of the more complex Delta DOR and is recommended for use on spacecraft in highly elliptical orbits.
High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1990-01-01
In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. The filtering method developed here uses simple central differencing of arbitrarily high order accuracy, except where a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy but removing spurious oscillations. Numerical results indicate the success of the method. High order of accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems, with a speed-up of generally almost a factor of three over the full ENO method.
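The hybrid idea can be illustrated with a much-simplified stand-in: cheap high-order central differences everywhere, with a crude local oscillation test switching to a dissipative first-order upwind update in place of the full ENO apparatus, which is beyond a short sketch. All names and parameter values here are illustrative:

```python
import numpy as np

# Advance u_t + a*u_x = 0 on a periodic grid. A 4th-order central difference
# is used where the solution is smooth; where a local sign-change test flags
# possible spurious oscillation, a robust 1st-order upwind difference is
# substituted (standing in for the ENO reconstruction of the paper).
def filtered_step(u, a=1.0, dx=0.02, dt=0.005):
    up2, up1 = np.roll(u, -2), np.roll(u, -1)
    um2, um1 = np.roll(u, 2), np.roll(u, 1)
    central = (-up2 + 8.0 * up1 - 8.0 * um1 + um2) / (12.0 * dx)  # 4th-order u_x
    upwind = (u - um1) / dx                                       # 1st-order u_x (a > 0)
    osc = (up1 - u) * (u - um1) < 0.0  # local extremum: possible oscillation
    ux = np.where(osc, upwind, central)
    return u - a * dt * ux

u = np.where(np.arange(100) < 50, 1.0, 0.0)  # discontinuous step data
for _ in range(20):
    u = filtered_step(u)
```

In the paper the fallback at flagged points is the full ENO stencil selection, which preserves high order there; the upwind substitute above only illustrates the switching logic and the economics of applying the expensive scheme at a small set of points.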
CHARMS: The Cryogenic, High-Accuracy Refraction Measuring System
NASA Technical Reports Server (NTRS)
Frey, Bradley; Leviton, Douglas
2004-01-01
The success of numerous upcoming NASA infrared (IR) missions will rely critically on accurate knowledge of the IR refractive indices of their constituent optical components at design operating temperatures. To satisfy the demand for such data, we have built a Cryogenic, High-Accuracy Refraction Measuring System (CHARMS), which, for typical IR materials, can measure the index of refraction to an accuracy of ±5 x 10^-3. This versatile, one-of-a-kind facility can also measure refractive index over a wide range of wavelengths, from 0.105 µm in the far-ultraviolet to 6 µm in the IR, and over a wide range of temperatures, from 10 K to 100 °C, all with comparable accuracies. We first summarize the technical challenges we faced and the engineering solutions we developed during the construction of CHARMS. Next we present our "first light" index of refraction data for fused silica and compare our data to previously published results.
Shokri, Abbas; Eskandarloo, Amir; Norouzi, Marouf; Poorolajal, Jalal; Majidi, Gelareh; Aliyaly, Alireza
2018-03-01
This study compared the diagnostic accuracy of cone-beam computed tomography (CBCT) scans obtained with 2 CBCT systems in high- and low-resolution modes for the detection of root perforations in endodontically treated mandibular molars. The root canals of 72 mandibular molars were cleaned and shaped. Perforations measuring 0.2, 0.3, and 0.4 mm in diameter were created at the furcation area of 48 roots, simulating strip perforations, or on the external surfaces of 48 roots, simulating root perforations. Forty-eight roots remained intact (control group). The roots were filled using gutta-percha (Gapadent, Tianjin, China) and AH26 sealer (Dentsply Maillefer, Ballaigues, Switzerland). The CBCT scans were obtained using the NewTom 3G (QR srl, Verona, Italy) and Cranex 3D (Soredex, Helsinki, Finland) CBCT systems in high- and low-resolution modes, and were evaluated by 2 observers. The chi-square test was used to assess the nominal variables. For strip perforations, the accuracies of the low- and high-resolution modes were 75% and 83% for the NewTom 3G and 67% and 69% for the Cranex 3D. For root perforations, the accuracies of the low- and high-resolution modes were 79% and 83% for the NewTom 3G and 56% and 73% for the Cranex 3D. The accuracy of the 2 CBCT systems differed for the detection of strip and root perforations; the NewTom 3G had nonsignificantly higher accuracy than the Cranex 3D. In both scanners, the high-resolution mode yielded significantly higher accuracy than the low-resolution mode. The diagnostic accuracy of CBCT scans was not affected by the perforation diameter.
Isma’eel, Hussain A.; Sakr, George E.; Almedawar, Mohamad M.; Fathallah, Jihan; Garabedian, Torkom; Eddine, Savo Bou Zein
2015-01-01
Background: High dietary salt intake is directly linked to hypertension and cardiovascular diseases (CVDs). Predicting behaviors regarding salt intake habits is vital to guide interventions and increase their effectiveness. We aim to compare the accuracy of an artificial neural network (ANN) based tool that predicts behavior from key knowledge questions along with clinical data in a high cardiovascular risk cohort, relative to the least squares model (LSM) method. Methods: We collected knowledge, attitude, and behavior data on 115 patients. A behavior score was calculated to classify patients' behavior towards reducing salt intake. The accuracy comparison between ANN and regression analysis was calculated using the bootstrap technique with 200 iterations. Results: Starting from a 69-item questionnaire, a reduced model was developed that included the eight knowledge items found to yield the highest accuracy of 62% CI (58-67%). The best prediction accuracy in the full and reduced models was attained by ANN, at 66% and 62% respectively, compared with the full and reduced LSM at 40% and 34%, respectively. The average relative increase in accuracy of ANN over LSM was 82% for the full model and 102% for the reduced model. Conclusions: Using ANN modeling, we can predict salt reduction behaviors with 66% accuracy. The statistical model has been implemented in an online calculator and can be used in clinics to estimate a patient's behavior. This will help future research to further prove the clinical utility of this tool in guiding therapeutic salt reduction interventions in high cardiovascular risk individuals. PMID:26090333
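The bootstrap comparison described above can be sketched as follows. The per-patient correctness indicators below are fabricated to match only the reported headline accuracies (66% vs 40%) and are not the study's data; the pairing of indicators across models is likewise hypothetical:

```python
import random

# Sketch of a 200-iteration bootstrap comparing two classifiers' accuracies:
# resample patients with replacement and recompute each model's accuracy.
def bootstrap_accuracy_diff(correct_a, correct_b, iters=200, seed=7):
    """Return (median, ~2.5th pct, ~97.5th pct) of the accuracy difference
    A - B over bootstrap resamples of the paired patient indicators."""
    rng = random.Random(seed)
    n = len(correct_a)
    diffs = []
    for _ in range(iters):
        idx = [rng.randrange(n) for _ in range(n)]  # resample patient indices
        acc_a = sum(correct_a[i] for i in idx) / n
        acc_b = sum(correct_b[i] for i in idx) / n
        diffs.append(acc_a - acc_b)
    diffs.sort()
    return diffs[len(diffs) // 2], diffs[4], diffs[-5]

# 115 patients; model A correct ~66% of the time, model B ~40% (illustrative):
ann = [1] * 76 + [0] * 39
lsm = [1] * 46 + [0] * 69
med, lo, hi = bootstrap_accuracy_diff(ann, lsm)
print(med, lo, hi)
```

Because resampling is done over patients, the interval reflects sampling variability in the cohort rather than variability in model training.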
Ka-Band Radar Terminal Descent Sensor
NASA Technical Reports Server (NTRS)
Pollard, Brian; Berkun, Andrew; Tope, Michael; Andricos, Constantine; Okonek, Joseph; Lou, Yunling
2007-01-01
The terminal descent sensor (TDS) is a radar altimeter/velocimeter that improves the accuracy of velocity sensing by more than an order of magnitude when compared to existing sensors. The TDS is designed for the safe planetary landing of payloads, and may be used in helicopters and fixed-wing aircraft requiring high-accuracy velocity sensing.
Curriculum-Based Measurement of Reading Growth: Weekly versus Intermittent Progress Monitoring
ERIC Educational Resources Information Center
Jenkins, Joseph; Schulze, Margaret; Marti, Allison; Harbaugh, Allen G.
2017-01-01
We examined the idea that leaner schedules of progress monitoring (PM) can lighten assessment demands without undermining decision-making accuracy. Using curriculum-based measurement of reading, we compared effects on decision accuracy of 5 intermittent PM schedules relative to that of every-week PM. For participating students with high-incidence…
NASA Technical Reports Server (NTRS)
Jensen, J. R.; Tinney, L. R.; Estes, J. E.
1975-01-01
Cropland inventories utilizing high-altitude and Landsat imagery were conducted in Kern County, California. In terms of overall mean relative and absolute inventory accuracies, a Landsat multidate analysis yielded the optimal results, i.e., 98% accuracy. The 1:125,000 CIR high-altitude inventory is a serious alternative that can be very accurate (97% or more) if imagery is available for a specific study area. The operational remote sensing cropland inventories documented in this study are considered cost-effective: compared with conventional survey costs of $62-66 per 10,000 acres, the Landsat and high-altitude inventories required only 3-5% of this amount, i.e., $1.97-2.98.
NASA Astrophysics Data System (ADS)
Zolot, A. M.; Giorgetta, F. R.; Baumann, E.; Swann, W. C.; Coddington, I.; Newbury, N. R.
2013-03-01
The Doppler-limited spectra of methane between 176 THz and 184 THz (5870-6130 cm-1) and acetylene between 193 THz and 199 THz (6430-6630 cm-1) are acquired via comb-tooth-resolved dual comb spectroscopy with frequency accuracy traceable to atomic standards. A least-squares analysis of the measured absorbance and phase line shapes provides line center frequencies with an absolute accuracy of 0.2 MHz, less than one thousandth of the room-temperature Doppler width. This accuracy is verified through comparison with previous saturated absorption spectroscopy of 37 strong isolated lines of acetylene. For the methane spectrum, the center frequencies of 46 well-isolated strong lines are determined with similar high accuracy, along with the center frequencies of 1107 non-isolated lines at lower accuracy. The measured methane line-center frequencies have an uncertainty comparable to the few available laser heterodyne measurements in this region but span a much larger optical bandwidth, marking the first broad-band measurements of the methane 2ν3 region directly referenced to atomic frequency standards. This study demonstrates the promise of dual comb spectroscopy to obtain high-resolution broadband spectra that are comparable to state-of-the-art Fourier-transform spectrometer measurements but with much improved frequency accuracy.
Guinan, Taryn M; Gustafsson, Ove J R; McPhee, Gordon; Kobus, Hilton; Voelcker, Nicolas H
2015-11-17
Nanostructure imaging mass spectrometry (NIMS) using porous silicon (pSi) is a key technique for molecular imaging of exogenous and endogenous low molecular weight compounds from fingerprints. However, high-mass-accuracy NIMS can be difficult to achieve as time-of-flight (ToF) mass analyzers, which dominate the field, cannot sufficiently compensate for shifts in measured m/z values. Here, we show internal recalibration using a thin layer of silver (Ag) sputter-coated onto functionalized pSi substrates. NIMS peaks for several previously reported fingerprint components were selected and mass accuracy was compared to theoretical values. Mass accuracy was improved by more than an order of magnitude in several cases. This straightforward method should form part of the standard guidelines for NIMS studies for spatial characterization of small molecules.
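Internal recalibration of the kind described above amounts to fitting a correction from measured to theoretical m/z using reference peaks of known mass, then applying it to the rest of the spectrum. A minimal sketch with a linear correction (the reference masses are approximate silver and silver-cluster values, and the measured values are simulated with an artificial gain/offset drift; none of this is the paper's data):

```python
import numpy as np

# Linear internal recalibration: fit measured -> theoretical m/z on known
# reference peaks, then apply the fitted map to correct other peaks.
def fit_recalibration(measured_ref, theoretical_ref):
    """Least-squares linear map from measured to theoretical m/z."""
    a, b = np.polyfit(measured_ref, theoretical_ref, 1)
    return lambda mz: a * np.asarray(mz) + b

def mass_error_ppm(measured, theoretical):
    """Signed mass error in parts per million."""
    return (np.asarray(measured) - theoretical) / theoretical * 1e6

# Approximate theoretical m/z of Ag, Ag2, Ag3 reference peaks,
# and measured values simulated with a small gain/offset drift:
theo = np.array([106.9051, 108.9048, 213.8102, 320.7153])
meas = theo * 1.00004 + 0.003
recal = fit_recalibration(meas, theo)
print(np.abs(mass_error_ppm(meas, theo)).max())         # ~68 ppm before recalibration
print(np.abs(mass_error_ppm(recal(meas), theo)).max())  # near zero after
```

Real ToF calibration functions are often nonlinear in m/z, so a quadratic or instrument-specific model may be substituted for the linear fit; the workflow is the same.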
Accuracy of Referring Provider and Endoscopist Impressions of Colonoscopy Indication.
Naveed, Mariam; Clary, Meredith; Ahn, Chul; Kubiliun, Nisa; Agrawal, Deepak; Cryer, Byron; Murphy, Caitlin; Singal, Amit G
2017-07-01
Background: Referring provider and endoscopist impressions of colonoscopy indication are used for clinical care, reimbursement, and quality reporting decisions; however, the accuracy of these impressions is unknown. This study assessed the sensitivity, specificity, positive and negative predictive value, and overall accuracy of methods to classify colonoscopy indication, including referring provider impression, endoscopist impression, and administrative algorithm compared with gold standard chart review. Methods: We randomly sampled 400 patients undergoing a colonoscopy at a Veterans Affairs health system between January 2010 and December 2010. Referring provider and endoscopist impressions of colonoscopy indication were compared with gold-standard chart review. Indications were classified into 4 mutually exclusive categories: diagnostic, surveillance, high-risk screening, or average-risk screening. Results: Of 400 colonoscopies, 26% were performed for average-risk screening, 7% for high-risk screening, 26% for surveillance, and 41% for diagnostic indications. Accuracy of referring provider and endoscopist impressions of colonoscopy indication were 87% and 84%, respectively, which were significantly higher than that of the administrative algorithm (45%; P <.001 for both). There was substantial agreement between endoscopist and referring provider impressions (κ=0.76). All 3 methods showed high sensitivity (>90%) for determining screening (vs nonscreening) indication, but specificity of the administrative algorithm was lower (40.3%) compared with referring provider (93.7%) and endoscopist (84.0%) impressions. Accuracy of endoscopist, but not referring provider, impression was lower in patients with a family history of colon cancer than in those without (65% vs 84%; P =.001). Conclusions: Referring provider and endoscopist impressions of colonoscopy indication are both accurate and may be useful data to incorporate into algorithms classifying colonoscopy indication. 
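The inter-rater agreement quoted above (κ = 0.76) is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal sketch with hypothetical paired labels (not the study's data):

```python
from collections import Counter

# Cohen's kappa for two raters assigning each case one of several categories.
def cohens_kappa(rater1, rater2):
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n  # raw agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    cats = set(c1) | set(c2)
    p_exp = sum(c1[c] * c2[c] for c in cats) / n ** 2        # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical indication labels from a referring provider and an endoscopist:
r1 = ["screening", "screening", "surveillance", "diagnostic", "diagnostic", "high-risk"]
r2 = ["screening", "surveillance", "surveillance", "diagnostic", "diagnostic", "high-risk"]
print(round(cohens_kappa(r1, r2), 3))  # → 0.778
```

Values above roughly 0.6 are conventionally read as substantial agreement, which is why the study describes κ = 0.76 that way.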
Copyright © 2017 by the National Comprehensive Cancer Network.
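The sensitivity, specificity, and predictive values compared throughout these studies all derive from the same 2x2 confusion-matrix arithmetic against a gold standard; a minimal sketch in Python, with hypothetical counts rather than any study's actual data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy metrics from 2x2 confusion-matrix counts.

    tp/fn: positive cases correctly/incorrectly classified;
    fp/tn: negative cases incorrectly/correctly classified.
    """
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a screening-vs-nonscreening classification of 400 cases
m = diagnostic_metrics(tp=120, fp=10, fn=8, tn=262)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of the condition in the sampled population, which is one reason they vary so widely across the studies collected here.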
Monitoring Building Deformation with InSAR: Experiments and Validation.
Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng
2016-12-20
Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments on two landmark buildings, the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples for comparing InSAR and leveling approaches to measuring building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed to account for diversity in both direction and spatial distribution, and were compared with true leveling values in both Ordinary Least Squares (OLS) regression and measurement-of-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between InSAR results and leveling values. At the same time, the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that millimeter-level accuracy can be achieved by means of the InSAR technique when measuring building deformation. We discuss the differences in accuracy between the OLS regression and measurement-of-error analyses, and compare them with the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated.
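The two agreement analyses used in such validation studies (OLS regression and measurement of error) can be sketched as follows; the paired deformation values below are hypothetical illustrations, not the Tianjin measurements:

```python
import numpy as np

# Hypothetical paired deformation measurements (mm): leveling as the reference,
# InSAR as the estimate under validation.
leveling = np.array([-4.1, -3.2, -2.5, -1.8, -0.9, 0.3, 1.1, 2.4])
insar = np.array([-3.9, -3.4, -2.3, -1.9, -1.1, 0.2, 1.3, 2.2])

# OLS regression of InSAR on leveling: a slope near 1 and an intercept near 0
# indicate close agreement between the two techniques.
slope, intercept = np.polyfit(leveling, insar, 1)

# Measurement-of-error view: RMSE gives the typical disagreement in mm.
rmse = np.sqrt(np.mean((insar - leveling) ** 2))
```

The regression view checks for systematic bias (scale or offset), while the RMSE summarizes the remaining scatter; reporting both, as above, separates the two kinds of disagreement.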
He, Xiyang; Zhang, Xiaohong; Tang, Long; Liu, Wanke
2015-12-22
Many applications, such as marine navigation and land vehicle location, require real-time precise positioning under medium- or long-baseline conditions. In this contribution, we develop a model of real-time kinematic decimeter-level positioning with BeiDou Navigation Satellite System (BDS) triple-frequency signals over medium distances. The ambiguities of two extra-wide-lane (EWL) combinations are fixed first, and then a wide-lane (WL) combination is reformed from the two EWL combinations for positioning. Theoretical and empirical analyses of the ambiguity fixing rate and the positioning accuracy of the presented method are given. The results indicate that the ambiguity fixing rate can exceed 98% when using BDS medium-baseline observations, which is much higher than that of the dual-frequency Hatch-Melbourne-Wübbena (HMW) method. As for positioning accuracy, decimeter-level accuracy can be achieved with this method, which is comparable to that of the carrier-smoothed code differential positioning method. A signal-interruption simulation experiment indicates that the proposed method can realize fast high-precision positioning, whereas the carrier-smoothed code differential positioning method needs several hundred seconds to obtain high-precision results. We conclude that a relatively high accuracy and high fixing rate can be achieved with the triple-frequency WL method using single-epoch observations, a significant advantage compared with the traditional carrier-smoothed code differential positioning method.
Diagnostic accuracy of imaging devices in glaucoma: A meta-analysis.
Fallon, Monica; Valero, Oliver; Pazos, Marta; Antón, Alfonso
Imaging devices such as the Heidelberg retinal tomograph-3 (HRT3), scanning laser polarimetry (GDx), and optical coherence tomography (OCT) play an important role in glaucoma diagnosis. A systematic search for evidence-based data was performed for prospective studies evaluating the diagnostic accuracy of HRT3, GDx, and OCT. The diagnostic odds ratio (DOR) was calculated. To compare accuracy among instruments and parameters, a meta-analysis based on the hierarchical summary receiver-operating characteristic model was performed. The risk of bias was assessed using the quality assessment of diagnostic accuracy studies tool, version 2. Studies in the context of screening programs were used for qualitative analysis. Eighty-six articles were included. The DOR values were 29.5 for OCT, 18.6 for GDx, and 13.9 for HRT. The heterogeneity analysis demonstrated a statistically significant influence of degree of damage and ethnicity. Studies analyzing patients with earlier glaucoma showed poorer results. The risk of bias was high for patient selection. Screening studies showed lower sensitivity values and similar specificity values compared with those included in the meta-analysis. The classification capabilities of GDx, HRT, and OCT were high and similar across the 3 instruments. The highest estimated DOR was obtained with OCT. Diagnostic accuracy could be overestimated in studies including prediagnosed groups of subjects. Copyright © 2017 Elsevier Inc. All rights reserved.
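The diagnostic odds ratio used as the summary accuracy measure in such meta-analyses is a simple function of sensitivity and specificity; a minimal sketch with a hypothetical operating point (not one of the pooled estimates above):

```python
def diagnostic_odds_ratio(sens, spec):
    """DOR: odds of a positive test among the diseased divided by the odds
    of a positive test among the disease-free; higher = more discriminative."""
    return (sens / (1 - sens)) / ((1 - spec) / spec)

# Hypothetical operating point: 85% sensitivity, 90% specificity
dor = diagnostic_odds_ratio(0.85, 0.90)
```

A DOR of 1 means the test carries no information; values in the tens, like those reported for OCT, GDx, and HRT, indicate strong but imperfect discrimination.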
A Flexible Analysis Tool for the Quantitative Acoustic Assessment of Infant Cry
Reggiannini, Brian; Sheinkopf, Stephen J.; Silverman, Harvey F.; Li, Xiaoxue; Lester, Barry M.
2015-01-01
Purpose In this article, the authors describe and validate the performance of a modern acoustic analyzer specifically designed for infant cry analysis. Method Utilizing known algorithms, the authors developed a method to extract acoustic parameters describing infant cries from standard digital audio files. They used a frame rate of 25 ms with a frame advance of 12.5 ms. Cepstral-based acoustic analysis proceeded in 2 phases, computing frame-level data and then organizing and summarizing this information within cry utterances. Using signal detection methods, the authors evaluated the accuracy of the automated system to determine voicing and to detect fundamental frequency (F0) as compared to voiced segments and pitch periods manually coded from spectrogram displays. Results The system detected F0 with 88% to 95% accuracy, depending on tolerances set at 10 to 20 Hz. Receiver operating characteristic analyses demonstrated very high accuracy at detecting voicing characteristics in the cry samples. Conclusions This article describes an automated infant cry analyzer with high accuracy to detect important acoustic features of cry. A unique and important aspect of this work is the rigorous testing of the system’s accuracy as compared to ground-truth manual coding. The resulting system has implications for basic and applied research on infant cry development. PMID:23785178
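The windowing scheme described (25 ms frames advanced by 12.5 ms, i.e., 50% overlap) reduces to simple index bookkeeping; a sketch assuming a hypothetical 8 kHz sample rate:

```python
def frame_indices(n_samples, sr, frame_ms=25.0, advance_ms=12.5):
    """(start, end) sample indices for overlapping analysis frames,
    mirroring a 25 ms frame with a 12.5 ms advance."""
    frame = int(sr * frame_ms / 1000)   # samples per frame
    hop = int(sr * advance_ms / 1000)   # samples per frame advance
    out, start = [], 0
    while start + frame <= n_samples:   # drop any incomplete final frame
        out.append((start, start + frame))
        start += hop
    return out

# One second of audio at 8 kHz: 200-sample frames advancing by 100 samples
frames = frame_indices(8000, 8000)
```

Frame-level features (e.g., cepstral coefficients or F0 estimates) would then be computed per `(start, end)` slice and aggregated within cry utterances, as the abstract describes.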
NASA Technical Reports Server (NTRS)
Alexander, R. H. (Principal Investigator); Fitzpatrick, K. A.
1975-01-01
The author has identified the following significant results. Level 2 land use maps produced at three scales (1:24,000, 1:100,000, and 1:250,000) from high altitude photography were compared with each other and with point data obtained in the field. The same procedures were employed to determine the accuracy of the Level 1 land use maps produced at 1:250,000 from high altitude photography and color composite ERTS imagery. Accuracy of the Level 2 maps was 84.9 percent at 1:24,000, 77.4 percent at 1:100,000, and 73.0 percent at 1:250,000. Accuracy of the Level 1 1:250,000 maps was 76.5 percent for aerial photographs and 69.5 percent for ERTS imagery. The cost of Level 2 land use mapping at 1:24,000 was found to be high ($11.93 per sq km). Mapping at 1:100,000 ($1.75 per sq km) was about twice as expensive as mapping at 1:250,000 ($0.88), yet accuracy increased by only 4.4 percent.
Gillian, Jeffrey K.; Karl, Jason W.; Elaksher, Ahmed; Duniway, Michael C.
2017-01-01
Structure-from-motion (SfM) photogrammetry from unmanned aerial system (UAS) imagery is an emerging tool for repeat topographic surveying of dryland erosion. These methods are particularly appealing due to the ability to cover large landscapes compared to field methods and at reduced costs and finer spatial resolution compared to airborne laser scanning. Accuracy and precision of high-resolution digital terrain models (DTMs) derived from UAS imagery have been explored in many studies, typically by comparing image coordinates to surveyed check points or LiDAR datasets. In addition to traditional check points, this study compared 5 cm resolution DTMs derived from fixed-wing UAS imagery with a traditional ground-based method of measuring soil surface change called erosion bridges. We assessed accuracy by comparing the elevation values between DTMs and erosion bridges along thirty topographic transects each 6.1 m long. Comparisons occurred at two points in time (June 2014, February 2015), which enabled us to assess vertical accuracy with 3314 data points and vertical precision (i.e., repeatability) with 1657 data points. We found strong vertical agreement (accuracy) between the methods (RMSE 2.9 and 3.2 cm in June 2014 and February 2015, respectively) and high vertical precision for the DTMs (RMSE 2.8 cm). Our results from comparing SfM-generated DTMs to check points, together with the strong agreement with erosion bridge measurements, suggest that repeat UAS imagery and SfM processing could replace erosion bridges for a more synoptic landscape assessment of shifting soil surfaces for some studies. However, while collecting the UAS imagery and generating the SfM DTMs for this study was faster than collecting erosion bridge measurements, technical challenges related to the need for ground control networks and image processing requirements must be addressed before this technique could be applied effectively to large landscapes.
Frouzan, Arash; Masoumi, Kambiz; Delirroyfard, Ali; Mazdaie, Behnaz; Bagherzadegan, Elnaz
2017-01-01
Background Long bone fractures are common injuries caused by trauma. Some studies have demonstrated that ultrasound has a high sensitivity and specificity in the diagnosis of upper and lower extremity long bone fractures. Objective The aim of this study was to determine the accuracy of ultrasound compared with plain radiography in the diagnosis of upper and lower extremity long bone fractures in traumatic patients. Methods This cross-sectional study assessed 100 patients admitted to the emergency department of Imam Khomeini Hospital, Ahvaz, Iran with trauma to the upper and lower extremities, from September 2014 through October 2015. In all patients, ultrasound was performed first, followed by standard plain radiography of the upper and lower limbs. Data were analyzed by SPSS version 21 to determine the specificity and sensitivity. Results The mean ages of patients with upper and lower limb trauma were 31.43±12.32 years and 29.63±5.89 years, respectively. Radius fractures were the most frequent (27%). Sensitivity, specificity, positive predictive value, and negative predictive value of ultrasound compared with plain radiography in the diagnosis of upper extremity long bone fractures were 95.3%, 87.7%, 87.2% and 96.2%, respectively, and the highest accuracy was observed in left arm fractures (100%). Tibia and fibula fractures were the most frequent types (89.2%). Sensitivity, specificity, PPV and NPV of ultrasound compared with plain radiography in the diagnosis of lower extremity long bone fractures were 98.6%, 83%, 65.4% and 87.1%, respectively, and the highest accuracy was observed in men, lower ages and femoral fractures. Conclusion The results of this study showed that ultrasound compared with plain radiography has a high accuracy in the diagnosis of upper and lower extremity long bone fractures. PMID:28979747
Training and quality assurance with the Structured Clinical Interview for DSM-IV (SCID-I/P).
Ventura, J; Liberman, R P; Green, M F; Shaner, A; Mintz, J
1998-06-15
Accuracy in psychiatric diagnosis is critical for evaluating the suitability of the subjects for entry into research protocols and for establishing comparability of findings across study sites. However, training programs in the use of diagnostic instruments for research projects are not well systematized. Furthermore, little information has been published on the maintenance of interrater reliability of diagnostic assessments. At the UCLA Research Center for Major Mental Illnesses, a Training and Quality Assurance Program for SCID interviewers was used to evaluate interrater reliability and diagnostic accuracy. Although clinically experienced interviewers achieved better interrater reliability and overall diagnostic accuracy than neophyte interviewers, both groups were able to achieve and maintain high levels of interrater reliability, diagnostic accuracy, and interviewer skill. At the first quality assurance check after training, there were no significant differences between experienced and neophyte interviewers in interrater reliability or diagnostic accuracy. Standardization of training and quality assurance procedures within and across research projects may make research findings from study sites more comparable.
Empathic Embarrassment Accuracy in Autism Spectrum Disorder.
Adler, Noga; Dvash, Jonathan; Shamay-Tsoory, Simone G
2015-06-01
Empathic accuracy refers to the ability of perceivers to accurately share the emotions of protagonists. Using a novel task assessing embarrassment, the current study sought to compare levels of empathic embarrassment accuracy among individuals with autism spectrum disorders (ASD) with those of matched controls. To assess empathic embarrassment accuracy, we compared the level of embarrassment experienced by protagonists to the embarrassment felt by participants while watching the protagonists. The results show that while the embarrassment ratings of participants and protagonists were highly matched among controls, individuals with ASD failed to exhibit this matching effect. Furthermore, individuals with ASD rated their embarrassment higher than controls when viewing themselves and protagonists on film, but not while performing the task itself. These findings suggest that individuals with ASD tend to have higher ratings of empathic embarrassment, perhaps due to difficulties in emotion regulation that may account for their impaired empathic accuracy and aberrant social behavior. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Parent, Francois; Loranger, Sebastien; Mandal, Koushik Kanti; Iezzi, Victor Lambin; Lapointe, Jerome; Boisvert, Jean-Sébastien; Baiad, Mohamed Diaa; Kadoury, Samuel; Kashyap, Raman
2017-04-01
We demonstrate a novel approach to enhance the precision of surgical needle shape tracking based on distributed strain sensing using optical frequency domain reflectometry (OFDR). The precision enhancement is provided by using optical fibers with high scattering properties. Shape tracking of surgical tools using the strain sensing properties of optical fibers has seen increased attention in recent years. Most of the investigations in this field use fiber Bragg gratings (FBG), which can be used as discrete or quasi-distributed strain sensors. By using a truly distributed sensing approach (OFDR), preliminary results show that the attainable accuracy is comparable to accuracies reported in the literature using FBG sensors for tracking applications (~1 mm). We propose a technique that enhanced our accuracy by 47% using UV-exposed fibers, which have higher light scattering than unexposed standard single-mode fibers. Improving the experimental setup will further enhance the accuracy of shape tracking using OFDR and will contribute significantly to clinical applications.
Devaney, John; Barrett, Brian; Barrett, Frank; Redmond, John; O'Halloran, John
2015-01-01
Quantification of spatial and temporal changes in forest cover is an essential component of forest monitoring programs. Due to its cloud free capability, Synthetic Aperture Radar (SAR) is an ideal source of information on forest dynamics in countries with near-constant cloud-cover. However, few studies have investigated the use of SAR for forest cover estimation in landscapes with highly sparse and fragmented forest cover. In this study, the potential use of L-band SAR for forest cover estimation in two regions (Longford and Sligo) in Ireland is investigated and compared to forest cover estimates derived from three national (Forestry2010, Prime2, National Forest Inventory), one pan-European (Forest Map 2006) and one global forest cover (Global Forest Change) product. Two machine-learning approaches (Random Forests and Extremely Randomised Trees) are evaluated. Both Random Forests and Extremely Randomised Trees classification accuracies were high (98.1-98.5%), with differences between the two classifiers being minimal (<0.5%). Increasing levels of post classification filtering led to a decrease in estimated forest area and an increase in overall accuracy of SAR-derived forest cover maps. All forest cover products were evaluated using an independent validation dataset. For the Longford region, the highest overall accuracy was recorded with the Forestry2010 dataset (97.42%) whereas in Sligo, highest overall accuracy was obtained for the Prime2 dataset (97.43%), although accuracies of SAR-derived forest maps were comparable. Our findings indicate that spaceborne radar could aid inventories in regions with low levels of forest cover in fragmented landscapes. The reduced accuracies observed for the global and pan-continental forest cover maps in comparison to national and SAR-derived forest maps indicate that caution should be exercised when applying these datasets for national reporting.
NASA Astrophysics Data System (ADS)
Diesing, Markus; Green, Sophie L.; Stephens, David; Lark, R. Murray; Stewart, Heather A.; Dove, Dayton
2014-08-01
Marine spatial planning and conservation need underpinning with sufficiently detailed and accurate seabed substrate and habitat maps. Although multibeam echosounders enable us to map the seabed with high resolution and spatial accuracy, there is still a lack of fit-for-purpose seabed maps. This is due to the high costs involved in carrying out systematic seabed mapping programmes and the fact that the development of validated, repeatable, quantitative and objective methods of swath acoustic data interpretation is still in its infancy. We compared a wide spectrum of approaches, including manual interpretation, geostatistics, object-based image analysis and machine learning, to gain further insights into the accuracy and comparability of acoustic data interpretation approaches based on multibeam echosounder data (bathymetry, backscatter and derivatives) and seabed samples, with the aim of deriving seabed substrate maps. Sample data were split into a training and a validation data set to allow us to carry out an accuracy assessment. Overall thematic classification accuracy ranged from 67% to 76% and Cohen's kappa varied between 0.34 and 0.52. However, these differences were not statistically significant at the 5% level. Misclassifications were mainly associated with uncommon classes, which were rarely sampled. Map outputs were between 68% and 87% identical. To improve classification accuracy in seabed mapping, we suggest that studies of the factors affecting classification performance, as well as comparative studies testing the performance of different approaches, need to be carried out with a view to developing guidelines for selecting an appropriate method for a given dataset. In the meantime, classification accuracy might be improved by combining different techniques into hybrid approaches and multi-method ensembles.
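Cohen's kappa, reported above alongside overall thematic accuracy, corrects the observed agreement for the agreement expected by chance given the class marginals; a minimal sketch over a hypothetical two-class error matrix:

```python
def cohens_kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows: map, cols: reference)."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(k)) / n   # observed agreement
    pe = sum(                                  # chance agreement from marginals
        (sum(cm[i]) / n) * (sum(row[i] for row in cm) / n) for i in range(k)
    )
    return (po - pe) / (1 - pe)

# Hypothetical two-class matrix: 85% observed agreement, but kappa is lower
# because chance alone would produce 50% agreement here.
kappa = cohens_kappa([[40, 10], [5, 45]])
```

This is why a map can pair a respectable overall accuracy with a modest kappa, as in the 67-76% accuracy versus 0.34-0.52 kappa range above: rare classes contribute little to overall accuracy but strongly affect chance-corrected agreement.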
ERIC Educational Resources Information Center
Wallace, Gregory L.; Case, Laura K.; Harms, Madeline B.; Silvers, Jennifer A.; Kenworthy, Lauren; Martin, Alex
2011-01-01
Prior studies implicate facial emotion recognition (FER) difficulties among individuals with autism spectrum disorders (ASD); however, many investigations focus on FER accuracy alone and few examine ecological validity through links with everyday functioning. We compared FER accuracy and perceptual sensitivity (from neutral to full expression)…
Developing Local Oral Reading Fluency Cut Scores for Predicting High-Stakes Test Performance
ERIC Educational Resources Information Center
Grapin, Sally L.; Kranzler, John H.; Waldron, Nancy; Joyce-Beaulieu, Diana; Algina, James
2017-01-01
This study evaluated the classification accuracy of a second grade oral reading fluency curriculum-based measure (R-CBM) in predicting third grade state test performance. It also compared the long-term classification accuracy of local and publisher-recommended R-CBM cut scores. Participants were 266 students who were divided into a calibration…
Special Educators and Data Recording: What's Delayed Recording Got to Do With It?
ERIC Educational Resources Information Center
Jasper, Andrea D.; Taber Doughty, Teresa
2015-01-01
This study examined the effects of delayed recording on the accuracy of data recorded by special educators serving students with high- or low-incidence disabilities. A multi-element design was used to compare the accuracy of data recorded across three conditions: (a) immediately after a student's target behavior occurred, (b) immediately after the…
Machine learning of molecular properties: Locality and active learning
NASA Astrophysics Data System (ADS)
Gubaev, Konstantin; Podryabinkin, Evgeny V.; Shapeev, Alexander V.
2018-06-01
In recent years, machine learning techniques have shown great potential in various problems from a multitude of disciplines, including materials design and drug discovery. High computational speed on the one hand, and accuracy comparable to that of density functional theory on the other, make machine learning algorithms efficient for high-throughput screening through chemical and configurational space. However, the machine learning algorithms available in the literature require large training datasets to reach chemical accuracy and also show large errors for the so-called outliers: the out-of-sample molecules not well represented in the training set. In the present paper, we propose a new machine learning algorithm for predicting molecular properties that addresses these two issues: it is based on a local model of interatomic interactions providing high accuracy when trained on relatively small training sets, and an active learning algorithm for optimally choosing the training set that significantly reduces the errors for the outliers. We compare our model to the other state-of-the-art algorithms from the literature on widely used benchmark tests.
Därr, Roland; Kuhn, Matthias; Bode, Christoph; Bornstein, Stefan R; Pacak, Karel; Lenders, Jacques W M; Eisenhofer, Graeme
2017-06-01
To determine the accuracy of biochemical tests for the diagnosis of pheochromocytoma and paraganglioma. A search of the PubMed database was conducted for English-language articles published between October 1958 and December 2016 on the biochemical diagnosis of pheochromocytoma and paraganglioma using immunoassay methods or high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection for measurement of fractionated metanephrines in 24-h urine collections or plasma-free metanephrines obtained under seated or supine blood sampling conditions. Application of the Standards for Reporting of Diagnostic Accuracy Studies (STARD) criteria yielded 23 suitable articles. Summary receiver operating characteristic analysis revealed sensitivities/specificities of 94/93% and 91/93% for measurement of plasma-free metanephrines and urinary fractionated metanephrines using high-performance liquid chromatography or immunoassay methods, respectively. Partial areas under the curve were 0.947 vs. 0.911. Irrespective of the analytical method, sensitivity was significantly higher for supine compared with seated sampling, 95 vs. 89% (p < 0.02), while specificity was significantly higher for supine sampling compared with 24-h urine, 95 vs. 90% (p < 0.03). Partial areas under the curve were 0.942, 0.913, and 0.932 for supine sampling, seated sampling, and urine. Test accuracy increased linearly from 90 to 93% for 24-h urine at prevalence rates of 0.0-1.0, decreased linearly from 94 to 89% for seated sampling, and was constant at 95% for supine conditions. Current tests for the biochemical diagnosis of pheochromocytoma and paraganglioma show excellent diagnostic accuracy. Supine sampling conditions and measurement of plasma-free metanephrines using high-performance liquid chromatography with coulometric/electrochemical or tandem mass spectrometric detection provides the highest accuracy at all prevalence rates.
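The linear dependence of test accuracy on prevalence noted above follows directly from overall accuracy being a prevalence-weighted average of sensitivity and specificity; a sketch with hypothetical operating points (not the pooled estimates above):

```python
def accuracy_at_prevalence(sens, spec, prev):
    """Overall accuracy as a prevalence-weighted mix of sensitivity and specificity."""
    return prev * sens + (1 - prev) * spec

# Hypothetical test with sens=0.91, spec=0.93: accuracy moves linearly
# from specificity (at prevalence 0) to sensitivity (at prevalence 1).
acc_low = accuracy_at_prevalence(0.91, 0.93, 0.0)
acc_high = accuracy_at_prevalence(0.91, 0.93, 1.0)
```

A test whose sensitivity equals its specificity, like the supine-sampling figures above, therefore has an accuracy that is constant across all prevalence rates.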
Accuracy assessment of a mobile terrestrial lidar survey at Padre Island National Seashore
Lim, Samsung; Thatcher, Cindy A.; Brock, John C.; Kimbrow, Dustin R.; Danielson, Jeffrey J.; Reynolds, B.J.
2013-01-01
The higher point density and mobility of terrestrial laser scanning (light detection and ranging (lidar)) is desired when extremely detailed elevation data are needed for mapping vertically orientated complex features such as levees, dunes, and cliffs, or when highly accurate data are needed for monitoring geomorphic changes. Mobile terrestrial lidar scanners have the capability for rapid data collection on a larger spatial scale compared with tripod-based terrestrial lidar, but few studies have examined the accuracy of this relatively new mapping technology. For this reason, we conducted a field test at Padre Island National Seashore of a mobile lidar scanner mounted on a sport utility vehicle and integrated with a position and orientation system. The purpose of the study was to assess the vertical and horizontal accuracy of data collected by the mobile terrestrial lidar system, which is georeferenced to the Universal Transverse Mercator coordinate system and the North American Vertical Datum of 1988. To accomplish the study objectives, independent elevation data were collected by conducting a high-accuracy global positioning system survey to establish the coordinates and elevations of 12 targets spaced throughout the 12 km transect. These independent ground control data were compared to the lidar scanner-derived elevations to quantify the accuracy of the mobile lidar system. The performance of the mobile lidar system was also tested at various vehicle speeds and scan density settings (e.g. field of view and linear point spacing) to estimate the optimal parameters for desired point density. After adjustment of the lever arm parameters, the final point cloud accuracy was 0.060 m (east), 0.095 m (north), and 0.053 m (height). The very high density of the resulting point cloud was sufficient to map fine-scale topographic features, such as the complex shape of the sand dunes.
Krüger-Gottschalk, Antje; Knaevelsrud, Christine; Rau, Heinrich; Dyer, Anne; Schäfer, Ingo; Schellong, Julia; Ehring, Thomas
2017-11-28
The Posttraumatic Stress Disorder (PTSD) Checklist (PCL, now PCL-5) has recently been revised to reflect the new diagnostic criteria of the disorder. A clinical sample of trauma-exposed individuals (N = 352) was assessed with the Clinician Administered PTSD Scale for DSM-5 (CAPS-5) and the PCL-5. Internal consistencies and test-retest reliability were computed. To investigate diagnostic accuracy, we calculated receiver operating characteristic (ROC) curves. Confirmatory factor analyses (CFA) were performed to analyze the structural validity. Results showed high internal consistency (α = .95), high test-retest reliability (r = .91) and a high correlation with the total severity score of the CAPS-5, r = .77. In addition, the recommended cutoff of 33 on the PCL-5 showed high diagnostic accuracy when compared to the diagnosis established by the CAPS-5. CFAs comparing the DSM-5 model with alternative models (the three-factor solution and the dysphoria, anhedonia, externalizing behavior, and hybrid models) to account for the structural validity of the PCL-5 remained inconclusive. Overall, the findings show that the German PCL-5 is a reliable instrument with good diagnostic accuracy. However, more research evaluating the underlying factor structure is needed.
Wattjes, Mike P; Barkhof, Frederik
2012-11-01
High field MRI operating at 3 T is increasingly being used in the field of neuroradiology on the grounds that higher magnetic field strength should theoretically lead to a higher diagnostic accuracy in the diagnosis of several disease entities. This Editorial discusses the exhaustive review by Wardlaw and colleagues of research comparing 3 T MRI with 1.5 T MRI in the field of neuroradiology. Interestingly, the authors found no convincing evidence of improved image quality, diagnostic accuracy, or reduced total examination times using 3 T MRI instead of 1.5 T MRI. These findings are highly relevant since a new generation of high field MRI systems operating at 7 T has recently been introduced. • Higher magnetic field strengths do not necessarily lead to a better diagnostic accuracy. • Disadvantages of high field MR systems have to be considered in clinical practice. • Higher field strengths are needed for functional imaging, spectroscopy, etc. • Disappointingly there are few direct comparisons of 1.5 and 3 T MRI. • Whether the next high field MR generation (7 T) will improve diagnostic accuracy has to be investigated.
Diagnostic accuracy of high-definition CT coronary angiography in high-risk patients.
Iyengar, S S; Morgan-Hughes, G; Ukoumunne, O; Clayton, B; Davies, E J; Nikolaou, V; Hyde, C J; Shore, A C; Roobottom, C A
2016-02-01
To assess the diagnostic accuracy of computed tomography coronary angiography (CTCA) using a combination of high-definition CT (HD-CTCA) and high level of reader experience, with invasive coronary angiography (ICA) as the reference standard, in high-risk patients for the investigation of coronary artery disease (CAD). Three hundred high-risk patients underwent HD-CTCA and ICA. Independent experts evaluated the images for the presence of significant CAD, defined primarily as the presence of moderate (≥ 50%) stenosis and secondarily as the presence of severe (≥ 70%) stenosis in at least one coronary segment, in a blinded fashion. HD-CTCA was compared to ICA as the reference standard. No patients were excluded. Two hundred and six patients (69%) had moderate and 178 (59%) had severe stenosis in at least one vessel at ICA. The sensitivity, specificity, positive predictive value, and negative predictive value were 97.1%, 97.9%, 99% and 93.9% for moderate stenosis, and 98.9%, 93.4%, 95.7% and 98.3%, for severe stenosis, on a per-patient basis. The combination of HD-CTCA and experienced readers applied to a high-risk population results in high diagnostic accuracy comparable to that of ICA. Modern generation CT systems in experienced hands might be considered for an expanded role. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Thematic accuracy assessment of the 2011 National Land Cover Database (NLCD)
Wickham, James; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Sorenson, Daniel G.; Granneman, Brian J.; Poss, Richard V.; Baer, Lori Anne
2017-01-01
Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three, single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest loss, forest gain, and urban gain had user's accuracies that exceeded 70%. Lower user's accuracies for the other change reporting themes may be attributable to the difficulty in determining the context of grass (e.g., open urban, grassland, agriculture) and between the components of the forest-shrubland-grassland gradient at either the mapping phase, reference label assignment phase, or both. 
NLCD 2011 user's accuracies for forest loss, forest gain, and urban gain compare favorably with results from other land cover change accuracy assessments.
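Overall, user's, and producer's accuracies of the kind reported above all come from the same error matrix: overall accuracy is the diagonal sum over the grand total, user's accuracy divides each diagonal cell by its map (row) total, and producer's accuracy divides it by its reference (column) total. A minimal sketch with invented counts (the matrix and class names below are illustrative, not NLCD data):

```python
def accuracies(matrix, labels):
    """Overall, user's and producer's accuracy from an error matrix
    whose rows are map labels and columns are reference labels."""
    total = sum(sum(row) for row in matrix)
    overall = sum(matrix[i][i] for i in range(len(labels))) / total
    users = {labels[i]: matrix[i][i] / sum(matrix[i])
             for i in range(len(labels))}
    producers = {labels[j]: matrix[j][j] / sum(row[j] for row in matrix)
                 for j in range(len(labels))}
    return overall, users, producers

m = [[85, 10, 5],   # mapped forest
     [8, 90, 2],    # mapped urban
     [7, 5, 88]]    # mapped water
overall, users, producers = accuracies(m, ["forest", "urban", "water"])
```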
Evaluating arguments during instigations of defence motivation and accuracy motivation.
Liu, Cheng-Hong
2017-05-01
When people evaluate the strength of an argument, their motivations are likely to influence the evaluation. However, few studies have specifically investigated the influences of motivational factors on argument evaluation. This study examined the effects of defence and accuracy motivations on argument evaluation. According to the compatibility between the advocated positions of arguments and participants' prior beliefs and the objective strength of arguments, participants evaluated four types of arguments: compatible-strong, compatible-weak, incompatible-strong, and incompatible-weak arguments. Experiment 1 revealed that participants possessing a high defence motivation rated compatible-weak arguments as stronger and incompatible-strong ones as weaker than participants possessing a low defence motivation. However, the strength ratings between the high and low defence groups regarding both compatible-strong and incompatible-weak arguments were similar. Experiment 2 revealed that when participants possessed a high accuracy motivation, they rated compatible-weak arguments as weaker and incompatible-strong ones as stronger than when they possessed a low accuracy motivation. However, participants' ratings on both compatible-strong and incompatible-weak arguments were similar when comparing high and low accuracy conditions. The results suggest that defence and accuracy motivations are two major motives influencing argument evaluation. However, they primarily influence the evaluation results for compatible-weak and incompatible-strong arguments, but not for compatible-strong and incompatible-weak arguments. © 2016 The British Psychological Society.
A Novel Energy-Efficient Approach for Human Activity Recognition.
Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Peng, Ao; Tang, Biyu; Lu, Hai; Shi, Haibin; Zheng, Huiru
2017-09-08
In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is lower than the activity frequency, i.e., when the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and 6 females) and achieved an average recognition accuracy of around 96.0%. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz. The proposed low sampling rate approach can greatly reduce power consumption while maintaining high activity recognition accuracy. The composition of power consumption in an online ARS is also investigated in this paper.
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
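The scale correction described above exploits the fact that monocular SFM recovers translation only up to an unknown scale: dividing the known mounting height of the camera above the road by the height of the estimated ground plane fixes that scale. A minimal sketch with hypothetical numbers (not the paper's implementation, which additionally fuses multiple cues with adaptive covariances):

```python
def correct_scale(translation, est_cam_height, true_cam_height):
    """Rescale a monocular-SFM translation to metric units.

    The ratio of the known real camera height to the camera height
    implied by the estimated ground plane gives the metric scale.
    """
    s = true_cam_height / est_cam_height
    return [s * t for t in translation]

# Hypothetical numbers: SFM places the camera 0.5 (unitless) above the
# estimated ground plane, but it is mounted 1.5 m above the road.
t_metric = correct_scale([0.2, 0.0, 1.0],
                         est_cam_height=0.5, true_cam_height=1.5)
```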
a New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Li, J.; Wan, Y.; Gao, X.
2012-07-01
With improvements in the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can acquire highly precise, densely distributed point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges, and other structures. In this paper, a new approach using a point-cloud segmentation method based on the vectors of neighboring points and a surface-fitting method based on moving least squares was proposed and applied to subway tunnel deformation monitoring in Tianjin, in combination with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered with a linearized iterative least-squares approach to improve the registration accuracy, and several control points were acquired by total station (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on the vectors of neighboring points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least-squares algorithm. A series of parallel sections obtained from a temporal series of fitted tunnel surfaces were then compared to analyze the deformation. Finally, the results of the approach in the z direction were compared with a fiber-optic displacement sensor and the results in the x and y directions with TS; the comparison showed accuracy errors in the x, y, and z directions of about 1.5 mm, 2 mm, and 1 mm, respectively. The new approach using high-resolution TLS can therefore meet the demands of subway tunnel deformation monitoring.
Reference-based phasing using the Haplotype Reference Consortium panel.
Loh, Po-Ru; Danecek, Petr; Palamara, Pier Francesco; Fuchsberger, Christian; A Reshef, Yakir; K Finucane, Hilary; Schoenherr, Sebastian; Forer, Lukas; McCarthy, Shane; Abecasis, Goncalo R; Durbin, Richard; L Price, Alkes
2016-11-01
Haplotype phasing is a fundamental problem in medical and population genetics. Phasing is generally performed via statistical phasing in a genotyped cohort, an approach that can yield high accuracy in very large cohorts but attains lower accuracy in smaller cohorts. Here we instead explore the paradigm of reference-based phasing. We introduce a new phasing algorithm, Eagle2, that attains high accuracy across a broad range of cohort sizes by efficiently leveraging information from large external reference panels (such as the Haplotype Reference Consortium; HRC) using a new data structure based on the positional Burrows-Wheeler transform. We demonstrate that Eagle2 attains a ∼20× speedup and ∼10% increase in accuracy compared to reference-based phasing using SHAPEIT2. On European-ancestry samples, Eagle2 with the HRC panel achieves >2× the accuracy of 1000 Genomes-based phasing. Eagle2 is open source and freely available for HRC-based phasing via the Sanger Imputation Service and the Michigan Imputation Server.
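The positional Burrows-Wheeler transform underlying Eagle2 keeps, at every site, the panel haplotypes sorted by their reversed prefixes, so haplotypes sharing a long match ending at that site become adjacent and can be located in linear time. A toy sketch of that sort maintenance (illustrative only, not the Eagle2 codebase, which adds match tracking and many engineering optimizations):

```python
def pbwt_orders(haplotypes):
    """Positional Burrows-Wheeler transform over biallelic haplotypes.

    After processing site k, `order` lists haplotype indices sorted by
    the reverse of their prefixes up to site k; a stable partition by
    the allele at each site maintains this invariant site by site.
    """
    n_sites = len(haplotypes[0])
    order = list(range(len(haplotypes)))   # sorted by the empty prefix
    orders = []
    for k in range(n_sites):
        zeros = [i for i in order if haplotypes[i][k] == 0]
        ones = [i for i in order if haplotypes[i][k] == 1]
        order = zeros + ones               # stable partition by allele
        orders.append(order)
    return orders

haps = [(0, 1, 0), (1, 1, 0), (0, 0, 1), (0, 1, 1)]
orders = pbwt_orders(haps)
```

Haplotypes 0 and 3 share the prefix (0, 1) and are adjacent in the ordering after site 1.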
Validation of geometric accuracy of Global Land Survey (GLS) 2000 data
Rengarajan, Rajagopalan; Sampath, Aparajithan; Storey, James C.; Choate, Michael J.
2015-01-01
The Global Land Survey (GLS) 2000 data were generated from Geocover™ 2000 data with the aim of producing a global data set with an accuracy better than 25 m Root Mean Square Error (RMSE). An assessment and validation of the accuracy of the GLS 2000 data set, and of its co-registration with the Geocover™ 2000 data set, is presented here. Since few global data sets of higher nominal accuracy than the GLS 2000 are available, the data were assessed in three tiers. In the first tier, the data were compared with the Geocover™ 2000 data. This comparison provided a means of localizing regions of larger differences. In the second tier, the GLS 2000 data were compared with systematically corrected Landsat-7 scenes acquired in a period when the spacecraft pointing information was extremely accurate. These comparisons localize regions where the data are consistently offset, which may indicate regions of higher error. The third tier consisted of comparing the GLS 2000 data against higher-accuracy reference data: Digital Ortho Quads over the United States, orthorectified SPOT data over Australia, and high-accuracy check points obtained by triangulation bundle adjustment of Landsat-7 images over selected sites around the world. The study reveals that the geometric errors in the Geocover™ 2000 data have been rectified in the GLS 2000 data, and that the accuracy of the GLS 2000 data can be expected to be better than 25 m RMSE for most of its constituent scenes.
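The 25 m benchmark above is a horizontal RMSE over check-point offsets, which can be computed as below. The offsets are hypothetical; the actual assessment used large samples of check points per scene:

```python
import math

def rmse(dx, dy):
    """Horizontal RMSE of geolocation offsets (dx, dy in metres)
    between check points and their reference positions."""
    sq = [x * x + y * y for x, y in zip(dx, dy)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical offsets from a handful of check points:
err = rmse([3.0, -4.0, 12.0], [4.0, 3.0, -5.0])
```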
Longcroft-Wheaton, G; Brown, J; Cowlishaw, D; Higgins, B; Bhandari, P
2012-10-01
The resolution of endoscopes has increased in recent years. Modern Fujinon colonoscopes have a charge-coupled device (CCD) pixel density of 650,000 pixels compared with the 410,000-pixel CCD in standard-definition scopes. Acquiring high-definition scopes represents a significant capital investment and their clinical value remains uncertain. The aim of the current study was to investigate the impact of high-definition endoscopes on the in vivo histology prediction of colonic polyps. Colonoscopy procedures were performed using Fujinon colonoscopes and the EPX-4400 processor. Procedures were randomized to be performed using either a standard-definition EC-530 colonoscope or high-definition EC-530 and EC-590 colonoscopes. Polyps of <10 mm were assessed using both white light imaging (WLI) and flexible spectral imaging color enhancement (FICE), and the predicted diagnosis was recorded. Polyps were removed and sent for histological analysis by a pathologist who was blinded to the endoscopic diagnosis. The predicted diagnosis was compared with the histology to calculate the accuracy, sensitivity, and specificity of in vivo assessment using either standard- or high-definition scopes. A total of 293 polyps of <10 mm were examined: 150 polyps using the standard-definition colonoscope and 143 polyps using high-definition colonoscopes. There was no difference in sensitivity, specificity or accuracy between the two scopes when WLI was used (standard vs. high: accuracy 70% [95% CI 62–77] vs. 73% [95% CI 65–80]; P=0.61). When FICE was used, high-definition colonoscopes showed a sensitivity of 93% compared with 83% for standard-definition colonoscopes (P=0.048); specificity was 81% and 82%, respectively. There was no difference between high- and standard-definition colonoscopes when white light was used, but FICE significantly improved the in vivo diagnosis of small polyps when high-definition scopes were used compared with standard definition.
An Improved Method of AGM for High Precision Geolocation of SAR Images
NASA Astrophysics Data System (ADS)
Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.
2018-05-01
In order to take full advantage of SAR images, it is necessary to determine the high-precision geolocation of each image. During the geometric correction of images, precise geolocation is important for ensuring the accuracy of the correction and for extracting effective mapping information. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. The method is based on the analytical geolocation method (AGM) proposed by X. K. Yuan, which aims at solving the RD model. Tests were conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocations with the positions determined from a high-precision orthophoto, the results indicate that an accuracy of 50 m is attainable with this method. Error sources are analyzed and some recommendations for improving image location accuracy in future spaceborne SARs are given.
Monitoring Building Deformation with InSAR: Experiments and Validation
Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng
2016-01-01
Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied to monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR for building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments on two landmark buildings, the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples for comparing InSAR and leveling approaches to building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering the diversity in both direction and spatial distribution, and were compared with true leveling values in both Ordinary Least Squares (OLS) regression and measurement-of-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between the InSAR results and the leveling values. At the same time, the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that millimeter-level accuracy can be achieved with the InSAR technique when measuring building deformation. We discuss the differences in accuracy between the OLS regression and measurement-of-error analyses, and compare the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated. PMID:27999403
Lin, You-Yu; Hsieh, Chia-Hung; Chen, Jiun-Hong; Lu, Xuemei; Kao, Jia-Horng; Chen, Pei-Jer; Chen, Ding-Shinn; Wang, Hurng-Yi
2017-04-26
The accuracy of metagenomic assembly is usually compromised by high levels of polymorphism, because divergent reads from the same genomic region are recognized as different loci when sequenced and assembled together. A viral quasispecies is a group of abundant and diversified, genetically related viruses found in a single carrier. Current mainstream assembly methods, such as Velvet and SOAPdenovo, were not originally intended for the assembly of such metagenomic data, and new methods are therefore needed to provide accurate and informative assembly results. In this study, we present a hybrid method for assembling highly polymorphic data that combines the partial de novo-reference assembly (PDR) strategy and the BLAST-based assembly pipeline (BBAP). The PDR strategy generates in situ reference sequences through de novo assembly of a randomly extracted partial data set, which is subsequently used for reference assembly of the full data set. BBAP employs a greedy algorithm to assemble polymorphic reads. We used 12 hepatitis B virus quasispecies NGS data sets from a previous study to assess and compare the performance of PDR and BBAP. Analyses suggest that the high polymorphism of a full metagenomic data set leads to fragmented de novo assembly results, whereas the biased or limited representation of external reference sequences includes fewer reads in the assembly, with lower assembly accuracy and variation sensitivity. In comparison, the PDR-generated in situ reference sequence incorporated more reads into the final PDR assembly of the full metagenomic data set, along with greater accuracy and higher variation sensitivity. BBAP assembly results also suggest higher assembly efficiency and accuracy compared to other assembly methods. Additionally, the BBAP assembly recovered HBV structural variants that were not observed among the assembly results of the other methods. Together, the PDR/BBAP assembly results were significantly better than those of the other compared methods.
Both PDR and BBAP independently increased the assembly efficiency and accuracy of highly polymorphic data, and assembly performances were further improved when used together. BBAP also provides nucleotide frequency information. Together, PDR and BBAP provide powerful tools for metagenomic data studies.
ERIC Educational Resources Information Center
Klinger, Don A.; Rogers, W. Todd
2003-01-01
The estimation accuracy of procedures based on classical test score theory and item response theory (generalized partial credit model) were compared for examinations consisting of multiple-choice and extended-response items. Analysis of British Columbia Scholarship Examination results found an error rate of about 10 percent for both methods, with…
ERIC Educational Resources Information Center
Morris, Darrell; Pennell, Ashley M.; Perney, Jan; Trathen, Woodrow
2018-01-01
This study compared reading rate to reading fluency (as measured by a rating scale). After listening to first graders read short passages, we assigned an overall fluency rating (low, average, or high) to each reading. We then used predictive discriminant analyses to determine which of five measures--accuracy, rate (objective); accuracy, phrasing,…
NASA Technical Reports Server (NTRS)
Pollmeier, Vincent M.; Kallemeyn, Pieter H.; Thurman, Sam W.
1993-01-01
The application of high-accuracy S/S-band (2.1 GHz uplink/2.3 GHz downlink) ranging to orbit determination with relatively short data arcs is investigated for the approach phase of each of the Galileo spacecraft's two Earth encounters (8 December 1990 and 8 December 1992). Analysis of S-band ranging data from Galileo indicated that under favorable signal levels, meter-level precision was attainable. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. Explicit modeling of ranging bias parameters for each station pass is used to largely remove systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy achieved using the precision range filtering strategy proved markedly better, when compared to post-flyby reconstructions, than did solutions utilizing a traditional Doppler/range filter strategy. In addition, the navigation accuracy achieved with precision ranging was comparable to that obtained using delta-Differenced One-Way Range, an interferometric measurement of spacecraft angular position relative to a natural radio source, which was also used operationally.
Accuracy analysis for triangulation and tracking based on time-multiplexed structured light.
Wagner, Benjamin; Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2014-08-01
The authors' research group is currently developing a new optical head tracking system for intracranial radiosurgery. This tracking system utilizes infrared laser light to measure features of the soft tissue on the patient's forehead. These features are intended to offer highly accurate registration with respect to the rigid skull structure by means of compensating for the soft tissue. In this context, the system also has to be able to quickly generate accurate reconstructions of the skin surface. For this purpose, the authors have developed a laser scanning device which uses time-multiplexed structured light to triangulate surface points. The accuracy of the authors' laser scanning device is analyzed and compared for different triangulation methods. These methods are given by the Linear-Eigen method and a nonlinear least squares method. Since Microsoft's Kinect camera represents an alternative for fast surface reconstruction, the authors' results are also compared to the triangulation accuracy of the Kinect device. Moreover, the authors' laser scanning device was used for tracking of a rigid object to determine how this process is influenced by the remaining triangulation errors. For this experiment, the scanning device was mounted to the end-effector of a robot to be able to calculate a ground truth for the tracking. The analysis of the triangulation accuracy of the authors' laser scanning device revealed a root mean square (RMS) error of 0.16 mm. In comparison, the analysis of the triangulation accuracy of the Kinect device revealed a RMS error of 0.89 mm. It turned out that the remaining triangulation errors only cause small inaccuracies for the tracking of a rigid object. Here, the tracking accuracy was given by a RMS translational error of 0.33 mm and a RMS rotational error of 0.12°. This paper shows that time-multiplexed structured light can be used to generate highly accurate reconstructions of surfaces. 
Furthermore, the reconstructed point sets can be used for high-accuracy tracking of objects, meeting the strict requirements of intracranial radiosurgery.
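The Linear-Eigen method compared above triangulates a point by stacking two homogeneous projection constraints per view and taking the right singular vector of the smallest singular value. A self-contained sketch with two hypothetical normalized cameras one unit apart (illustrative geometry, not the authors' calibration):

```python
import numpy as np

def triangulate_linear_eigen(P1, P2, x1, x2):
    """Linear-Eigen (DLT) triangulation from two views.

    Each view contributes the rows u*P[2]-P[0] and v*P[2]-P[1] of the
    homogeneous system A X = 0; the singular vector of the smallest
    singular value of A is the homogeneous 3D point.
    """
    rows = []
    for P, (u, v) in ((P1, x1), (P2, x2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    X = np.linalg.svd(A)[2][-1]   # last row of Vh
    return X[:3] / X[3]

# Two hypothetical normalized cameras, one unit apart along x:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
x1 = (X_true[0] / X_true[2], X_true[1] / X_true[2])           # image point, view 1
x2 = ((X_true[0] - 1.0) / X_true[2], X_true[1] / X_true[2])   # image point, view 2
X_hat = triangulate_linear_eigen(P1, P2, x1, x2)
```

With noise-free measurements the linear solution recovers the point exactly; the nonlinear least-squares variant compared in the paper refines this estimate under measurement noise.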
NASA Astrophysics Data System (ADS)
Boudria, Yacine; Feltane, Amal; Besio, Walter
2014-06-01
Objective. Brain-computer interfaces (BCIs) based on electroencephalography (EEG) have been shown to accurately detect mental activities, but the acquisition of high levels of control require extensive user training. Furthermore, EEG has low signal-to-noise ratio and low spatial resolution. The objective of the present study was to compare the accuracy between two types of BCIs during the first recording session. EEG and tripolar concentric ring electrode (TCRE) EEG (tEEG) brain signals were recorded and used to control one-dimensional cursor movements. Approach. Eight human subjects were asked to imagine either ‘left’ or ‘right’ hand movement during one recording session to control the computer cursor using TCRE and disc electrodes. Main results. The obtained results show a significant improvement in accuracies using TCREs (44%-100%) compared to disc electrodes (30%-86%). Significance. This study developed the first tEEG-based BCI system for real-time one-dimensional cursor movements and showed high accuracies with little training.
Accuracy of four commonly used color vision tests in the identification of cone disorders.
Thiadens, Alberta A H J; Hoyng, Carel B; Polling, Jan Roelof; Bernaerts-Biskop, Riet; van den Born, L Ingeborgh; Klaver, Caroline C W
2013-04-01
To determine which color vision test is most appropriate for the identification of cone disorders. In a clinic-based study, four commonly used color vision tests were compared between patients with cone dystrophy (n = 37), controls with normal visual acuity (n = 35), and controls with low vision (n = 39) and legal blindness (n = 11). Mean outcome measures were specificity, sensitivity, positive predictive value and discriminative accuracy of the Ishihara test, Hardy-Rand-Rittler (HRR) test, and the Lanthony and Farnsworth Panel D-15 tests. In the comparison between cone dystrophy and all controls, sensitivity, specificity and predictive value were highest for the HRR and Ishihara tests. When patients were compared to controls with normal vision, discriminative accuracy was highest for the HRR test (c-statistic for PD-axes 1, for T-axis 0.851). When compared to controls with poor vision, discriminative accuracy was again highest for the HRR test (c-statistic for PD-axes 0.900, for T-axis 0.766), followed by the Lanthony Panel D-15 test (c-statistic for PD-axes 0.880, for T-axis 0.500) and Ishihara test (c-statistic 0.886). Discriminative accuracies of all tests did not further decrease when patients were compared to controls who were legally blind. The HRR, Lanthony Panel D-15 and Ishihara all have a high discriminative accuracy to identify cone disorders, but the highest scores were for the HRR test. Poor visual acuity slightly decreased the accuracy of all tests. Our advice is to use the HRR test since this test also allows for evaluation of all three color axes and quantification of color defects.
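The c-statistic used above is the probability that a randomly chosen affected subject scores higher on the test than a randomly chosen control, with ties counted as half; a value of 1 means perfect discrimination. A small sketch with invented scores (not the study's data):

```python
def c_statistic(scores_pos, scores_neg):
    """Concordance (c-statistic / AUC) over all case-control pairs."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5     # ties count as half a concordant pair
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical error counts on one color axis (higher = more errors):
auc = c_statistic([8, 9, 7, 9], [2, 3, 7, 1])
```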
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to conflate the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
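Stability selection, the SS half of PROMISE, reruns a base selector on many random half-subsamples and keeps only the features selected in a large fraction of runs. The sketch below substitutes a simple top-k absolute-correlation selector for the lasso so it stays dependency-free; the subsample-and-threshold logic is the part being illustrated, and all parameter values are illustrative assumptions:

```python
import random

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def stability_select(X, y, n_sub=50, top_k=1, threshold=0.6, seed=0):
    """Keep features chosen by the base selector (top_k by absolute
    correlation, standing in for the lasso) in >= threshold of
    n_sub random half-subsamples."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    counts = [0] * p
    for _ in range(n_sub):
        idx = rng.sample(range(n), n // 2)
        Xs = [X[i] for i in idx]
        ys = [y[i] for i in idx]
        scores = [abs(corr([row[j] for row in Xs], ys)) for j in range(p)]
        for j in sorted(range(p), key=lambda j: -scores[j])[:top_k]:
            counts[j] += 1
    return {j for j in range(p) if counts[j] / n_sub >= threshold}

rng = random.Random(1)
X = [[rng.gauss(0, 1) for _ in range(5)] for _ in range(40)]
y = [row[0] for row in X]            # only feature 0 carries signal
selected = stability_select(X, y)
```

Because the signal feature is selected in every subsample while noise features rarely win, the selection-frequency threshold controls false positives, which is the property PROMISE combines with CV.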
Mayrhofer, Thomas; Puchner, Stefan B.; Lu, Michael T.; Maurovich-Horvat, Pal; Pope, J. Hector; Truong, Quynh A.; Udelson, James E.; Peacock, W. Frank; White, Charles S.; Woodard, Pamela K.; Fleg, Jerome L.; Nagurney, John T.; Januzzi, James L.; Hoffmann, Udo
2015-01-01
Objectives We compared diagnostic accuracy of conventional troponin/traditional coronary artery disease (CAD) assessment and highly sensitive troponin (hsTn) I/advanced CAD assessment for acute coronary syndrome (ACS) during the index hospitalization. Background HsTn I and advanced assessment of CAD using coronary computed tomography angiography (CTA) are promising candidates to improve the accuracy of emergency department (ED) evaluation of patients with suspected ACS. Methods We performed an observational cohort study in patients with suspected ACS enrolled in the ROMICAT II trial and randomized to coronary CTA who also had hsTn I measurement at the time of the ED presentation. We assessed coronary CTA for traditional (no CAD, non-obstructive CAD, ≥50% stenosis) and advanced features of CAD (≥50% stenosis, high-risk plaque features: positive remodeling, low <30 Hounsfield Units plaque, napkin ring sign, spotty calcium). Results Of 160 patients (mean age: 53±8 years, 40% women) 10.6% were diagnosed with ACS. The ACS rate in patients with HsTn I below the limit of detection (n=9, 5.6%), intermediate (n=139, 86.9%), and above the 99th percentile (n=12, 7.5%) was 0%, 8.6%, and 58.3%, respectively. Absence of ≥50% stenosis and high-risk plaque ruled out ACS in patients with intermediate hsTn I (n=87, 54.4%; ACS rate 0%), while patients with both ≥50% stenosis and high-risk plaque were at high risk (n=13, 8.1%; ACS rate 69.2%) and patients with either ≥50% stenosis or high-risk plaque were at intermediate risk for ACS (n=39, 24.4%; ACS rate 7.7%). HsTn I/advanced coronary CTA assessment significantly improved the diagnostic accuracy for ACS as compared to conventional troponin/traditional coronary CTA (AUC 0.84, 95%CI 0.80-0.88 vs. 0.74, 95%CI 0.70-0.78; p<0.001). 
Conclusions HsTn I at the time of presentation followed by early advanced coronary CTA assessment improves the risk stratification and diagnostic accuracy for ACS as compared to conventional troponin and traditional coronary CTA assessment. (Multicenter Study to Rule Out Myocardial Infarction/Ischemia by Cardiac Computed Tomography [ROMICAT-II]; NCT01084239) PMID:26476506
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
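The fast distance estimation stage scores pairs of unaligned sequences by the k-mers they share, so no alignment is needed to build the first guide tree. A simplified sketch of one common form of the fractional shared-k-mer distance (MUSCLE itself uses a compressed amino-acid alphabet and further transforms of this fraction, so treat the details below as illustrative):

```python
from collections import Counter

def kmer_distance(a, b, k=3):
    """1 minus the fraction of k-mers shared between two sequences,
    normalized by the shorter sequence's k-mer count."""
    ca = Counter(a[i:i + k] for i in range(len(a) - k + 1))
    cb = Counter(b[i:i + k] for i in range(len(b) - k + 1))
    shared = sum(min(ca[m], cb[m]) for m in ca)
    return 1.0 - shared / (min(len(a), len(b)) - k + 1)

# Hypothetical peptide fragments: one substitution vs. unrelated.
d_close = kmer_distance("MKVLITGAGSG", "MKVLITGAGTG")
d_far = kmer_distance("MKVLITGAGSG", "PQRWYHNEDCF")
```

Similar sequences share most k-mers and get a small distance, while unrelated ones approach 1.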
Reddy, S Srikanth; Revathi, Kakkirala; Reddy, S Kranthikumar
2013-01-01
The conventional casting technique is time-consuming compared with the accelerated casting technique. In this study, the marginal accuracy of castings fabricated using the accelerated and conventional casting techniques was compared. Twenty wax patterns were fabricated, and the marginal discrepancy between the die and the patterns was measured using an optical stereomicroscope. Ten wax patterns were used for conventional casting and the rest for accelerated casting. A nickel-chromium alloy was used for the castings. The castings were measured for marginal discrepancies and compared. Castings fabricated using the conventional casting technique showed less vertical marginal discrepancy than castings fabricated by the accelerated casting technique, and the difference was statistically highly significant. The conventional casting technique produced better marginal accuracy than accelerated casting, but the vertical marginal discrepancy produced by the accelerated casting technique was well within the maximum clinical tolerance limits. The accelerated casting technique can therefore be used to save laboratory time and fabricate clinical crowns with acceptable vertical marginal discrepancy.
Ender, Andreas; Mehl, Albert
2015-01-01
To investigate the accuracy of conventional and digital impression methods used to obtain full-arch impressions, using an in vitro reference model. Eight different conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; and irreversible hydrocolloid, ALG) and digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; and Lava COS, LAV) full-arch impressions were obtained from a reference model whose morphology was known from a highly accurate reference scanner. The impressions obtained were then compared with the original geometry of the reference model and within each test group. A point-to-point measurement of the model surface using the signed nearest-neighbour method yielded a mean (10%-90%)/2 percentile value for the difference between impression and original model (trueness) as well as for the difference between impressions within a test group (precision). Trueness values ranged from 11.5 μm (VSE) to 60.2 μm (POE), and precision ranged from 12.3 μm (VSE) to 66.7 μm (POE). Among the test groups, VSE, VSES, and CER showed the highest trueness and precision. The deviation pattern varied with the impression method. Conventional impressions showed high accuracy across the full dental arch in all groups except POE and ALG. Conventional and digital impression methods differ in full-arch accuracy, with digital systems showing higher local deviations of the full-arch model. Digital intraoral impression systems do not show superior accuracy compared with highly accurate conventional impression techniques; however, they provide excellent clinical results within their indications when the correct scanning technique is applied.
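The (10%-90%)/2 percentile summary used above can be sketched as follows. This is a simplified stand-in for the study's signed nearest-neighbour metric: deviations here are unsigned nearest-point distances (true signing needs surface normals) and the nearest-neighbour search is brute force:

```python
import numpy as np

def deviation_spread(test_pts, ref_pts):
    """Summarize surface deviations as the (10%-90%)/2 percentile spread.

    Simplification of the study's metric: unsigned nearest-point
    distances stand in for signed deviations, and the search is O(N*M)
    brute force rather than a spatial index.
    """
    diff = test_pts[:, None, :] - ref_pts[None, :, :]
    d = np.linalg.norm(diff, axis=2).min(axis=1)   # nearest-neighbour distance per point
    p10, p90 = np.percentile(d, [10, 90])
    return (p90 - p10) / 2.0
```

Trueness would use the reference-scanner cloud as `ref_pts`; precision would compare pairs of impressions within a test group.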
Land use/land cover mapping using multi-scale texture processing of high resolution data
NASA Astrophysics Data System (ADS)
Wong, S. N.; Sarker, M. L. R.
2014-02-01
Land use/land cover (LULC) maps are useful for many purposes, and remote sensing techniques have long been used for LULC mapping with different types of data and image processing techniques. In this research, high resolution satellite data from IKONOS were used for land use/land cover mapping in Johor Bahru city and adjacent areas (Malaysia). Spatial image processing was carried out using six texture algorithms (mean, variance, contrast, homogeneity, entropy, and GLDV angular second moment) with five different window sizes (from 3×3 to 11×11). Three different classifiers, i.e. Maximum Likelihood Classifier (MLC), Artificial Neural Network (ANN) and Support Vector Machine (SVM), were used to classify the texture parameters of different spectral bands individually and of all bands together, using the same training and validation samples. Results indicated that texture parameters of all bands together generally performed better for LULC mapping (overall accuracy = 90.10%), whereas a single spectral band achieved an overall accuracy of only 72.67%. This research also found an improvement of the overall accuracy (OA) using the single-texture multi-scale approach (OA = 89.10%) and the single-scale multi-texture approach (OA = 90.10%) compared with all original bands (OA = 84.02%), because of the complementary information from different bands and different texture algorithms. All three classifiers showed high accuracy when using the different texture approaches, but SVM was generally more accurate (90.10%) than MLC (89.10%) and ANN (89.67%), especially for complex classes such as urban and road.
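The moving-window texture processing described above can be sketched for the two first-order measures. This covers only mean and variance; GLCM-based measures such as contrast, homogeneity, entropy, and GLDV angular second moment require a co-occurrence matrix and are not shown:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def texture_features(band, win=3):
    """Per-pixel mean and variance texture over a win x win moving window.

    First-order measures only; the window slides over the single-band
    image, so the output shrinks by win-1 pixels in each dimension.
    """
    w = sliding_window_view(band, (win, win))   # shape (H-win+1, W-win+1, win, win)
    return w.mean(axis=(2, 3)), w.var(axis=(2, 3))
```

In a full workflow these texture bands (at several window sizes) would be stacked with the spectral bands and fed to the classifier.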
Fitzpatrick, Katherine A.
1975-01-01
Accuracy analyses for the land use maps of the Central Atlantic Regional Ecological Test Site were performed for a 1-percent sample of the area. Researchers compared Level II land use maps produced at three scales, 1:24,000, 1:100,000, and 1:250,000, from high-altitude photography, with each other and with point data obtained in the field. They employed the same procedures to determine the accuracy of the Level I land use maps produced at 1:250,000 from high-altitude photography and color composite ERTS imagery. The accuracy of the Level II maps was 84.9 percent at 1:24,000, 77.4 percent at 1:100,000, and 73.0 percent at 1:250,000. The accuracy of the Level I 1:250,000 maps produced from high-altitude aircraft photography was 76.5 percent, and for those produced from ERTS imagery it was 69.5 percent. The cost of Level II land use mapping at 1:24,000 was found to be high ($11.93 per km²). Mapping at 1:100,000 ($1.75) was about 2 times as expensive as mapping at 1:250,000 ($.88), while accuracy increased by only 4.4 percent. Level I land use maps, when produced from high-altitude photography, were about 4 times as expensive as maps produced from ERTS imagery, although their accuracy was 7.0 percent greater. The Level I land use category least accurately mapped from ERTS imagery is urban and built-up land in the non-urban areas; in the urbanized areas, built-up land is more reliably mapped.
Bienek, Diane R; Charlton, David G
2012-05-01
Being able to test for the presence of blood pathogens at forward locations could reduce morbidity and mortality in the field. Rapid, user-friendly test kits for detecting Human Immunodeficiency Virus (HIV), Hepatitis C Virus (HCV), and Hepatitis B Virus (HBV) were evaluated to determine their accuracy after storage at various temperatures and humidities. Rates of positive tests in control groups, experimental groups, and industry standards were compared (Fisher's exact test, p ≤ 0.05). Compared with the control group, 2 of 10 HIV detection devices were adversely affected by exposure to high temperature/high humidity or high temperature/low humidity. With one exception, none of the environmentally exposed HCV or HBV detection devices exhibited significant differences compared with those stored under control conditions. For HIV, HCV, and HBV devices, there were differences compared with the industry standard. Collectively, this evaluation of pathogen detection kits revealed that diagnostic performance varies among products and storage conditions, and that the tested products cannot be considered approved for use to screen blood, plasma, cell, or tissue donors.
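The comparison of positive-test rates above uses Fisher's exact test on 2x2 tables. As a self-contained sketch of what that test computes (in practice a statistics library would be used), the p-value can be obtained by summing hypergeometric point probabilities:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose point probability does not exceed that of the observed
    table (the usual two-sided convention).
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # hypergeometric point probability that cell (1,1) equals x
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    return sum(p for x in range(lo, hi + 1) if (p := p_table(x)) <= p_obs + 1e-12)
```

A result at or below 0.05 would flag a storage condition as significantly changing a kit's positive-test rate relative to control.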
NASA Technical Reports Server (NTRS)
Mahoney, M. J.; Ismail, S.; Browell, E. V.; Ferrare, R. A.; Kooi, S. A.; Brasseur, L.; Notari, A.; Petway, L.; Brackett, V.; Clayton, M.;
2002-01-01
LASE measures high resolution moisture, aerosol, and cloud distributions not available from conventional observations. LASE water vapor measurements were compared with dropsondes to evaluate their accuracy. LASE water vapor measurements were also used with Florida State University hurricane models to assess the potential for improving track accuracy by 100 km in 3-day forecasts.
Random forests for classification in ecology
Cutler, D.R.; Edwards, T.C.; Beard, K.H.; Cutler, A.; Hess, K.T.; Gibson, J.; Lawler, J.J.
2007-01-01
Classification procedures are some of the most widely used statistical methods in ecology. Random forests (RF) is a new and powerful statistical classifier that is well established in other disciplines but is relatively unknown in ecology. Advantages of RF compared to other statistical classifiers include (1) very high classification accuracy; (2) a novel method of determining variable importance; (3) ability to model complex interactions among predictor variables; (4) flexibility to perform several types of statistical data analysis, including regression, classification, survival analysis, and unsupervised learning; and (5) an algorithm for imputing missing values. We compared the accuracies of RF and four other commonly used statistical classifiers using data on invasive plant species presence in Lava Beds National Monument, California, USA, rare lichen species presence in the Pacific Northwest, USA, and nest sites for cavity nesting birds in the Uinta Mountains, Utah, USA. We observed high classification accuracy in all applications as measured by cross-validation and, in the case of the lichen data, by independent test data, when comparing RF to other common classification methods. We also observed that the variables that RF identified as most important for classifying invasive plant species coincided with expectations based on the literature. © 2007 by the Ecological Society of America.
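In practice random forests are used through an established library, but the bagging-and-voting core can be shown self-contained. The toy ensemble below fits depth-1 trees (decision stumps) on bootstrap resamples and takes a majority vote; it omits RF's random feature subsets, deep trees, and variable-importance machinery, so it is a sketch of the principle only:

```python
import numpy as np

def fit_stump(X, y):
    """Best axis-aligned depth-1 split: (feature, threshold, left label, right label)."""
    best, best_acc = (0, X[0, 0], 0, 1), -1.0
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for left, right in ((0, 1), (1, 0)):
                acc = (np.where(X[:, j] <= t, left, right) == y).mean()
                if acc > best_acc:
                    best_acc, best = acc, (j, t, left, right)
    return best

def fit_forest(X, y, n_trees=25, seed=0):
    """Bagging: fit each stump on a bootstrap resample of the training data."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))   # sample with replacement
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def predict_forest(stumps, X):
    """Majority vote over the ensemble (binary 0/1 labels assumed)."""
    votes = np.array([np.where(X[:, j] <= t, l, r) for j, t, l, r in stumps])
    return (votes.mean(axis=0) > 0.5).astype(int)
```

Averaging many high-variance trees fit on resampled data is what gives the ensemble its accuracy advantage over any single classifier in the comparison above.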
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is effective in both accuracy and time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. Best-accuracy reconstruction of an SR image by a factor of two requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images, and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy is compared using the mean squared error between the original and reconstructed high resolution images. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
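The factor-of-two case described above has a particularly simple limit. If the four LR images happen to sit at exact half-pixel shifts, projection onto the HR grid reduces to interleaving; this hedged sketch shows only that ideal case, whereas real data needs the registration and interpolation steps the paper describes:

```python
import numpy as np

def sr_from_quarter_shifts(lr00, lr01, lr10, lr11):
    """Reconstruct a 2x SR image from four LR images whose subpixel shifts
    are exactly (0, 0), (0, 0.5), (0.5, 0), and (0.5, 0.5) HR pixels.

    With these ideal shifts every LR sample lands on a distinct HR grid
    point, so projection degenerates to interleaving the four images.
    """
    h, w = lr00.shape
    hr = np.empty((2 * h, 2 * w), dtype=lr00.dtype)
    hr[0::2, 0::2] = lr00
    hr[0::2, 1::2] = lr01
    hr[1::2, 0::2] = lr10
    hr[1::2, 1::2] = lr11
    return hr
```

For shifts that do not fall exactly on HR grid points, each LR sample would instead be spread onto nearby HR pixels by nearest-neighbor, inverse-distance, or RBF interpolation, as in the paper.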
Fuzzy PID control algorithm based on PSO and application in BLDC motor
NASA Astrophysics Data System (ADS)
Lin, Sen; Wang, Guanglong
2017-06-01
A fuzzy PID control algorithm based on improved particle swarm optimization (PSO) is studied for Brushless DC (BLDC) motor control, offering high accuracy, good anti-jamming capability, and better steady-state accuracy compared with traditional PID control. A mathematical and simulation model of the BLDC motor is established in Simulink, and the speed-loop fuzzy PID controller is designed. The simulation results show that the PSO-based fuzzy PID control algorithm has higher stability, high control precision, and faster dynamic response than conventional PID control.
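The fixed-gain baseline that the fuzzy PSO controller is compared against can be sketched directly. This is a plain discrete PID on a generic first-order plant (an assumption for illustration); the paper's controller additionally adapts the gains online with fuzzy rules tuned by PSO, which is not reproduced here:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=3000):
    """Discrete fixed-gain PID driving the first-order plant x' = -x + u.

    A plain PID baseline for comparison; the integral term removes the
    steady-state error, so x should settle at the setpoint.
    """
    x, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv   # PID control law
        x += dt * (-x + u)                       # explicit Euler step of the plant
    return x
```

A fuzzy PID layer would recompute (kp, ki, kd) each step from the current error and its rate of change, with PSO searching for the rule parameters offline.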
Gholkar, Nikhil Shirish; Saha, Subhas Chandra; Prasad, GRV; Bhattacharya, Anish; Srinivasan, Radhika; Suri, Vanita
2014-01-01
Lymph nodal (LN) metastasis is the most important prognostic factor in high-risk endometrial cancer. However, the benefit of routine lymphadenectomy in endometrial cancer is controversial. This study was conducted to assess the accuracy of [18F] fluorodeoxyglucose-positron emission tomography/computed tomography ([18F] FDG-PET/CT) in the detection of pelvic and para-aortic nodal metastases in high-risk endometrial cancer. Twenty patients with high-risk endometrial carcinoma underwent [18F] FDG-PET/CT followed by total abdominal hysterectomy, bilateral salpingo-oophorectomy and systematic pelvic lymphadenectomy with or without para-aortic lymphadenectomy. The histopathology findings were compared with the [18F] FDG-PET/CT findings to calculate the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of [18F] FDG-PET/CT. The pelvic nodal findings were analyzed on patient-based and nodal-chain-based criteria; the para-aortic nodal findings were reported separately. Histopathology documented nodal involvement in two patients (10%). For detection of pelvic nodes, on a patient-based analysis, [18F] FDG-PET/CT had a sensitivity of 100%, specificity of 61.11%, PPV of 22.22%, NPV of 100% and accuracy of 65%; on a nodal-chain-based analysis, it had a sensitivity of 100%, specificity of 80%, PPV of 20%, NPV of 100%, and accuracy of 80.95%. For detection of para-aortic nodes, [18F] FDG-PET/CT had a sensitivity of 100%, specificity of 66.67%, PPV of 20%, NPV of 100%, and accuracy of 69.23%. Although [18F] FDG-PET/CT has high sensitivity for the detection of LN metastasis in endometrial carcinoma, it had moderate accuracy and high false positivity. However, the high NPV is important in selecting patients in whom lymphadenectomy may be omitted. PMID:25538488
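All of the reported measures derive from four confusion-matrix counts. As a hedged illustration, the helper below reproduces the patient-based pelvic figures from counts inferred from the percentages above (2 node-positive patients, so TP=2 and FN=0; specificity 61.11% of 18 node-negative patients implies TN=11 and FP=7):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from a confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Counts inferred from the patient-based pelvic analysis reported above.
m = diagnostic_metrics(tp=2, fp=7, tn=11, fn=0)
```

Note how a perfect NPV coexists with a low PPV when prevalence is only 10%: with few true positives, even a moderate false-positive count dominates the positive calls.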
Assessing the dependence of sensitivity and specificity on prevalence in meta-analysis
Li, Jialiang; Fine, Jason P.
2011-01-01
We consider modeling the dependence of sensitivity and specificity on disease prevalence in diagnostic accuracy studies. Many meta-analyses compare test accuracy across studies but fail to incorporate the possible connection between the accuracy measures and prevalence. We propose a Pearson-type correlation coefficient and an estimating equation-based regression framework to help understand this practical dependence. The results we derive may then be used to better interpret the results of meta-analyses. In the biomedical examples analyzed in this paper, the diagnostic accuracy of biomarkers is shown to be associated with prevalence, providing insights into the utility of these biomarkers in low- and high-prevalence populations. PMID:21525421
Nedelcu, Robert; Olsson, Pontus; Nyström, Ingela; Thor, Andreas
2018-02-23
Several studies have evaluated the accuracy of intraoral scanners (IOS), but data is lacking on variations between IOS systems in the depiction of the critical finish line and on finish line accuracy. The aim of this study was to analyze the level of finish line distinctness (FLD) and finish line accuracy (FLA) in 7 intraoral scanners (IOS) and one conventional impression (IMPR), and to assess parameters of resolution, tessellation, topography, and color. A dental model with a crown preparation including supra- and subgingival finish lines was reference-scanned with an industrial scanner (ATOS) and scanned with seven IOS: 3M, CS3500 and CS3600, DWIO, Omnicam, Planscan and Trios. An IMPR was taken and poured, and the resulting model was scanned with a laboratory scanner. The ATOS scan was cropped at the finish line and best-fit aligned for 3D Compare Analysis (Geomagic). Accuracy was visualized, and descriptive analysis was performed. All IOS, except Planscan, had comparable overall accuracy; however, FLD and FLA varied substantially. Trios presented the highest FLD and, with CS3600, the highest FLA. 3M and DWIO had low overall FLD and low FLA in subgingival areas, whilst Planscan had overall low FLD and FLA, as well as lower general accuracy. IMPR presented high FLD, except in subgingival areas, and high FLA. Trios had the highest resolution, by a factor of 1.6 to 3.1 among IOS, followed by IMPR, DWIO, Omnicam, CS3500, 3M, CS3600 and Planscan. Tessellation was found to be non-uniform except in 3M and DWIO. Topographic variation was found for 3M and Trios, with deviations below ±25 μm for Trios. Inclusion of color enhanced identification of the finish line in Trios, Omnicam and CS3600, but not in Planscan. There were sizeable variations between IOS, with both higher and lower FLD and FLA than IMPR. High FLD was related more to high localized finish line resolution and non-uniform tessellation than to high overall resolution. Topography variations were low.
Color improved finish line identification in some IOS. It is imperative that clinicians critically evaluate the digital impression, being aware of varying technical limitations among IOS, in particular when challenging subgingival conditions apply.
Improving the accuracy of k-nearest neighbor using local mean based and distance weight
NASA Astrophysics Data System (ADS)
Syaliman, K. U.; Nababan, E. B.; Sitompul, O. S.
2018-03-01
In k-nearest neighbor (kNN), the class of a new data point is normally determined by a simple majority vote, which may ignore the similarities among data and allows ties between majority classes that can lead to misclassification. In this research, we propose an approach that resolves the majority vote issues by calculating distance weights using a combination of local mean based k-nearest neighbor (LMKNN) and distance weight k-nearest neighbor (DWKNN). The accuracy of the results is compared with that of the original kNN method using several datasets from the UCI Machine Learning Repository, Kaggle, and Keel, such as ionosphere, iris, voice gender, lower back pain, and thyroid. In addition, the proposed method is also tested using real data from a public senior high school in the city of Tualang, Indonesia. Results show that the combination of LMKNN and DWKNN increases the classification accuracy of kNN, with an average accuracy gain on test data of 2.45% and the highest gain of 3.71% occurring on the lower back pain symptoms dataset. For the real data, the increase in accuracy is as high as 5.16%.
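The combined idea can be sketched as follows. This is a simplified illustration of the LMKNN-plus-distance-weight principle, not the paper's exact weighting scheme: per class, take the k nearest training points, form their inverse-distance weighted mean, and assign the query to the class whose local mean is closest, which removes both the tie problem and the equal-vote problem of plain kNN:

```python
import numpy as np

def lm_dw_knn_predict(X_train, y_train, x, k=3):
    """Assign x to the class whose distance-weighted local mean is nearest.

    Sketch only: each class contributes the inverse-distance weighted
    mean of its k nearest training points; the closest such local mean
    wins, replacing the plain majority vote.
    """
    best_cls, best_dist = None, np.inf
    for cls in np.unique(y_train):
        pts = X_train[y_train == cls]
        d = np.linalg.norm(pts - x, axis=1)
        order = np.argsort(d)[:k]
        w = 1.0 / (d[order] + 1e-12)             # inverse-distance weights
        local_mean = (w[:, None] * pts[order]).sum(axis=0) / w.sum()
        dist = np.linalg.norm(local_mean - x)
        if dist < best_dist:
            best_cls, best_dist = cls, dist
    return best_cls
```

Because every class produces exactly one candidate (its local mean), ties between majority classes cannot occur by construction.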
Liu, Guanyu; McNally, Richard J
2017-03-01
Consolidated memories become labile upon reactivation and as a result have to go through reconsolidation to become re-stabilized. This property of memory may potentially be used to reduce the impact of highly negative episodic memories. Because detailed and vivid negative memories are mediated by high arousal, if arousal is lessened during reconsolidation, memory accuracy and vividness should diminish. In this study, we examined this hypothesis. Participants (N = 72) viewed a stressful, suspenseful video on Day 1 to develop negative episodic memories. Then, 24-29 h later, they saw a brief reminder of the stressful video (or not), and then viewed a neutral, calming (or positive) video. Another 24-29 h later, participants were tested on the accuracy, vividness, and anxiety associated with their memory of the stressful video on Day 1. Participants who watched the reminder and then the neutral video showed reduced memory accuracy compared to participants in the other groups. Despite the reduction in memory accuracy, their memory vividness and anxiety associated with the stressful video did not decrease. The use of undergraduates prevents generalizations to clinical populations. Also, the study did not test long-term memories that were more than 2 days old. Neutral mood induction during reconsolidation reduces the accuracy of highly negative episodic memories. Copyright © 2016 Elsevier Ltd. All rights reserved.
Matías-Guiu, Jordi A; Valles-Salgado, María; Rognoni, Teresa; Hamre-Gil, Frank; Moreno-Ramos, Teresa; Matías-Guiu, Jorge
2017-01-01
Our aim was to evaluate and compare the diagnostic properties of 5 screening tests for the diagnosis of mild Alzheimer disease (AD). We conducted a prospective, cross-sectional study of 92 patients with mild AD and 68 healthy controls from our Department of Neurology. The diagnostic properties of the following tests were compared: Mini-Mental State Examination (MMSE), Addenbrooke's Cognitive Examination III (ACE-III), Memory Impairment Screen (MIS), Montreal Cognitive Assessment (MoCA), and Rowland Universal Dementia Assessment Scale (RUDAS). All tests yielded high diagnostic accuracy, with the ACE-III achieving the best diagnostic properties. The area under the curve was 0.897 for the ACE-III, 0.889 for the RUDAS, 0.874 for the MMSE, 0.866 for the MIS, and 0.856 for the MoCA. The Mini-ACE score from the ACE-III showed the highest diagnostic capacity (area under the curve 0.939). The memory scores of the ACE-III and the RUDAS showed better diagnostic accuracy than those of the MMSE and the MoCA. All tests, especially the ACE-III, showed higher diagnostic accuracy in patients with full primary education than in the less educated group. Implementing normative data improved the diagnostic accuracy of the ACE-III but not that of the other tests. The ACE-III achieved the highest diagnostic accuracy, and this better discrimination was more evident in the more educated group. © 2017 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Liu, J.
2017-12-01
Accurate estimation of evapotranspiration (ET) is crucial for studies of land-atmosphere interactions. A series of ET products relying on various simulation methods have been developed recently; however, uncertainties in product accuracy limit their application. In this study, the accuracies of 8 popular global ET products, simulated from satellite retrievals (ETMODIS and ETZhang), reanalysis (ETJRA55), a machine learning method (ETJung), and land surface models (ETCLM, ETMOS, ETNoah and ETVIC) forced by the Global Land Data Assimilation System (GLDAS), were comprehensively evaluated against observations from eddy covariance FLUXNET sites by year, land cover, and climate zone. The results show that all simulated ET products tend to underestimate in the lower ET ranges and overestimate in the higher ET ranges compared with observations. Judged by four statistical criteria, the root mean square error (RMSE), mean bias error (MBE), R2, and Taylor skill score (TSS), ETJung performed well across years, land covers, and climate zones. Satellite-based ET products also performed impressively: ETMODIS and ETZhang present comparable accuracy, though each is more skillful for different land covers and climate zones. Generally, the ET products from GLDAS show reasonable accuracy, although ETCLM has relatively higher RMSE and MBE in the yearly, land cover, and climate zone comparisons. Although ETJRA55 shows R2 comparable with the other products, its performance is constrained by high RMSE and MBE. Knowledge from this study is crucial for the improvement and selection of ET products.
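Three of the four evaluation criteria named above are straightforward to compute. The sketch below shows RMSE, MBE, and R2 in the coefficient-of-determination form (an assumption; squared correlation is another common choice); the Taylor skill score also needs the ratio of standard deviations and is omitted:

```python
import numpy as np

def evaluation_stats(sim, obs):
    """RMSE, mean bias error, and R^2 between simulated and observed ET.

    R^2 here is the coefficient of determination 1 - SS_res / SS_tot;
    positive MBE means the product overestimates on average.
    """
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    mbe = np.mean(sim - obs)
    ss_res = np.sum((obs - sim) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return rmse, mbe, 1.0 - ss_res / ss_tot
```

Computing these per year, per land cover class, and per climate zone yields the stratified comparison the study reports.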
A Novel Energy-Efficient Approach for Human Activity Recognition
Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Tang, Biyu; Lu, Hai; Shi, Haibin
2017-01-01
In this paper, we propose a novel energy-efficient approach for mobile activity recognition systems (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is lower than the activity frequency, i.e., when the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and 6 females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz. The proposed low sampling rate approach can greatly reduce power consumption while maintaining high activity recognition accuracy. The composition of power consumption in online ARS is also investigated in this paper. PMID:28885560
Atropos: specific, sensitive, and speedy trimming of sequencing reads.
Didion, John P; Martin, Marcel; Collins, Francis S
2017-01-01
A key step in the transformation of raw sequencing reads into biological insights is the trimming of adapter sequences and low-quality bases. Read trimming has been shown to increase the quality and reliability while decreasing the computational requirements of downstream analyses. Many read trimming software tools are available; however, no tool simultaneously provides the accuracy, computational efficiency, and feature set required to handle the types and volumes of data generated in modern sequencing-based experiments. Here we introduce Atropos and show that it trims reads with high sensitivity and specificity while maintaining leading-edge speed. Compared to other state-of-the-art read trimming tools, Atropos achieves significant increases in trimming accuracy while remaining competitive in execution times. Furthermore, Atropos maintains high accuracy even when trimming data with elevated rates of sequencing errors. The accuracy, high performance, and broad feature set offered by Atropos makes it an appropriate choice for the pre-processing of Illumina, ABI SOLiD, and other current-generation short-read sequencing datasets. Atropos is open source and free software written in Python (3.3+) and available at https://github.com/jdidion/atropos.
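The two core operations described above, adapter removal and quality trimming, can be shown in a naive form. This is a hedged sketch only: it uses exact prefix matching and a per-base quality cutoff, whereas real trimmers such as Atropos use error-tolerant alignment and more sophisticated quality algorithms:

```python
def trim_read(seq, qual, adapter, min_overlap=3, qual_cutoff=20):
    """Trim a 3' adapter (exact-match sketch) and a low-quality tail.

    qual is a list of per-base quality scores. Real trimmers tolerate
    mismatches in the adapter; this sketch requires an exact match.
    """
    # adapter removal: find the leftmost position where the read suffix
    # exactly matches a prefix of the adapter (or contains the full adapter)
    for i in range(len(seq) - min_overlap + 1):
        tail = seq[i:]
        if adapter.startswith(tail) or tail.startswith(adapter):
            seq, qual = seq[:i], qual[:i]
            break
    # quality trimming: drop trailing bases below the cutoff
    while qual and qual[-1] < qual_cutoff:
        seq, qual = seq[:-1], qual[:-1]
    return seq, qual
```

The `min_overlap` guard avoids trimming on spurious 1- or 2-base matches at the read end, a safeguard real tools also apply.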
Edwards, Jan; Beckman, Mary E; Munson, Benjamin
2004-04-01
Adults' performance on a variety of tasks suggests that phonological processing of nonwords is grounded in generalizations about sublexical patterns over all known words. A small body of research suggests that children's phonological acquisition is similarly based on generalizations over the lexicon. To test this account, production accuracy and fluency were examined in nonword repetitions by 104 children and 22 adults. Stimuli were 22 pairs of nonwords, in which one nonword contained a low-frequency or unattested two-phoneme sequence and the other contained a high-frequency sequence. For a subset of these nonword pairs, segment durations were measured. The same sound was produced with a longer duration (less fluently) when it appeared in a low-frequency sequence, as compared to a high-frequency sequence. Low-frequency sequences were also repeated with lower accuracy than high-frequency sequences. Moreover, children with smaller vocabularies showed a larger influence of frequency on accuracy than children with larger vocabularies. Taken together, these results provide support for a model of phonological acquisition in which knowledge of sublexical units emerges from generalizations made over lexical items.
An oscillation-free flow solver based on flux reconstruction
NASA Astrophysics Data System (ADS)
Aguerre, Horacio J.; Pairetti, Cesar I.; Venier, Cesar M.; Márquez Damián, Santiago; Nigro, Norberto M.
2018-07-01
In this paper, a segregated algorithm is proposed to suppress high-frequency oscillations in the velocity field for incompressible flows. In this context, a new velocity formula based on a reconstruction of face fluxes is defined, eliminating high-frequency errors. In analogy to the Rhie-Chow interpolation, this approach is equivalent to including a flux-based pressure gradient with a velocity diffusion in the momentum equation. In order to guarantee second-order accuracy of the numerical solver, a set of conditions is defined for the reconstruction operator. To arrive at the final formulation, the state of the art in velocity reconstruction procedures is reviewed and the procedures are compared through an error analysis. A new operator is then obtained by means of a flux-difference minimization satisfying the required spatial accuracy. The accuracy of the new algorithm is analyzed by performing mesh convergence studies for unsteady Navier-Stokes problems with analytical solutions. The stabilization properties of the solver are then tested on a problem where spurious numerical oscillations arise in the velocity field. The results show a remarkable performance of the proposed technique, eliminating high-frequency errors without losing accuracy.
Good Practices for Learning to Recognize Actions Using FV and VLAD.
Wu, Jianxin; Zhang, Yu; Lin, Weiyao
2016-12-01
High dimensional representations such as Fisher vectors (FV) and vectors of locally aggregated descriptors (VLAD) have shown state-of-the-art accuracy for action recognition in videos. The high dimensionality, on the other hand, also causes computational difficulties when scaling up to large-scale video data. This paper makes three lines of contributions to learning to recognize actions using high dimensional representations. First, we reviewed several existing techniques that improve upon FV or VLAD in image classification, and performed extensive empirical evaluations to assess their applicability for action recognition. Our analyses of these empirical results show that normality and bimodality are essential to achieve high accuracy. Second, we proposed a new pooling strategy for VLAD and three simple, efficient, and effective transformations for both FV and VLAD. Both proposed methods have shown higher accuracy than the original FV/VLAD method in extensive evaluations. Third, we proposed and evaluated new feature selection and compression methods for the FV and VLAD representations. This strategy uses only 4% of the storage of the original representation, but achieves comparable or even higher accuracy. Based on these contributions, we recommend a set of good practices for action recognition in videos for practitioners in this field.
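A minimal version of the VLAD representation discussed above can be sketched as follows. It shows residual accumulation plus two of the standard post-processing steps (signed power normalization and L2 normalization); the paper's proposed pooling strategy, transformations, and compression are not reproduced:

```python
import numpy as np

def vlad_encode(descriptors, codebook, power=0.5):
    """VLAD: accumulate residuals to nearest codewords, then normalize.

    descriptors: (n, d) local features; codebook: (k, d) cluster centers.
    Returns a flattened (k*d,) vector with signed power and L2
    normalization applied.
    """
    k, d = codebook.shape
    # hard-assign each descriptor to its nearest codeword
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    v = np.zeros((k, d))
    for i, a in enumerate(assign):
        v[a] += descriptors[i] - codebook[a]      # accumulate residuals
    v = np.sign(v) * np.abs(v) ** power           # signed power normalization
    flat = v.ravel()
    n = np.linalg.norm(flat)
    return flat / n if n > 0 else flat
```

The k*d dimensionality (often hundreds of thousands with dense video descriptors) is exactly what motivates the feature selection and compression methods the paper proposes.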
NASA Technical Reports Server (NTRS)
Carpenter, M. H.
1988-01-01
The generalized chemistry version of the computer code SPARK is extended to include two higher-order numerical schemes, yielding fourth-order spatial accuracy for the inviscid terms. The new and old formulations are used to study the influences of finite rate chemical processes on nozzle performance. A determination is made of the computationally optimum reaction scheme for use in high-enthalpy nozzles. Finite rate calculations are compared with the frozen and equilibrium limits to assess the validity of each formulation. In addition, the finite rate SPARK results are compared with the constant ratio of specific heats (gamma) SEAGULL code, to determine its accuracy in variable gamma flow situations. Finally, the higher-order SPARK code is used to calculate nozzle flows having species stratification. Flame quenching occurs at low nozzle pressures, while for high pressures, significant burning continues in the nozzle.
NASA Astrophysics Data System (ADS)
Hall-Brown, Mary
The heterogeneity of Arctic vegetation can make land cover classification very difficult when using medium to small resolution imagery (Schneider et al., 2009; Muller et al., 1999). Using imagery of high radiometric and spatial resolution, such as from the SPOT 5 and IKONOS satellites, has helped Arctic land cover classification accuracies rise into the 80 and 90 percentiles (Allard, 2003; Stine et al., 2010; Muller et al., 1999). However, those increases usually come at a high price: high resolution imagery is very expensive and can add tens of thousands of dollars to the cost of the research. The EO-1 satellite, launched in 2000, carries two sensors that have high spectral and/or high spatial resolution and can be an acceptable compromise in the resolution-versus-cost trade-off. The Hyperion is a hyperspectral sensor capable of collecting 242 spectral bands of information. The Advanced Land Imager (ALI) is an advanced multispectral sensor whose spatial resolution can be sharpened to 10 meters. This dissertation compares the accuracies of Arctic land cover classifications produced by the Hyperion and ALI sensors with the classification accuracies produced by the Système Pour l'Observation de la Terre (SPOT), Landsat Thematic Mapper (TM) and Landsat Enhanced Thematic Mapper Plus (ETM+) sensors. Hyperion and ALI images from August 2004 were collected over the Upper Kuparuk River Basin, Alaska. Image processing included stepwise discriminant analysis of pixels that were positively classified from coinciding ground control points, geometric and radiometric correction, and principal component analysis. Finally, stratified random sampling was used to perform accuracy assessments on the satellite-derived land cover classifications. Accuracy was estimated from an error matrix (confusion matrix) that provided the overall, producer's and user's accuracies.
This research found that while the Hyperion sensor produced classification accuracies equivalent to those of the TM and ETM+ sensors (approximately 78%), the Hyperion could not match the accuracy of the SPOT 5 HRV sensor. However, the land cover classifications derived from the ALI sensor exceeded most classification accuracies derived from the TM and ETM+ sensors and were even comparable to most SPOT 5 HRV classifications (87%). With the deactivation of the Landsat series satellites, uninterrupted monitoring of remote locations throughout the world, such as the Arctic, is in jeopardy. Utilization of the Hyperion and ALI sensors is a way to keep that endeavor operational. By keeping the ALI sensor active at all times, uninterrupted observation of the entire Earth can be accomplished. Keeping the Hyperion as a "tasked" sensor can provide scientists with additional imagery and options for their studies without overburdening storage.
Tan, Xiao Wei; Zheng, Qishi; Shi, Luming; Gao, Fei; Allen, John Carson; Coenen, Adriaan; Baumann, Stefan; Schoepf, U Joseph; Kassab, Ghassan S; Lim, Soo Teik; Wong, Aaron Sung Lung; Tan, Jack Wei Chieh; Yeo, Khung Keong; Chin, Chee Tang; Ho, Kay Woon; Tan, Swee Yaw; Chua, Terrance Siang Jin; Chan, Edwin Shih Yen; Tan, Ru San; Zhong, Liang
2017-06-01
To evaluate the combined diagnostic accuracy of coronary computed tomography angiography (CCTA) and computed tomography-derived fractional flow reserve (FFRct) in patients with suspected or known coronary artery disease (CAD). PubMed, The Cochrane Library, Embase, and OpenGray were searched to identify studies comparing the diagnostic accuracy of CCTA and FFRct. Diagnostic test measurements of FFRct were either extracted directly from the published papers or calculated from the provided information. Bivariate models were used to synthesize the diagnostic performance of combined CCTA and FFRct at both "per-vessel" and "per-patient" levels. Seven articles were included for analysis. The combined diagnostic outcomes from the "both positive" strategy, i.e., a subject was considered "positive" only when both CCTA and FFRct were "positive", demonstrated relatively high specificity (per-vessel: 0.91; per-patient: 0.81), high positive likelihood ratio (LR+, per-vessel: 7.93; per-patient: 4.26), high negative likelihood ratio (LR-, per-vessel: 0.30; per-patient: 0.24), and high accuracy (per-vessel: 0.91; per-patient: 0.81), while the "either positive" strategy, i.e., a subject was considered "positive" when either CCTA or FFRct was "positive", demonstrated relatively high sensitivity (per-vessel: 0.97; per-patient: 0.98), low LR+ (per-vessel: 1.50; per-patient: 1.17), low LR- (per-vessel: 0.07; per-patient: 0.09), and low accuracy (per-vessel: 0.57; per-patient: 0.54). The "both positive" strategy showed better diagnostic performance for ruling out patients with non-significant stenosis than the "either positive" strategy, as it efficiently reduces the proportion of false-positive subjects. Copyright © 2017 Elsevier B.V. All rights reserved.
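The trade-off between the two combination strategies can be sketched under a conditional-independence assumption (a simplification the bivariate meta-analysis itself does not make), using hypothetical stand-alone accuracies for the two tests:

```python
def combine(se1, sp1, se2, sp2):
    """Combine two diagnostic tests with given sensitivity/specificity,
    assuming conditional independence (an illustrative simplification)."""
    # "Both positive": call positive only if both tests are positive (serial)
    se_both = se1 * se2
    sp_both = 1 - (1 - sp1) * (1 - sp2)
    # "Either positive": call positive if at least one test is positive (parallel)
    se_either = 1 - (1 - se1) * (1 - se2)
    sp_either = sp1 * sp2
    return (se_both, sp_both), (se_either, sp_either)

def likelihood_ratios(se, sp):
    """LR+ and LR- for a test with sensitivity se and specificity sp."""
    return se / (1 - sp), (1 - se) / sp

# Hypothetical stand-alone values for CCTA and FFRct (not from the paper)
both, either = combine(0.90, 0.70, 0.85, 0.80)
print("both positive:  se=%.3f sp=%.3f" % both)
print("either positive: se=%.3f sp=%.3f" % either)
```

The "both positive" rule trades sensitivity for specificity (fewer false positives), while "either positive" does the reverse, matching the qualitative pattern the meta-analysis reports.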
Information filtering via biased heat conduction.
Liu, Jian-Guo; Zhou, Tao; Guo, Qiang
2011-09-01
The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010)], where it yields high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which can simultaneously enhance accuracy and diversity. Extensive experimental analyses demonstrate that accuracy on the MovieLens, Netflix, and Delicious datasets is improved by 43.5%, 55.4%, and 19.2%, respectively, compared with the standard heat-conduction algorithm, while diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm can simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat-conduction algorithm. This work provides a credible approach to highly efficient information filtering.
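The paper's exact biasing kernel is not reproduced here; the sketch below runs plain heat conduction on a toy user-object matrix and adds a degree exponent `lam` as an illustrative stand-in for the temperature biasing (the matrix, `lam`, and the biasing form are all assumptions for demonstration):

```python
import numpy as np

def biased_heat_conduction(A, user, lam=0.8):
    """Score uncollected objects for `user` on a bipartite matrix A (users x objects).
    lam < 1 damps the usual small-degree advantage of plain heat conduction
    (an illustrative biasing form; the paper's kernel may differ)."""
    k_user = A.sum(axis=1)            # user degrees
    k_obj = A.sum(axis=0)             # object degrees
    f = A[user].astype(float)         # initial "temperature" on collected objects
    h = A @ f / k_user                # user temperature: mean over collected objects
    scores = (A.T @ h) / k_obj**lam   # object temperature, degree-biased
    scores[A[user] == 1] = -np.inf    # do not re-recommend collected objects
    return scores

# Toy data: 3 users x 4 objects (1 = user collected the object)
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
print(biased_heat_conduction(A, user=0))
```

Ranking uncollected objects by these scores gives the recommendation list; tuning `lam` shifts the balance between popular (accurate) and niche (diverse) objects.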
Variation of Static-PPP Positioning Accuracy Using GPS-Single Frequency Observations (Aswan, Egypt)
NASA Astrophysics Data System (ADS)
Farah, Ashraf
2017-06-01
Precise Point Positioning (PPP) is a technique used to compute positions with high accuracy using only one GNSS receiver. It depends on highly accurate satellite position and clock data rather than the broadcast ephemerides. PPP precision varies with the positioning technique (static or kinematic), the observation type (single or dual frequency), and the duration of the collected observations. PPP with dual-frequency receivers offers accuracy comparable to differential GPS. PPP with single-frequency receivers has many applications, such as infrastructure, hydrography, and precision agriculture. PPP using low-cost single-frequency GPS receivers is an area of great interest for millions of users in developing countries such as Egypt. This research presents a study of the variability of static single-frequency GPS-PPP precision with different observation durations.
Zheng, Dandan; Todor, Dorin A
2011-01-01
In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements, with simple and practical implementation, to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection) against gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In the water phantom, our method showed an average tip-detection accuracy of 0.7 mm, compared with 1.6 mm for the conventional method. In the gel phantom (more realistic and tissue-like), our method maintained its level of accuracy, whereas the uncertainty of the conventional method averaged 3.4 mm, with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
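The physical-measurement idea reduces to simple arithmetic: the inserted length (total minus residual) fixes the tip depth along the needle axis, and a pre-calibrated transformation maps that depth into image coordinates. A minimal sketch, in which the function name, the linear transformation form, and all numeric values are illustrative assumptions rather than the paper's calibration:

```python
def needle_tip_on_trus(total_len_mm, residual_len_mm,
                       template_xy, scale, offset_mm):
    """Estimate needle-tip position in TRUS image coordinates.

    total_len_mm    - known full needle length
    residual_len_mm - measured length left outside the template
    template_xy     - (x, y) of the needle's template hole in image coords
    scale, offset_mm - pre-established transformation factor (calibration)
    """
    inserted_mm = total_len_mm - residual_len_mm   # depth along needle axis
    z_image = scale * inserted_mm + offset_mm      # map depth into image frame
    return (*template_xy, z_image)

# Example: a 240 mm needle with 80 mm left outside -> 160 mm inserted
print(needle_tip_on_trus(240.0, 80.0, template_xy=(12.5, 30.0),
                         scale=1.0, offset_mm=-45.0))
```

Because the tip position comes from a ruler measurement rather than the ultrasound image, it is insensitive to the imaging artifacts that degrade the conventional approach.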
NASA Astrophysics Data System (ADS)
Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng
2018-04-01
In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process image data stored in RRAM arrays. The proposed image storage architecture offers better speed and device-consumption efficiency than the previous kernel storage architecture. We further improve the architecture for high-accuracy, low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels with a size of 3 × 3, compared with the previous kernel storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a more than 67-fold speed boost; and 3) 71.4% energy savings.
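The computation the RRAM arrays accelerate is an ordinary 2-D convolution; a plain-Python reference for the paper's 28 × 28 image and ten 3 × 3 kernels (a software sketch only, with none of the RRAM device behavior modeled):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (really cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output element is a 3x3 multiply-accumulate, which the
            # RRAM crossbar performs in the analog domain in one step
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.random((28, 28))
kernels = rng.random((10, 3, 3))   # 10 kernels of size 3x3, as in the paper
maps = np.stack([conv2d_valid(image, k) for k in kernels])
print(maps.shape)   # (10, 26, 26)
```

Storing the image (rather than the kernels) in the arrays lets all kernel positions reuse the same stored data, which is where the reported speed and energy gains come from.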
Ramsey, Elijah W.; Nelson, Gene A.; Sapkota, Sijan
1998-01-01
A progressive classification of a marsh and forest system using Landsat Thematic Mapper (TM), color infrared (CIR) photograph, and ERS-1 synthetic aperture radar (SAR) data improved classification accuracy when compared to classification using solely TM reflective band data. The classification resulted in a detailed identification of differences within a nearly monotypic black needlerush marsh. Accuracy percentages of these classes were surprisingly high given the complexities of classification. The detailed classification resulted in a more accurate portrayal of the marsh transgressive sequence than was obtainable with TM data alone. Individual sensor contribution to the improved classification was compared to that using only the six reflective TM bands. Individually, the green reflective CIR and SAR data identified broad categories of water, marsh, and forest. In combination with TM, SAR and the green CIR band each improved overall accuracy by about 3% and 15% respectively. The SAR data improved the TM classification accuracy mostly in the marsh classes. The green CIR data also improved the marsh classification accuracy and accuracies in some water classes. The final combination of all sensor data improved almost all class accuracies from 2% to 70% with an overall improvement of about 20% over TM data alone. Not only was the identification of vegetation types improved, but the spatial detail of the classification approached 10 m in some areas.
Accuracy and Measurement Error of the Medial Clear Space of the Ankle.
Metitiri, Ogheneochuko; Ghorbanhoseini, Mohammad; Zurakowski, David; Hochman, Mary G; Nazarian, Ara; Kwon, John Y
2017-04-01
Measurement of the medial clear space (MCS) is commonly used to assess deltoid ligament competency and mortise stability when managing ankle fractures. Because the true anatomic width being measured was unknown, previous studies have been unable to assess measurement accuracy. The purpose of this study was to determine MCS measurement error and accuracy and any influencing factors. Using 3 normal transtibial ankle cadaver specimens, the deltoid and syndesmotic ligaments were transected and the mortise widened and affixed at a width of 6 mm (specimen 1) and 4 mm (specimen 2). The mortise was left intact in specimen 3. Radiographs were obtained of each cadaver at varying degrees of rotation. Radiographs were randomized, and providers measured the MCS using a standardized technique. Both lack of accuracy and lack of precision in measurement of the MCS compared with the known anatomic value were present for all 3 specimens tested. There were no significant differences in mean delta with regard to level of training for specimens 1 and 2; however, for specimen 3, staff physicians showed greater measurement accuracy than trainees. Accuracy and precision of MCS measurements are poor. Provider experience did not appear to influence accuracy and precision of measurements for the displaced mortise. This high degree of measurement error and lack of precision should be considered when deciding treatment options based on MCS measurements.
Cole, Brandi; Twibill, Kristen; Lam, Patrick; Hackett, Lisa
2016-01-01
Background This cross-sectional analytic diagnostic accuracy study was designed to compare the accuracy of ultrasound performed by general sonographers in local radiology practices with ultrasound performed by an experienced musculoskeletal sonographer for the detection of rotator cuff tears. Methods In total, 238 patients undergoing arthroscopy who had previously had an ultrasound performed by both a general sonographer and a specialist musculoskeletal sonographer made up the study cohort. Accuracy of diagnosis was compared with the findings at arthroscopy. Results When analyzed as all tears versus no tears, musculoskeletal sonography had an accuracy of 97%, a sensitivity of 97% and a specificity of 95%, whereas general sonography had an accuracy of 91%, a sensitivity of 91% and a specificity of 86%. When the partial tears were split with those ≥ 50% thickness in the tear group and those < 50% thickness in the no-tear group, musculoskeletal sonography had an accuracy of 97%, a sensitivity of 97% and a specificity of 100% and general sonography had an accuracy of 85%, a sensitivity of 84% and a specificity of 87%. Conclusions Ultrasound in the hands of an experienced musculoskeletal sonographer is highly accurate for the diagnosis of rotator cuff tears. General sonography has improved subsequent to earlier studies but remains inferior to an ultrasound performed by a musculoskeletal sonographer. PMID:27660657
Bottenberg, Michelle M; Bryant, Ginelle A; Haack, Sally L; North, Andrew M
2013-06-12
To compare student accuracy in measuring normal and high blood pressures using a simulator arm. In this prospective, single-blind study involving third-year pharmacy students, simulator arms were programmed with prespecified normal and high blood pressures. Students measured the preset normal and high diastolic and systolic blood pressures using a crossover design. One hundred sixteen students completed both blood pressure measurements. There was a significant difference between the accuracy of high systolic blood pressure (HSBP) measurement and normal systolic blood pressure (NSBP) measurement (mean HSBP difference 8.4 ± 10.9 mmHg vs NSBP 3.6 ± 6.4 mmHg; p<0.001). However, there was no difference between the accuracy of high diastolic blood pressure (HDBP) measurement and normal diastolic blood pressure (NDBP) measurement (mean HDBP difference 6.8 ± 9.6 mmHg vs mean NDBP difference 4.6 ± 4.5 mmHg; p=0.089). Pharmacy students may need additional instruction and experience with taking high blood pressure measurements to ensure they are able to accurately assess this important vital sign.
Amen, Daniel G; Raji, Cyrus A; Willeumier, Kristen; Taylor, Derek; Tarzwell, Robert; Newberg, Andrew; Henderson, Theodore A
2015-01-01
Traumatic brain injury (TBI) and posttraumatic stress disorder (PTSD) are highly heterogeneous and often present with overlapping symptomology, providing challenges in reliable classification and treatment. Single photon emission computed tomography (SPECT) may be advantageous in the diagnostic separation of these disorders when comorbid or clinically indistinct. Subjects were selected from a multisite database, where rest and on-task SPECT scans were obtained on a large group of neuropsychiatric patients. Two groups were analyzed: Group 1 with TBI (n=104), PTSD (n=104) or both (n=73) closely matched for demographics and comorbidity, compared to each other and healthy controls (N=116); Group 2 with TBI (n=7,505), PTSD (n=1,077) or both (n=1,017) compared to n=11,147 without either. ROIs and visual readings (VRs) were analyzed using a binary logistic regression model with predicted probabilities inputted into a Receiver Operating Characteristic analysis to identify sensitivity, specificity, and accuracy. One-way ANOVA identified the most diagnostically significant regions of increased perfusion in PTSD compared to TBI. Analysis included a 10-fold cross validation of the protocol in the larger community sample (Group 2). For Group 1, baseline and on-task ROIs and VRs showed a high level of accuracy in differentiating PTSD, TBI and PTSD+TBI conditions. This carefully matched group separated with 100% sensitivity, specificity and accuracy for the ROI analysis and at 89% or above for VRs. Group 2 had lower sensitivity, specificity and accuracy, but still in a clinically relevant range. Compared to subjects with TBI, PTSD showed increases in the limbic regions, cingulum, basal ganglia, insula, thalamus, prefrontal cortex and temporal lobes. This study demonstrates the ability to separate PTSD and TBI from healthy controls, from each other, and detect their co-occurrence, even in highly comorbid samples, using SPECT. 
This modality may offer a clinical option for aiding diagnosis and treatment of these conditions.
Hybrid simplified spherical harmonics with diffusion equation for light propagation in tissues.
Chen, Xueli; Sun, Fangfang; Yang, Defu; Ren, Shenghan; Zhang, Qian; Liang, Jimin
2015-08-21
To address the limitations of the simplified spherical harmonics approximation (SPN) and the diffusion equation (DE) in describing light propagation in tissues, a hybrid simplified spherical harmonics with diffusion equation (HSDE) based diffuse light transport model is proposed. In the HSDE model, the living body is first segmented into several major organs, and the organs are then divided into high-scattering tissues and other tissues. The DE and SPN are employed to describe light propagation in these two kinds of tissues, respectively, and are coupled through an established boundary coupling condition. The HSDE model makes full use of the advantages of the SPN and DE while avoiding their disadvantages, providing a good balance between accuracy and computation time. Using the finite element method, the HSDE is solved for the light flux density map on the body surface. The accuracy and efficiency of the HSDE are validated with simulations based on both regular geometries and a digital mouse model. The results reveal comparable accuracy and much less computation time compared with the SPN model, as well as much better accuracy than the DE model.
Shepherd, T; Teras, M; Beichel, RR; Boellaard, R; Bruynooghe, M; Dicken, V; Gooding, MJ; Julyan, PJ; Lee, JA; Lefèvre, S; Mix, M; Naranjo, V; Wu, X; Zaidi, H; Zeng, Z; Minn, H
2017-01-01
The impact of positron emission tomography (PET) on radiation therapy is held back by poor methods of defining functional volumes of interest. Many new software tools are being proposed for contouring target volumes but the different approaches are not adequately compared and their accuracy is poorly evaluated due to the ill-definition of ground truth. This paper compares the largest cohort to date of established, emerging and proposed PET contouring methods, in terms of accuracy and variability. We emphasize spatial accuracy and present a new metric that addresses the lack of unique ground truth. Thirty methods are used at 13 different institutions to contour functional volumes of interest in clinical PET/CT and a custom-built PET phantom representing typical problems in image guided radiotherapy. Contouring methods are grouped according to algorithmic type, level of interactivity and how they exploit structural information in hybrid images. Experiments reveal benefits of high levels of user interaction, as well as simultaneous visualization of CT images and PET gradients to guide interactive procedures. Method-wise evaluation identifies the danger of over-automation and the value of prior knowledge built into an algorithm. PMID:22692898
Bedini, José Luis; Wallace, Jane F; Pardo, Scott; Petruschke, Thorsten
2015-10-07
Blood glucose monitoring is an essential component of diabetes management, and inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to the reference values than the other 2 BGMS. Insulin dosing errors were also lowest for the Contour Next USB. All BGMS fulfilled the ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range compared with the 2 other BGMS. The CEG and SEG analyses, as well as calculation of the hypothetical bolus insulin dosing error, suggest a high accuracy of the Contour Next USB. © 2015 Diabetes Technology Society.
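MARD is simply the mean of the per-sample absolute relative differences against the reference method; a minimal sketch with hypothetical glucose readings:

```python
def mard(measured, reference):
    """Mean absolute relative difference (%) versus a reference method."""
    diffs = [abs(m - r) / r for m, r in zip(measured, reference)]
    return 100.0 * sum(diffs) / len(diffs)

# Hypothetical meter readings (mg/dL) vs. hexokinase reference values
ref = [90.0, 150.0, 250.0, 60.0]
meter = [95.0, 144.0, 260.0, 57.0]
print(f"MARD = {mard(meter, ref):.1f}%")   # MARD = 4.6%
```

A lower MARD indicates readings that track the laboratory reference more closely, which is why it is the headline comparison statistic for the three meters.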
Vertical Accuracy Evaluation of Aster GDEM2 Over a Mountainous Area Based on Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Liang, Y.; Qu, Y.; Guo, D.; Cui, T.
2018-05-01
Global digital elevation models (GDEMs) provide elementary information on the heights of the Earth's surface and objects on the ground, and have become an important data source for a range of applications. The vertical accuracy of a GDEM is critical for these applications. Nowadays, UAVs have been widely used for large-scale surveying and mapping. Compared with traditional surveying techniques, UAV photogrammetry is more convenient and more cost-effective, and it produces a DEM of the survey area with high accuracy and high spatial resolution. As a result, DEMs derived from UAV photogrammetry can be used for a more detailed and accurate evaluation of a GDEM product. This study investigates the vertical accuracy (in terms of elevation accuracy and systematic errors) of the ASTER GDEM Version 2 dataset over complex terrain based on UAV photogrammetry. Experimental results show that the elevation errors of the ASTER GDEM2 are normally distributed and the systematic error is quite small. The accuracy of the ASTER GDEM2 coincides well with that reported by the ASTER validation team. The accuracy in the research area is negatively correlated with both the slope of the terrain and the number of stereo observations. This study also evaluates the vertical accuracy of the up-sampled ASTER GDEM2. Experimental results show that the accuracy of the up-sampled ASTER GDEM2 data in the research area is not significantly reduced by the complexity of the terrain. The fine-grained accuracy evaluation of the ASTER GDEM2 is informative for GDEM-supported UAV photogrammetric applications.
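Vertical accuracy against a higher-accuracy reference is conventionally summarized by the mean error (systematic bias) and RMSE at checkpoints; a minimal sketch with hypothetical heights (the values below are made up for illustration):

```python
import math

def vertical_accuracy(gdem_heights, reference_heights):
    """Mean error (bias) and RMSE of GDEM heights vs. reference checkpoints."""
    errors = [g - r for g, r in zip(gdem_heights, reference_heights)]
    n = len(errors)
    mean_err = sum(errors) / n                       # systematic bias
    rmse = math.sqrt(sum(e * e for e in errors) / n) # overall elevation accuracy
    return mean_err, rmse

# Hypothetical checkpoint heights (m): ASTER GDEM2 vs. UAV-photogrammetry DEM
gdem = [512.4, 498.1, 530.9, 505.6]
uav  = [510.0, 499.5, 529.0, 506.2]
bias, rmse = vertical_accuracy(gdem, uav)
print(f"bias = {bias:+.2f} m, RMSE = {rmse:.2f} m")
```

With many checkpoints, the error histogram can also be tested for normality, as the study does for the GDEM2 errors.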
Frozen section pathology for decision making in parotid surgery.
Olsen, Kerry D; Moore, Eric J; Lewis, Jean E
2013-12-01
For parotid lesions, the high accuracy and utility of intraoperative frozen section (FS) pathology compared with permanent section pathology facilitates intraoperative decision making about the extent of surgery required. To demonstrate the accuracy and utility of FS pathology of parotid lesions as one factor in intraoperative decision making. Retrospective review of patients undergoing parotidectomy at a tertiary care center. Evaluation of the accuracy of FS pathology for parotid surgery by comparing FS pathology results with those of permanent section. Documented changes from FS to permanent section in 1339 parotidectomy pathology reports from January 1, 2000, through December 31, 2009, included 693 benign and 268 primary and metastatic malignant tumors. Changes in diagnosis were found from benign to malignant (n = 11) and malignant to benign (n = 2). Sensitivity and specificity of a malignant diagnosis were 98.5% and 99.0%, respectively. Other changes were for lymphoma vs inflammation or lymphoma typing (n = 89) and for confirmation of or change in tumor type for benign (n = 36) or malignant (n = 69) tumors. No case changed from low- to high-grade malignant tumor. Only 4 cases that changed from FS to permanent section would have affected intraoperative decision making. Three patients underwent additional surgery 2 to 3 weeks later. Overall, only 1 patient was overtreated (a lymphoma initially deemed carcinoma). Frozen section pathology for parotid lesions has high accuracy and utility in intraoperative decision making, facilitating timely complete procedures.
Rastogi, Amit; Early, Dayna S; Gupta, Neil; Bansal, Ajay; Singh, Vikas; Ansstas, Michael; Jonnalagadda, Sreenivasa S; Hovis, Christine E; Gaddam, Srinivas; Wani, Sachin B; Edmundowicz, Steven A; Sharma, Prateek
2011-09-01
Missed adenomas and the inability to accurately differentiate polyp histology remain the main limitations of standard-definition white-light (SD-WL) colonoscopy. To compare the adenoma detection rates of SD-WL with those of high-definition white-light (HD-WL) and narrow-band imaging (NBI), as well as the accuracy of predicting polyp histology. Multicenter, prospective, randomized, controlled trial. Two academic medical centers in the United States. Subjects undergoing screening or surveillance colonoscopy. Subjects were randomized to undergo colonoscopy with one of the following: SD-WL, HD-WL, or NBI. The proportion of subjects with detected adenomas, adenomas detected per subject, and the accuracy of predicting polyp histology in real time. A total of 630 subjects were included. The proportion of subjects with adenomas was 38.6% with SD-WL compared with 45.7% with HD-WL and 46.2% with NBI (P = .17 and P = .14, respectively). Adenomas detected per subject were 0.69 with SD-WL compared with 1.12 with HD-WL and 1.13 with NBI (P = .016 and P = .014, respectively). HD-WL and NBI detected more subjects with flat and right-sided adenomas compared with SD-WL (all P values <.005). NBI had superior sensitivity (90%) and accuracy (82%) in predicting adenomas compared with SD-WL and HD-WL (all P values <.005). Academic medical centers with experienced endoscopists. There was no difference in the proportion of subjects with adenomas detected with SD-WL, HD-WL, and NBI. However, HD-WL and NBI detected significantly more adenomas per subject (>60%) compared with SD-WL. NBI had the highest accuracy in predicting adenomas in real time during colonoscopy. (NCT00614770). Copyright © 2011 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Graph-based signal integration for high-throughput phenotyping
2012-01-01
Background Electronic Health Records aggregated in Clinical Data Warehouses (CDWs) promise to revolutionize Comparative Effectiveness Research and suggest new avenues of research. However, the effectiveness of CDWs is diminished by the lack of properly labeled data. We present a novel approach that integrates knowledge from the CDW, the biomedical literature, and the Unified Medical Language System (UMLS) to perform high-throughput phenotyping. In this paper, we automatically construct a graphical knowledge model and then use it to phenotype breast cancer patients. We compare the performance of this approach to using MetaMap when labeling records. Results MetaMap's overall accuracy at identifying breast cancer patients was 51.1% (n=428); recall=85.4%, precision=26.2%, and F1=40.1%. Our unsupervised graph-based high-throughput phenotyping had accuracy of 84.1%; recall=46.3%, precision=61.2%, and F1=52.8%. Conclusions We conclude that our approach is a promising alternative for unsupervised high-throughput phenotyping. PMID:23320851
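The reported F1 scores follow directly from the stated precision and recall (F1 is their harmonic mean); a quick check of the paper's own numbers:

```python
def f1(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# MetaMap: precision 26.2%, recall 85.4% -> F1 ~ 40.1% as reported
print(round(100 * f1(0.262, 0.854), 1))   # 40.1
# Graph-based: precision 61.2%, recall 46.3% -> ~52.7 here vs. the reported
# 52.8, consistent given rounding of the published inputs
print(round(100 * f1(0.612, 0.463), 1))
```

The comparison illustrates the trade the graph-based approach makes: lower recall than MetaMap but much higher precision, yielding the better F1 overall.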
Comparison of VFA titration procedures used for monitoring the biogas process.
Lützhøft, Hans-Christian Holten; Boe, Kanokwan; Fang, Cheng; Angelidaki, Irini
2014-05-01
Titrimetric determination of volatile fatty acid (VFA) content is a common way to monitor a biogas process. However, digested manure from co-digestion biogas plants has a complex matrix with high concentrations of interfering components, giving varying results with different titration procedures. Currently, no standardized procedure is used, and it is therefore difficult to compare performance among plants. The aim of this study was to evaluate four titration procedures (for determination of the VFA levels of digested manure samples) and compare the results with gas chromatographic (GC) analysis. Two of the procedures are commonly used in biogas plants and two are discussed in the literature. The results showed that optimal titration results were obtained when 40 mL of four-times-diluted digested manure was gently stirred (200 rpm). Results from samples with different VFA concentrations (1-11 g/L) showed a linear correlation between titration results and GC measurements. However, titration generally overestimated the VFA content compared with GC measurements when samples had low VFA concentrations, i.e., around 1 g/L. The accuracy of titration increased when samples had high VFA concentrations, i.e., around 5 g/L. It was further found that the studied ionisable interfering components had the lowest effect on titration when the sample had a high VFA concentration. In contrast, bicarbonate, phosphate and lactate had a significant effect on titration accuracy at low VFA concentrations. An extended 5-point titration procedure with pH correction was best at handling interference from bicarbonate, phosphate and lactate at low VFA concentrations. Conversely, the simplest titration procedure, with only two pH end-points, showed the highest accuracy of all the titration procedures at high VFA concentrations.
All in all, if the composition of the digested manure sample is not known, the procedure with only two pH end-points should be the procedure of choice, due to its simplicity and accuracy. Copyright © 2014 Elsevier Ltd. All rights reserved.
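The linearity check between titration results and GC measurements described above amounts to an ordinary least-squares fit. A minimal sketch follows; the paired values are hypothetical, with a slope near 1 and a positive intercept mimicking the reported overestimation at low VFA concentrations:

```python
# Least-squares line through paired (GC reference, titration) VFA values.

def linear_fit(x, y):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical paired measurements (g/L): GC reference vs. titration.
gc = [1.0, 3.0, 5.0, 7.0, 9.0, 11.0]
titration = [1.4, 3.3, 5.2, 7.1, 9.1, 11.0]  # overestimates at low VFA

slope, intercept = linear_fit(gc, titration)
print(round(slope, 3), round(intercept, 3))  # -> 0.961 0.415
```

A slope below 1 with a positive intercept is exactly the pattern the abstract reports: overestimation that shrinks as the VFA concentration rises.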
Cristache, Corina Marilena; Gurbanescu, Silviu
2017-01-01
The aim of this study was to evaluate the accuracy of a stereolithographic template, with the sleeve structure incorporated into the design, for computer-guided dental implant insertion in partially edentulous patients. Sixty-five implants were placed in twenty-five consecutive patients with a stereolithographic surgical template. After surgery, a digital impression was taken, and the 3D deviation of the implant positions at the entry point and apex, as well as the angular deviation, was measured using inspection software. The Mann-Whitney U test was used to compare accuracy between maxillary and mandibular surgical guides. A p value < .05 was considered significant. The mean (and standard deviation) of the 3D error was 0.798 mm (±0.52) at the entry point and 1.17 mm (±0.63) at the implant apex, and the mean angular deviation was 2.34 (±0.85). A statistically significantly smaller 3D error was observed in the mandible than in the maxilla at the entry point (p = .037) and the implant apex (p = .008), as well as in angular deviation (p = .030). The surgical template used proved highly accurate for implant insertion. Within the limitations of the present study, the protocol of comparing a digital file (the treatment plan) with a postinsertion digital impression may be considered a useful procedure for assessing surgical template accuracy, avoiding the radiation exposure of postoperative CBCT scanning.
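The Mann-Whitney U comparison of maxillary versus mandibular deviations can be sketched as follows; the two deviation samples below are hypothetical, not the study's data:

```python
def mann_whitney_u(a, b):
    """Return the Mann-Whitney U statistic (smaller of U1, U2),
    using average ranks for tied values."""
    pooled = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of 1-based ranks i+1..j
        i = j
    r1 = sum(ranks[x] for x in a)
    u1 = r1 - len(a) * (len(a) + 1) / 2
    u2 = len(a) * len(b) - u1
    return min(u1, u2)

# Hypothetical 3D errors (mm): mandible vs. maxilla
print(mann_whitney_u([0.45, 0.52, 0.61], [0.88, 0.95, 1.10]))  # -> 0.0
```

U = 0 means the two samples are completely separated, the most extreme value the statistic can take.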
NASA Astrophysics Data System (ADS)
Haines, P. E.; Esler, J. G.; Carver, G. D.
2014-06-01
A new methodology for the formulation of an adjoint to the transport component of the chemistry transport model TOMCAT is described and implemented in a new model, RETRO-TOM. The Eulerian backtracking method is used, allowing the forward advection scheme (Prather's second-order moments) to be efficiently exploited in the backward adjoint calculations. Prather's scheme is shown to be time symmetric, suggesting the possibility of high accuracy. To attain this accuracy, however, it is necessary to make a careful treatment of the "density inconsistency" problem inherent to offline transport models. The results are verified using a series of test experiments. These demonstrate the high accuracy of RETRO-TOM when compared with direct forward sensitivity calculations, at least for problems in which flux limiters in the advection scheme are not required. RETRO-TOM therefore combines the flexibility and stability of a "finite difference of adjoint" formulation with the accuracy of an "adjoint of finite difference" formulation.
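The "adjoint of finite difference" idea can be illustrated on a linear model step x_{n+1} = A x_n: the sensitivity of a scalar output J = g·x_N with respect to the initial state is obtained by applying the transpose Aᵀ backward in time. A minimal sketch (the 2×2 matrix, weights, and initial state are arbitrary illustrations, not TOMCAT's transport operator):

```python
def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def transpose(m):
    return [list(col) for col in zip(*m)]

A = [[0.9, 0.1],
     [0.2, 0.8]]          # one linear "transport" step
g = [1.0, -1.0]           # gradient of J with respect to the final state
x0 = [2.0, 3.0]

# Forward: two steps, then J = g . x2
x2 = matvec(A, matvec(A, x0))
# Adjoint: run A^T backward twice from g to get dJ/dx0
adj = matvec(transpose(A), matvec(transpose(A), g))

# Finite-difference check of dJ/dx0[0]
eps = 1e-6
J = lambda x: sum(gi * xi for gi, xi in zip(g, matvec(A, matvec(A, x))))
fd = (J([x0[0] + eps, x0[1]]) - J(x0)) / eps
print(abs(adj[0] - fd) < 1e-6)  # -> True
```

Because the model is linear here, the adjoint gradient matches the finite-difference gradient to rounding error; in RETRO-TOM the same transpose relationship is exploited for the full advection scheme.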
Botti, Lorenzo; Paliwal, Nikhil; Conti, Pierangelo; Antiga, Luca; Meng, Hui
2018-06-01
Image-based computational fluid dynamics (CFD) has shown potential to aid in the clinical management of intracranial aneurysms (IAs), but its adoption in clinical practice has lagged, partly due to a lack of accuracy assessment and sensitivity analysis. To numerically solve the flow-governing equations, CFD solvers generally rely on one of two spatial discretization schemes: Finite Volume (FV) and Finite Element (FE). Since increasingly accurate numerical solutions are obtained by different means, the accuracies and computational costs of FV and FE formulations cannot be compared directly. To this end, in this study we benchmark two representative CFD solvers in simulating flow in a patient-specific IA model: (1) ANSYS Fluent, a commercial FV-based solver, and (2) VMTKLab multidGetto, a discontinuous Galerkin (dG) FE-based solver. The FV solver's accuracy is improved by increasing the spatial mesh resolution (134k, 1.1M, 8.6M and 68.5M tetrahedral element meshes). The dGFE solver's accuracy is increased by increasing the degree of the polynomials (first, second, third and fourth degree) on the base 134k tetrahedral element mesh. Solutions from the best FV and dGFE approximations are used as the baseline for error quantification. On average, velocity errors for the second-best approximations are approximately 1 cm/s for a [0, 125] cm/s velocity magnitude field. Results show that high-order dGFE provides better accuracy per degree of freedom but worse accuracy per Jacobian non-zero entry compared to FV. Cross-comparison of velocity errors demonstrates asymptotic convergence of both solvers to the same numerical solution. Nevertheless, the discrepancy between under-resolved velocity fields suggests that mesh independence is reached along different paths. This article is protected by copyright. All rights reserved.
Kim, Bum Soo; Kim, Tae-Hwan; Kwon, Tae Gyun; Yoo, Eun Sang
2012-05-01
Several studies have demonstrated the superiority of endorectal coil magnetic resonance imaging (MRI) over pelvic phased-array coil MRI at 1.5 Tesla for local staging of prostate cancer. However, few have examined which approach is more accurate at 3 Tesla. In this study, we compared the accuracy of local staging of prostate cancer using pelvic phased-array coil versus endorectal coil MRI at 3 Tesla. Between January 2005 and May 2010, 151 patients underwent radical prostatectomy. All patients were evaluated with either pelvic phased-array coil or endorectal coil prostate MRI prior to surgery (63 endorectal coil and 88 pelvic phased-array coil). Tumor stage based on MRI was compared with pathologic stage. We calculated the specificity, sensitivity and accuracy of each group in the evaluation of extracapsular extension and seminal vesicle invasion. Both endorectal coil and pelvic phased-array coil MRI achieved high specificity, low sensitivity and moderate accuracy for the detection of extracapsular extension and seminal vesicle invasion. There were no statistically significant differences in specificity, sensitivity or accuracy between the two groups. Overall staging accuracy, sensitivity and specificity were not significantly different between endorectal coil and pelvic phased-array coil MRI.
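The staging metrics reported above all derive from a 2×2 table against the pathologic reference standard. A minimal sketch, with hypothetical counts chosen to show the high-specificity/low-sensitivity pattern the abstract describes:

```python
def staging_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and overall accuracy from a 2x2 table
    of test results versus the reference (pathologic) standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for detection of extracapsular extension
m = staging_metrics(tp=6, fp=2, tn=50, fn=5)
print(m)
```

With these counts, sensitivity is 6/11 ≈ 0.55 while specificity is 50/52 ≈ 0.96, i.e. the test rarely calls extension where there is none but misses many true cases.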
Diagnostic Accuracy of the Neck Tornado Test as a New Screening Test in Cervical Radiculopathy.
Park, Juyeon; Park, Woo Young; Hong, Seungbae; An, Jiwon; Koh, Jae Chul; Lee, Youn-Woo; Kim, Yong Chan; Choi, Jong Bum
2017-01-01
The Spurling test, although a highly specific provocative test of the cervical spine in cervical radiculopathy (CR), has low to moderate sensitivity. Thus, we introduced the neck tornado test (NTT) to examine the neck and the cervical spine in CR. The aim of this study was to introduce a new provocative test, the NTT, and compare its diagnostic accuracy with that of a widely accepted provocative test, the Spurling test. Retrospective study. Medical records of 135 subjects with neck pain (CR, n = 67; without CR, n = 68) who had undergone cervical spine magnetic resonance imaging and been referred to the pain clinic between September 2014 and August 2015 were reviewed. Both the Spurling test and the NTT were performed in all patients by expert examiners. Sensitivity, specificity, and accuracy were compared for the Spurling test and the NTT. The sensitivity of the Spurling test and the NTT was 55.22% and 85.07% (P < 0.0001); specificity, 98.53% and 86.76% (P = 0.0026); and accuracy, 77.04% and 85.93% (P = 0.0423), respectively. The NTT is more sensitive than the Spurling test, with superior diagnostic accuracy for CR diagnosed by magnetic resonance imaging.
Matsuba, Shinji; Tabuchi, Hitoshi; Ohsugi, Hideharu; Enno, Hiroki; Ishitobi, Naofumi; Masumoto, Hiroki; Kiuchi, Yoshiaki
2018-05-09
To predict exudative age-related macular degeneration (AMD), we combined a deep convolutional neural network (DCNN), a machine-learning algorithm, with Optos, an ultra-wide-field fundus imaging system. First, to evaluate the diagnostic accuracy of the DCNN, 364 photographic images (AMD: 137) were augmented, and the area under the curve (AUC), sensitivity and specificity were examined. Furthermore, to compare the diagnostic abilities of the DCNN and six ophthalmologists, we prepared a set of 84 images comprising equal numbers of normal and wet-AMD cases, and calculated the correct answer rate, specificity, sensitivity, and response times. The DCNN exhibited 100% sensitivity and 97.31% specificity for wet-AMD images, with an average AUC of 99.76%. Moreover, in the comparison of diagnostic abilities, the average accuracy of the DCNN was 100%, whereas the accuracy of the ophthalmologists, determined from Optos images alone without a fundus examination, was 81.9%. A combination of the DCNN with Optos images is not better than a medical examination; however, it can identify exudative AMD with a high level of accuracy. Our system is considered useful for screening and telemedicine.
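The AUC reported above can be computed directly from classifier scores via the rank (Mann-Whitney) identity: it is the probability that a randomly chosen positive case scores above a randomly chosen negative one. A sketch with hypothetical classifier outputs:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve: fraction of positive/negative pairs in
    which the positive case scores higher, counting ties as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical network outputs for wet-AMD (positive) and normal eyes
result = auc([0.9, 0.8, 0.95], [0.2, 0.4, 0.85])
print(result)
```

Here one negative case (0.85) outranks one positive case (0.8), so the AUC is 8/9 ≈ 0.89 rather than 1.0; a perfectly separated classifier, as the abstract approaches, gives 1.0.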
Activity Recognition for Personal Time Management
NASA Astrophysics Data System (ADS)
Prekopcsák, Zoltán; Soha, Sugárka; Henk, Tamás; Gáspár-Papanek, Csaba
We describe an accelerometer-based activity recognition system for mobile phones, with a special focus on personal time management. We compare several data mining algorithms for the automatic recognition task in both single-user and multi-user scenarios, and improve accuracy with heuristics and advanced data mining methods. The results show that daily activities can be recognized with high accuracy, and that integration with the RescueTime software can give good insights for personal time management.
Grazing Incidence Optics for X-ray Interferometry
NASA Technical Reports Server (NTRS)
Shipley, Ann; Zissa, David; Cash, Webster; Joy, Marshall
1999-01-01
Grazing incidence mirror parameters and constraints for x-ray interferometry are described. We present interferometer system tolerances and ray trace results used to define mirror surface accuracy requirements. Mirror material, surface figure, roughness, and geometry are evaluated based on analysis results. We also discuss mirror mount design constraints, finite element analysis, environmental issues, and solutions. Challenges associated with quantifying high accuracy mirror surface quality are addressed and test results are compared with theoretical predictions.
Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi
2016-01-01
Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested whether using aerial measurements of canopy temperature and of green and red normalized difference vegetation indices as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on the training and test sets, together with grain yield on the training set, were modeled as multivariate and compared to univariate models with grain yield on the training set only. Cross-validation accuracies were estimated within- and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic, prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured at high throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
Discontinuous Galerkin Methods and High-Speed Turbulent Flows
NASA Astrophysics Data System (ADS)
Atak, Muhammed; Larsson, Johan; Munz, Claus-Dieter
2014-11-01
Discontinuous Galerkin methods gain increasing importance within the CFD community as they combine arbitrary high order of accuracy in complex geometries with parallel efficiency. Particularly the discontinuous Galerkin spectral element method (DGSEM) is a promising candidate for both the direct numerical simulation (DNS) and large eddy simulation (LES) of turbulent flows due to its excellent scaling attributes. In this talk, we present a DNS of a compressible turbulent boundary layer along a flat plate at a free-stream Mach number of M = 2.67 and assess the computational efficiency of the DGSEM at performing high-fidelity simulations of both transitional and turbulent boundary layers. We compare the accuracy of the results as well as the computational performance to results using a high order finite difference method.
Reliability and accuracy of Crystaleye spectrophotometric system.
Chen, Li; Tan, Jian Guo; Zhou, Jian Feng; Yang, Xu; Du, Yang; Wang, Fang Ping
2010-01-01
The aim was to develop an in vitro shade-measuring model to evaluate the reliability and accuracy of the Crystaleye spectrophotometric system, a newly developed spectrophotometer. Four shade guides, VITA Classical, VITA 3D-Master, Chromascop and Vintage Halo NCC, were measured with the Crystaleye spectrophotometer in a standardised model, ten times for each of 107 shade tabs. The shade-matching results and the CIE L*a*b* values of the cervical, body and incisal regions for each measurement were automatically analysed using the supporting software. Reliability and accuracy were calculated for each shade tab, both as a percentage and as a colour difference (ΔE). Differences were analysed by one-way ANOVA in the cervical, body and incisal regions. The range of reliability was 88.81% to 98.97% and 0.13 to 0.24 ΔE units, and that of accuracy was 44.05% to 91.25% and 1.03 to 1.89 ΔE units. Significant differences in reliability and accuracy were found between the body region and the cervical and incisal regions. Comparisons made among regions and shade guides revealed that evaluation in ΔE was more apt to disclose the differences. Measurements with the Crystaleye spectrophotometer had similar, high reliability across shade guides and regions, indicating predictable repeated measurements. Accuracy in the body region was high and less variable compared with the cervical and incisal regions.
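The ΔE values above are colour differences between L*a*b* triples; assuming the common CIE76 (Euclidean) formula, a minimal sketch:

```python
def delta_e(lab1, lab2):
    """CIE76 colour difference between two CIE L*a*b* triples:
    the Euclidean distance sqrt(dL^2 + da^2 + db^2)."""
    return sum((x - y) ** 2 for x, y in zip(lab1, lab2)) ** 0.5

# Two hypothetical shade-tab readings (L*, a*, b*)
print(delta_e((62.0, 1.5, 12.0), (62.0, 4.5, 16.0)))  # -> 5.0
```

A ΔE around 1-2, as in the accuracy range reported above, is near the threshold usually quoted for a visually perceptible colour difference; later formulas (CIE94, CIEDE2000) weight the terms differently.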
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
2003-01-01
Data are not readily available on the accuracy of one of the most commonly used home blood glucose meters, the One Touch Ultra (LifeScan, Milpitas, California). The purpose of this report is to provide information on the accuracy of this home glucose meter in children with type 1 diabetes. During a 24-h clinical research center stay, the accuracy of the Ultra meter was assessed in 91 children, 3-17 years old, with type 1 diabetes by comparing the Ultra glucose values with concurrent reference serum glucose values measured in a central laboratory. The Pearson correlation between the 2,068 paired Ultra and reference values was 0.97, with the median relative absolute difference being 6%. Ninety-four percent of all Ultra values (96% of venous and 84% of capillary samples) met the proposed International Organisation for Standardisation (ISO) standard for instruments used for self-monitoring of glucose when compared with venous reference values. Ninety-nine percent of values were in zones A + B of the Modified Error Grid. A high degree of accuracy was seen across the full range of glucose values. For 353 data points during an insulin-induced hypoglycemia test, the Ultra meter was found to have accuracy that was comparable to concurrently used benchmark instruments (Beckman, YSI, or i-STAT); 95% and 96% of readings from the Ultra meter and the benchmark instruments met the proposed ISO criteria, respectively. These results confirm that the One Touch Ultra meter provides accurate glucose measurements for both hypoglycemia and hyperglycemia in children with type 1 diabetes.
Tekin, Eylul; Roediger, Henry L
2017-01-01
Researchers use a wide range of confidence scales when measuring the relationship between confidence and accuracy in reports from memory, with the highest number usually representing the greatest confidence (e.g., 4-point, 20-point, and 100-point scales). The assumption seems to be that the range of the scale has little bearing on the confidence-accuracy relationship. In two old/new recognition experiments, we directly investigated this assumption using word lists (Experiment 1) and faces (Experiment 2) by employing 4-, 5-, 20-, and 100-point scales. Using confidence-accuracy characteristic (CAC) plots, we asked whether confidence ratings would yield similar CAC plots, indicating comparability in use of the scales. For the comparisons, we divided 100-point and 20-point scales into bins of either four or five and asked, for example, whether confidence ratings of 4, 16-20, and 76-100 would yield similar values. The results show that, for both types of material, the different scales yield similar CAC plots. Notably, when subjects express high confidence, regardless of which scale they use, they are likely to be very accurate (even though they studied 100 words and 50 faces in each list in 2 experiments). The scales seem convertible from one to the other, and choice of scale range probably does not affect research into the relationship between confidence and accuracy. High confidence indicates high accuracy in recognition in the present experiments.
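A confidence-accuracy characteristic point is simply the mean accuracy of responses whose confidence falls in a given bin, which is how differently ranged scales become comparable. A sketch of the binning, with hypothetical ratings and outcomes:

```python
def cac_points(confidences, correct, bin_edges):
    """Mean accuracy per confidence bin. Bins are [lo, hi) except the
    last, which also includes the top edge; empty bins yield None."""
    points = []
    for lo, hi in zip(bin_edges, bin_edges[1:]):
        hits = [c for conf, c in zip(confidences, correct)
                if lo <= conf < hi or (hi == bin_edges[-1] and conf == hi)]
        points.append(sum(hits) / len(hits) if hits else None)
    return points

# Hypothetical 100-point ratings collapsed into four bins
conf = [10, 30, 55, 80, 90, 95, 20, 70]
hit = [0, 0, 1, 1, 1, 1, 1, 0]
points = cac_points(conf, hit, [0, 25, 50, 75, 100])
print(points)  # -> [0.5, 0.0, 0.5, 1.0]
```

Ratings of 76-100 on a 100-point scale and 4 on a 4-point scale land in the same top bin, which is the comparison the experiments above make.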
Information filtering via biased heat conduction
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Zhou, Tao; Guo, Qiang
2011-09-01
The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010)], which achieves high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which can simultaneously enhance accuracy and diversity. Extensive experimental analyses demonstrate that the accuracy on the MovieLens, Netflix, and Delicious datasets can be improved by 43.5%, 55.4% and 19.2%, respectively, compared with the standard heat conduction algorithm, while the diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm can simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat conduction algorithm. This work provides a credible way toward highly efficient information filtering.
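A heat-conduction recommendation step can be sketched on a tiny user-object bipartite graph. The degree-bias exponent `theta` below is a stand-in for the paper's lowering of small-degree object temperatures; the exact biasing rule and the graph are illustrative assumptions, not the published algorithm:

```python
def biased_heat_conduction(links, target_user, theta=0.8):
    """links: set of (user, obj) pairs. Returns heat scores for objects
    the target user has not collected (higher = recommended sooner)."""
    users = {u for u, _ in links}
    objs = {o for _, o in links}
    k_u = {u: sum(1 for x, _ in links if x == u) for u in users}
    k_o = {o: sum(1 for _, x in links if x == o) for o in objs}
    collected = {o for u, o in links if u == target_user}
    # initial "temperatures": collected objects heated, biased by degree
    f0 = {o: (k_o[o] ** theta if o in collected else 0.0) for o in objs}
    # conduction: users average over their objects, objects over their users
    f_user = {u: sum(f0[o] for x, o in links if x == u) / k_u[u] for u in users}
    return {o: sum(f_user[u] for u, x in links if x == o) / k_o[o]
            for o in objs if o not in collected}

links = {("u1", "a"), ("u1", "b"), ("u2", "b"), ("u2", "c")}
print(biased_heat_conduction(links, "u1"))
```

With theta > 0 the shared large-degree object "b" carries more initial heat than the degree-1 object "a", which is the sense in which small-degree objects are cooled.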
Technique for Very High Order Nonlinear Simulation and Validation
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.
2001-01-01
Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost and demonstrates a useful approach for validating high order nonlinear methods. Methods of up to 15th-order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness decays exponentially unless higher accuracy is used. It is concluded that at least a 7th-order method is required to efficiently propagate a harmonic wave using the nonlinear Euler equations to a distance of 5 wavelengths while maintaining an overall error tolerance low enough to capture both the mean flow and the acoustics.
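The design order of such schemes is typically verified by comparing errors at two grid spacings, assuming the error behaves as C·h^p. A minimal sketch with illustrative error values:

```python
import math

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    """Observed order of accuracy p from errors at two grid spacings,
    assuming error ~ C * h**p."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Halving the spacing drops the error by 2**7: a 7th-order method
p = observed_order(1.0e-3, 1.0e-3 / 128, 0.1, 0.05)
print(round(p, 6))  # -> 7.0
```

If the observed p falls short of the design order, boundary treatment or under-resolution is usually the culprit.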
NASA Technical Reports Server (NTRS)
Shyy, W.; Thakur, S.; Udaykumar, H. S.
1993-01-01
A high accuracy convection scheme using a sequential solution technique has been developed and applied to simulate longitudinal combustion instability and its active control. The scheme has been devised in the spirit of the Total Variation Diminishing (TVD) concept with special source term treatment. Due to the substantial heat release effect, a clear delineation of the key elements employed by the scheme, i.e., the adjustable damping factor and the source term treatment, has been made. Compared with the first-order upwind scheme previously utilized, the present results exhibit less damping and are free from spurious oscillations, offering improved quantitative accuracy while confirming the spectral analysis reported earlier. A simple feedback type of active control has been found to be capable of enhancing or attenuating the magnitude of the combustion instability.
A noncontact RF-based respiratory sensor: results of a clinical trial.
Madsen, Spence; Baczuk, Jordan; Thorup, Kurt; Barton, Richard; Patwari, Neal; Langell, John T
2016-06-01
Respiratory rate (RR) is a critical vital sign monitored in health care settings. Current monitors suffer from sensor-contact failure, inaccurate data, and limited patient mobility. There is a critical need for an accurate, reliable, noncontact system to monitor RR. We developed a contact-free radio frequency (RF)-based system that measures movement using WiFi signal diffraction, which is converted into interpretable data using a Fourier transform. Here, we investigate the system's ability to measure the fine movements associated with human respiration. Testing was conducted on subjects using visual-cue, fixed-tempo instructions to breathe at standard RRs. Blinded instruction-based RRs were compared to the RF-acquired data to determine measurement accuracy. The RF-based technology was then studied in postoperative ventilator-dependent patients. Blinded ventilator capnographic RR data were collected for each patient and compared to the RF-acquired data to determine measurement accuracy. Respiratory rate data collected from 10 subjects breathing at a fixed RR (14, 16, 18, or 20 breaths/min) demonstrated 95.5% measurement accuracy between the patient's actual rate and that measured by our RF technology. Ten patients were enrolled into the clinical trial, and blinded ventilator capnographic RR data were compared to the RF-acquired data; the RF-based data showed 88.8% measurement accuracy with ventilator capnography. Initial clinical pilot trials with our contact-free RF-based monitoring system demonstrate a high degree of RR measurement accuracy when compared to capnographic data. Based on these results, we believe RF-based systems present a promising noninvasive, inexpensive, and accurate tool for continuous RR monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.
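The Fourier-transform step described above amounts to picking the dominant frequency of the movement signal. A minimal sketch on a synthetic sinusoid, using a plain O(n²) discrete Fourier transform rather than the system's actual signal pipeline:

```python
import cmath
import math

def respiratory_rate(signal, fs):
    """Estimate breaths per minute: remove the DC component, take the
    DFT, and convert the dominant frequency bin to a per-minute rate."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * fs * 60.0 / n

# Synthetic chest movement: 0.3 Hz sinusoid (18 breaths/min), 60 s at 10 Hz
sig = [math.sin(2 * math.pi * 0.3 * t / 10.0) for t in range(600)]
print(respiratory_rate(sig, fs=10.0))  # -> 18.0
```

Real RF-derived movement signals are far noisier than this sinusoid; windowing and band-limiting to plausible respiratory frequencies would be the usual next steps.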
Performance Evaluation of Three Blood Glucose Monitoring Systems Using ISO 15197
Bedini, José Luis; Wallace, Jane F.; Pardo, Scott; Petruschke, Thorsten
2015-01-01
Background: Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients’ health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Methods: Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. Results: All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to the reference values than the other 2 BGMS. Insulin dosing errors were also lower for the Contour Next USB than for the other 2 systems. Conclusions: All BGMS fulfilled the ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. The Contour Next USB had the lowest MARD values across the tested glucose range compared with the 2 other BGMS. The CEG and SEG analyses, as well as calculation of the hypothetical bolus insulin dosing error, suggest a high accuracy of the Contour Next USB. PMID:26445813
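MARD and the ISO 15197:2013 accuracy criterion can both be computed from paired meter/reference readings. A sketch with hypothetical values; the limits encoded below (±15 mg/dL when the reference is under 100 mg/dL, ±15% otherwise) follow the 2013 revision of the standard:

```python
def mard(meter, reference):
    """Mean absolute relative difference (percent) vs reference values."""
    rel = [abs(m - r) / r for m, r in zip(meter, reference)]
    return 100.0 * sum(rel) / len(rel)

def iso15197_2013_pass_rate(meter, reference):
    """Fraction of readings within the ISO 15197:2013 accuracy limits:
    +/-15 mg/dL for reference < 100 mg/dL, +/-15% at or above."""
    ok = 0
    for m, r in zip(meter, reference):
        limit = 15.0 if r < 100.0 else 0.15 * r
        if abs(m - r) <= limit:
            ok += 1
    return ok / len(meter)

# Hypothetical paired readings (mg/dL)
ref = [80.0, 95.0, 120.0, 200.0, 300.0]
meter = [88.0, 92.0, 130.0, 190.0, 350.0]
print(round(mard(meter, ref), 1), iso15197_2013_pass_rate(meter, ref))  # -> 8.6 0.8
```

The standard requires 95% of readings within those limits, so this hypothetical meter (80%) would fail despite a single-digit MARD.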
A Neuro-Fuzzy Approach in the Classification of Students' Academic Performance
Do, Quang Hung; Chen, Jeng-Fung
2013-01-01
Classifying the student academic performance with high accuracy facilitates admission decisions and enhances educational services at educational institutions. The purpose of this paper is to present a neuro-fuzzy approach for classifying students into different groups. The neuro-fuzzy classifier used previous exam results and other related factors as input variables and labeled students based on their expected academic performance. The results showed that the proposed approach achieved a high accuracy. The results were also compared with those obtained from other well-known classification approaches, including support vector machine, Naive Bayes, neural network, and decision tree approaches. The comparative analysis indicated that the neuro-fuzzy approach performed better than the others. It is expected that this work may be used to support student admission procedures and to strengthen the services of educational institutions. PMID:24302928
NASA Astrophysics Data System (ADS)
Matongera, Trylee Nyasha; Mutanga, Onisimo; Dube, Timothy; Sibanda, Mbulisi
2017-05-01
Bracken fern is an invasive plant that presents serious environmental, ecological and economic problems around the world. An understanding of the spatial distribution of bracken fern weeds is therefore essential for providing appropriate management strategies at both local and regional scales. The aim of this study was to assess the utility of the freely available, medium-resolution Landsat 8 OLI sensor in the detection and mapping of bracken fern at Cathedral Peak, South Africa. To achieve this objective, the results obtained from Landsat 8 OLI were compared with those derived using the costly, high spatial resolution WorldView-2 imagery. Since previous studies have already successfully mapped bracken fern using the high spatial resolution WorldView-2 image, the comparison was done to investigate the magnitude of the difference in accuracy between the two sensors in relation to their acquisition costs. To evaluate the performance of Landsat 8 OLI in discriminating bracken fern compared to that of WorldView-2, we tested the utility of (i) spectral bands; (ii) derived vegetation indices; and (iii) the combination of spectral bands and vegetation indices, based on a discriminant analysis classification algorithm. After resampling the training and testing data and reclassifying several times (n = 100) based on the combined data sets, the overall accuracies for Landsat 8 and WorldView-2 were tested for significant differences using the Mann-Whitney U test. The results showed that the integration of the spectral bands and derived vegetation indices yielded the best overall classification accuracy (80.08% and 87.80% for Landsat 8 OLI and WorldView-2, respectively). Additionally, the use of derived vegetation indices as a standalone data set produced the weakest overall accuracy results of 62.14% and 82.11% for the Landsat 8 OLI and WorldView-2 images, respectively.
There were significant differences (U(100) = 569.5, z = -10.8242, p < 0.01) between the classification accuracies derived from Landsat 8 OLI and those derived using the WorldView-2 sensor. Although there were significant differences between the Landsat and WorldView-2 accuracies, the magnitude of variation (9%) between the two sensors was within an acceptable range. Therefore, the findings of this study demonstrate that the recently launched Landsat 8 OLI multispectral sensor provides valuable information that could aid in long-term continuous monitoring and the formulation of effective bracken fern management, with acceptable accuracies comparable to those obtained from the high-resolution WorldView-2 commercial sensor.
NASA Astrophysics Data System (ADS)
Jayasekare, Ajith S.; Wickramasuriya, Rohan; Namazi-Rad, Mohammad-Reza; Perez, Pascal; Singh, Gaurav
2017-07-01
A continuous update of building information is necessary in today's urban planning. Digital images acquired by remote sensing platforms at appropriate spatial and temporal resolutions provide an excellent data source to achieve this. In particular, high-resolution satellite images are often used to retrieve objects such as rooftops using feature extraction. However, high-resolution images acquired over built-up areas are associated with noises such as shadows that reduce the accuracy of feature extraction. Feature extraction heavily relies on the reflectance purity of objects, which is difficult to perfect in complex urban landscapes. An attempt was made to increase the reflectance purity of building rooftops affected by shadows. In addition to the multispectral (MS) image, derivatives thereof namely, normalized difference vegetation index and principle component (PC) images were incorporated in generating the probability image. This hybrid probability image generation ensured that the effect of shadows on rooftop extraction, particularly on light-colored roofs, is largely eliminated. The PC image was also used for image segmentation, which further increased the accuracy compared to segmentation performed on an MS image. Results show that the presented method can achieve higher rooftop extraction accuracy (70.4%) in vegetation-rich urban areas compared to traditional methods.
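The hybrid probability image idea (combining NDVI and a principal-component layer so that bright, non-vegetated pixels score high) can be sketched as follows. This is an illustrative reconstruction on random data, not the paper's actual workflow; the band ordering, weights and 0.6 threshold are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-band multispectral tile (blue, green, red, NIR) in [0, 1].
ms = rng.random((4, 64, 64))
red, nir = ms[2], ms[3]

# NDVI is high for vegetation; rooftops should score low here.
ndvi = (nir - red) / (nir + red + 1e-9)

# First principal component of the band stack (a stand-in for the PC image).
flat = ms.reshape(4, -1)
flat_c = flat - flat.mean(axis=1, keepdims=True)
cov = flat_c @ flat_c.T / (flat_c.shape[1] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = (eigvecs[:, -1] @ flat_c).reshape(64, 64)

def minmax(a):
    """Rescale an array to [0, 1]."""
    return (a - a.min()) / (a.max() - a.min() + 1e-9)

# Hybrid probability image: pixels bright in PC1 and non-vegetated score high.
prob = 0.5 * minmax(pc1) + 0.5 * (1.0 - minmax(ndvi))
rooftop_mask = prob > 0.6
print(f"candidate rooftop fraction: {rooftop_mask.mean():.2f}")
```

In practice each layer would be a real image and the weights tuned per scene; the point of the sketch is only how multiple derivative layers are fused into a single probability surface before thresholding.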
DJ-1 is a reliable serum biomarker for discriminating high-risk endometrial cancer.
Di Cello, Annalisa; Di Sanzo, Maddalena; Perrone, Francesca Marta; Santamaria, Gianluca; Rania, Erika; Angotti, Elvira; Venturella, Roberta; Mancuso, Serafina; Zullo, Fulvio; Cuda, Giovanni; Costanzo, Francesco
2017-06-01
New reliable approaches to stratify patients with endometrial cancer into risk categories are highly needed. We have recently demonstrated that DJ-1 is overexpressed in endometrial cancer, showing significantly higher levels both in serum and tissue of patients with high-risk endometrial cancer compared with low-risk endometrial cancer. In this experimental study, we further extended our observation, evaluating the role of DJ-1 as an accurate serum biomarker for high-risk endometrial cancer. A total of 101 endometrial cancer patients and 44 healthy subjects were prospectively recruited. DJ-1 serum levels were evaluated comparing cases and controls and, among endometrial cancer patients, between high- and low-risk patients. The results demonstrate that DJ-1 levels are significantly higher in cases versus controls and in high- versus low-risk patients. The receiver operating characteristic curve analysis shows that DJ-1 has a very good diagnostic accuracy in discriminating endometrial cancer patients versus controls and an excellent accuracy in distinguishing, among endometrial cancer patients, low- from high-risk cases. DJ-1 sensitivity and specificity are the highest when high- and low-risk patients are compared, reaching the value of 95% and 99%, respectively. Moreover, DJ-1 serum levels seem to be correlated with worsening of the endometrial cancer grade and histotype, making it a reliable tool in the preoperative decision-making process.
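The receiver operating characteristic analysis used to evaluate DJ-1 can be illustrated with synthetic serum levels (the means, spreads and group sizes below are invented for illustration, not the study's measurements):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)

# Hypothetical serum DJ-1 levels: higher in high-risk than in low-risk patients.
low_risk = rng.normal(20.0, 5.0, 60)
high_risk = rng.normal(45.0, 6.0, 41)

levels = np.concatenate([low_risk, high_risk])
labels = np.concatenate([np.zeros(60), np.ones(41)])

auc = roc_auc_score(labels, levels)

# Youden's J picks the cutoff maximising sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(labels, levels)
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.3f}, cutoff = {thresholds[best]:.1f}")
print(f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```

The sensitivity/specificity pair reported at the optimal cutoff is exactly the kind of operating point (95%/99% in the study) that a ROC analysis yields.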
Very high resolution aerial films
NASA Astrophysics Data System (ADS)
Becker, Rolf
1986-11-01
The use of very high resolution aerial films in aerial photography is evaluated. Commonly used panchromatic, color, and CIR films and their high resolution equivalents are compared. Based on practical experience and systematic investigations, the very high image quality and improved height accuracy that can be achieved using these films are demonstrated. Advantages to be gained from this improvement and operational restrictions encountered when using high resolution film are discussed.
Mind the gap: Increased inter-letter spacing as a means of improving reading performance.
Dotan, Shahar; Katzir, Tami
2018-06-05
The effects of text display, specifically within-word spacing, on children's reading at different developmental levels have barely been investigated. This study explored the influence of manipulating inter-letter spacing on the reading performance (accuracy and rate) of beginner Hebrew readers compared with older readers, and of low-achieving readers compared with age-matched high-achieving readers. A computer-based isolated word reading task was performed by 132 first and third graders. Words were displayed under two spacing conditions: standard spacing (100%) and increased spacing (150%). Words were balanced for length and frequency across conditions. Results indicated that increased spacing contributed to reading accuracy without affecting reading rate. Interestingly, all first graders benefitted from the spaced condition. This effect was found only in long words and not in short words. Among third graders, only low-achieving readers gained in accuracy from the spaced condition. The theoretical and clinical implications of the findings are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
ERIC Educational Resources Information Center
Martin, R. C.; McLaughlin, T. F.
1981-01-01
When the effectiveness of free time and daily report card systems on assignment completion and accuracy of four junior high school special education students were compared, results indicated that both procedures improved students' performance. (Author)
Highly accurate calculation of rotating neutron stars
NASA Astrophysics Data System (ADS)
Ansorg, M.; Kleinwächter, A.; Meinel, R.
2002-01-01
A new spectral code for constructing general-relativistic models of rapidly rotating stars with an unprecedented accuracy is presented. As a first application, we reexamine uniformly rotating homogeneous stars and compare our results with those obtained by several previous codes. Moreover, representative relativistic examples corresponding to highly flattened rotating bodies are given.
NASA Astrophysics Data System (ADS)
Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt
2018-03-01
Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries' orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched filtering search of the repeated burst and eccentric inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 M⊙–30 M⊙ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ∼129 (38) and ∼2 (11), respectively, for an initially highly eccentric binary, assuming an initial pericenter distance of 20 M_tot (10 M_tot).
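The Fisher matrix technique itself is simple to sketch: for a signal model h(t; θ) in white noise of variance σ², the Fisher matrix is F_ij = σ⁻² Σ_t (∂h/∂θ_i)(∂h/∂θ_j), and the 1σ parameter accuracies are √((F⁻¹)_ii). The toy sinusoidal "chirp" and parameter values below are invented stand-ins, not the eccentric waveform model of the study:

```python
import numpy as np

# Toy chirp-like waveform h(t; amp, f0, fdot) standing in for an inspiral.
t = np.linspace(0.0, 1.0, 4096)

def waveform(params):
    amp, f0, fdot = params
    return amp * np.sin(2.0 * np.pi * (f0 + 0.5 * fdot * t) * t)

theta = np.array([1.0, 30.0, 10.0])  # hypothetical true parameters
sigma_n = 0.1                        # white-noise std deviation per sample

# Central-difference partial derivatives of the waveform w.r.t. each parameter.
eps = 1e-6
derivs = []
for i in range(len(theta)):
    dp = theta.copy(); dp[i] += eps * max(abs(theta[i]), 1.0)
    dm = theta.copy(); dm[i] -= eps * max(abs(theta[i]), 1.0)
    derivs.append((waveform(dp) - waveform(dm)) / (dp[i] - dm[i]))

# Fisher matrix for white noise: F_ij = (1/sigma^2) * sum_t dh_i * dh_j.
F = np.array([[np.dot(di, dj) for dj in derivs] for di in derivs]) / sigma_n**2

# 1-sigma measurement accuracies are the sqrt of the diagonal of F^{-1}.
cov = np.linalg.inv(F)
errors = np.sqrt(np.diag(cov))
print(errors)
```

In the study this inner product is the noise-weighted overlap over the detector network's spectral density, and θ includes masses, eccentricity and sky location, but the inversion step is the same.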
Gjerde, Hallvard; Verstraete, Alain
2010-02-25
To study several methods for estimating the prevalence of high blood concentrations of tetrahydrocannabinol and amphetamine in a population of drug users by analysing oral fluid (saliva). Five methods were compared, including simple calculation procedures dividing the drug concentrations in oral fluid by average or median oral fluid/blood (OF/B) drug concentration ratios or by linear regression coefficients, and more complex Monte Carlo simulations. Populations of 311 cannabis users and 197 amphetamine users from the Rosita-2 Project were studied. The results of a feasibility study suggested that the Monte Carlo simulations might give better accuracies than the simple calculations if good data on OF/B ratios are available. When using only 20 randomly selected OF/B ratios, a Monte Carlo simulation gave the best accuracy but not the best precision. Dividing by the OF/B regression coefficient gave acceptable accuracy and precision, and was therefore the best method. None of the methods gave acceptable accuracy if the prevalence of high blood drug concentrations was less than 15%. Dividing the drug concentration in oral fluid by the OF/B regression coefficient gave an acceptable estimate of the prevalence of high blood drug concentrations in a population, and may therefore give valuable additional information on possible drug impairment, e.g. in roadside surveys of drugs and driving. If good data on the distribution of OF/B ratios are available, a Monte Carlo simulation may give better accuracy. 2009 Elsevier Ireland Ltd. All rights reserved.
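The Monte Carlo and median-ratio estimators can be contrasted in a few lines. All distributions, the 311-subject sample, and the "high" threshold below are synthetic assumptions, not Rosita-2 data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical oral-fluid THC concentrations (ng/mL); synthetic values.
of_conc = rng.lognormal(mean=3.0, sigma=1.0, size=311)

# Assumed log-normal distribution of oral fluid/blood (OF/B) ratios.
ofb_draws = rng.lognormal(mean=2.0, sigma=0.8, size=(5000, 311))

# Monte Carlo: convert each oral-fluid value to a blood estimate with a
# randomly drawn OF/B ratio, then compute the prevalence per draw.
blood_sim = of_conc / ofb_draws          # shape: (n_draws, n_subjects)
threshold = 3.0                          # hypothetical "high" blood level (ng/mL)
prevalence_draws = (blood_sim > threshold).mean(axis=1)

# Simple estimator for comparison: divide by the median OF/B ratio.
simple_prev = float((of_conc / np.median(ofb_draws) > threshold).mean())

print(f"Monte Carlo prevalence: {prevalence_draws.mean():.1%}")
print(f"Median-ratio estimate:  {simple_prev:.1%}")
```

The Monte Carlo version propagates the spread of OF/B ratios into the prevalence estimate (and yields an uncertainty band from `prevalence_draws`), which is why it can outperform the simple division when the ratio distribution is well characterised.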
A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations
NASA Technical Reports Server (NTRS)
Dyson, Roger W.; Goodrich, John W.
2000-01-01
Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th-order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error, being easy to program, and being more computationally efficient when compared to previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th-order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high-performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse: this is ideal for resolving the varying wavelength scales which occur in noise generation simulations. Finally, the sources of round-off error which affect the very high order methods are examined and remedies provided that effectively increase the accuracy of the MESA schemes while using current computer technology.
A New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.
2018-05-01
In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques have the disadvantages of low measurement accuracy, complicated circuit structure and large error; high-precision time interval data cannot be obtained with these methods. In order to obtain higher quality remote sensing cloud images based on time interval measurement, a higher accuracy time interval measurement method is proposed. The method is based on charging a capacitor and sampling the change of the capacitor voltage at the same time. Firstly, an approximate model of the capacitor voltage curve over the pulse's time of flight is fitted to the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
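The fit-then-invert idea can be sketched with a standard RC charging model. All circuit values (interval, time constant, supply voltage, sample rate, noise level) below are hypothetical, and the achievable error here is set by the synthetic noise, not the 3 ps figure reported above:

```python
import numpy as np
from scipy.optimize import curve_fit

# RC charging model: the capacitor charges for the pulse's whole time of flight.
def v_charge(t, v0, tau):
    return v0 * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(3)

# Hypothetical circuit: true interval 150 ns, tau = 300 ns, 3.3 V supply,
# ADC sampling every 5 ns with 0.1 mV noise (illustrative values only).
t_true, tau_true, v0_true = 150e-9, 300e-9, 3.3
t_samp = np.arange(0.0, t_true, 5e-9)
v_samp = v_charge(t_samp, v0_true, tau_true) + rng.normal(0.0, 1e-4, t_samp.size)

# Fit the charging curve to the sampled voltages.
(v0_fit, tau_fit), _ = curve_fit(v_charge, t_samp, v_samp, p0=[3.0, 2e-7])

# Invert the fitted curve at the final (held) capacitor voltage to recover
# the whole charging time, i.e. the measured interval.
v_stop = v_charge(t_true, v0_true, tau_true) + rng.normal(0.0, 1e-4)
t_est = -tau_fit * np.log(1.0 - v_stop / v0_fit)
print(f"estimated interval: {t_est * 1e9:.3f} ns")
```

Because the whole sampled curve constrains the model, the interval estimate averages over per-sample ADC noise instead of depending on a single threshold crossing, which is the source of the claimed accuracy gain.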
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or from poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that strikes an excellent balance between cost, matching accuracy and real-time performance for power line inspection using UAVs. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have lower resource usage and higher matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using the Spartan 6 FPGA. In comparative experiments, the system using the improved algorithms outperformed the system based on the unimproved algorithms in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
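A local stereo matcher of the kind accelerated here can be sketched in software. The version below uses a plain sum-of-absolute-differences cost (not the paper's weighted cost or its FPGA architecture) on a synthetic image pair whose true disparity is a uniform 4 pixels:

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=3):
    """Brute-force local stereo matching: for each left pixel, pick the
    disparity minimising a sum-of-absolute-differences cost over a
    (2*win+1)^2 window. A simple stand-in for a weighted local matcher."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    pad = win
    lp = np.pad(left.astype(np.float64), pad, mode="edge")
    rp = np.pad(right.astype(np.float64), pad, mode="edge")
    for y in range(h):
        for x in range(w):
            lwin = lp[y:y + 2*pad + 1, x:x + 2*pad + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                rwin = rp[y:y + 2*pad + 1, x - d:x - d + 2*pad + 1]
                cost = np.abs(lwin - rwin).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right image is the left shifted by 4 pixels.
rng = np.random.default_rng(5)
left = rng.random((32, 48))
right = np.roll(left, -4, axis=1)
d = sad_disparity(left, right, max_disp=8)
print(int(np.median(d[:, 8:-8])))  # → 4
```

On hardware, the same window costs are computed in parallel per disparity, which is where the Block RAM and logic budget of the FPGA design goes.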
Bradley, David; Nisbet, Andrew
2012-01-01
This study provides a review of recent publications on the physics-aspects of dosimetric accuracy in high dose rate (HDR) brachytherapy. The discussion of accuracy is primarily concerned with uncertainties, but methods to improve dose conformation to the prescribed intended dose distribution are also noted. The main aim of the paper is to review current practical techniques and methods employed for HDR brachytherapy dosimetry. This includes work on the determination of dose rate fields around brachytherapy sources, the capability of treatment planning systems, the performance of treatment units and methods to verify dose delivery. This work highlights the determinants of accuracy in HDR dosimetry and treatment delivery and presents a selection of papers, focusing on articles from the last five years, to reflect active areas of research and development. Apart from Monte Carlo modelling of source dosimetry, there is no clear consensus on the optimum techniques to be used to assure dosimetric accuracy through all the processes involved in HDR brachytherapy treatment. With the exception of the ESTRO mailed dosimetry service, there is little dosimetric audit activity reported in the literature, when compared with external beam radiotherapy verification. PMID:23349649
Accuracy Analysis of a Dam Model from Drone Surveys
Buffi, Giulia; Venturi, Sara
2017-01-01
This paper investigates the accuracy of models obtained by drone surveys. To this end, this work analyzes how the placement of ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of the upstream face of a double-arch masonry dam were acquired from a drone survey and used to build the 3D model of the dam for vulnerability analysis purposes. However, the real impact of a correct choice of GCP locations for georeferencing the images, and thus the model, remained to be understood. To this end, a high number of GCP configurations were investigated, building a series of dense point clouds. The accuracy of these resulting dense clouds was estimated by comparing the coordinates of check points extracted from the model with their true coordinates measured via traditional topography. The paper aims at providing information about the optimal choice of GCP placement not only for dams but for all surveys of high-rise structures. A priori knowledge of the effect of GCP number and location on the model accuracy can increase survey reliability and accuracy and speed up the survey set-up operations. PMID:28771185
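The per-configuration accuracy metric, the 3D RMSE between model-derived check-point coordinates and their surveyed truth, is straightforward to compute. The five check points and the ~3 cm model noise below are invented for illustration:

```python
import numpy as np

def checkpoint_rmse(model_xyz, survey_xyz):
    """3D RMSE between check-point coordinates read off the photogrammetric
    model and the same points measured by traditional topography."""
    diff = np.asarray(model_xyz, float) - np.asarray(survey_xyz, float)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Hypothetical example: five check points on a dam face (metres).
survey = np.array([[0.0, 0.0, 10.0],
                   [25.0, 5.0, 12.5],
                   [50.0, -3.0, 18.0],
                   [75.0, 2.0, 22.0],
                   [100.0, 0.0, 30.0]])
rng = np.random.default_rng(11)
model = survey + rng.normal(0.0, 0.03, survey.shape)  # ~3 cm error per axis

print(f"check-point RMSE: {checkpoint_rmse(model, survey):.3f} m")
```

Repeating this for each candidate GCP configuration and ranking the RMSE values is the comparison the study performs across its dense clouds.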
Accuracy of Binary Black Hole waveforms for Advanced LIGO searches
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela
2015-04-01
Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with the LIGO-Virgo observatories. Matched-filtering based detection searches aimed at binaries of black holes will use aligned-spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in the literature, e.g. the Effective-One-Body, Phenomenological, and traditional post-Newtonian ones. While Numerical Relativity (NR) simulations provide the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large-scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space that have only been explored cursorily in the past: the high mass-ratio regime and the comparable mass-ratio + high spin regime. Using the SpEC code, six q = 7 simulations with aligned spins lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins, were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families, and assess their viability for Advanced LIGO searches.
Fekete, Szabolcs; Fekete, Jeno; Molnár, Imre; Ganzler, Katalin
2009-11-06
Many different strategies of reversed-phase high performance liquid chromatographic (RP-HPLC) method development are used today. This paper describes a strategy for the systematic development of ultrahigh-pressure liquid chromatographic (UHPLC or UPLC) methods using 5 cm x 2.1 mm columns packed with sub-2 µm particles and computer simulation (DryLab® package). Data for the accuracy of computer modeling in the Design Space under ultrahigh-pressure conditions are reported, and an acceptable accuracy for the predictions of the computer models is presented. This work illustrates a method development strategy that reduces development time by a factor of 3-5 compared to conventional HPLC method development, and exhibits parts of the Design Space elaboration as requested by the FDA and ICH Q8(R1). Furthermore, this paper demonstrates the accuracy of retention time prediction at elevated pressure (enhanced flow rate) and shows that computer-assisted simulation can be applied with sufficient precision for UHPLC applications (p > 400 bar). Examples of fast and effective method development in pharmaceutical analysis, for both gradient and isocratic separations, are presented.
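Retention-modelling software of this kind commonly builds on the linear solvent strength (LSS) relationship, log10 k = log10 kw - S·φ, calibrated from two scouting runs. The sketch below uses that standard relationship with invented analyte and column values; it is not DryLab's proprietary implementation:

```python
import numpy as np

# Linear solvent strength (LSS) model: log10 k = log10 kw - S * phi,
# where phi is the organic fraction of the mobile phase. Two calibration
# runs at different phi fix (log10 kw, S) for each analyte.

def fit_lss(phi1, k1, phi2, k2):
    """Solve the two-point LSS calibration for one analyte."""
    S = (np.log10(k1) - np.log10(k2)) / (phi2 - phi1)
    log_kw = np.log10(k1) + S * phi1
    return log_kw, S

def retention_time(phi, log_kw, S, t0):
    """Predict isocratic retention time from the fitted LSS parameters."""
    k = 10.0 ** (log_kw - S * phi)
    return t0 * (1.0 + k)

# Hypothetical analyte: k = 12.0 at 40% organic, k = 3.0 at 55% organic.
log_kw, S = fit_lss(0.40, 12.0, 0.55, 3.0)

t0 = 0.25  # hypothetical column dead time (min) for a 5 cm x 2.1 mm column
for phi in (0.45, 0.50):
    tr = retention_time(phi, log_kw, S, t0)
    print(f"phi = {phi:.2f}: predicted tR = {tr:.2f} min")
```

The "Design Space" exploration then amounts to evaluating such predictions over a grid of conditions (φ, temperature, gradient time) and checking them against verification runs, which is the accuracy the paper reports.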
Inertia Compensation While Scanning Screw Threads on Coordinate Measuring Machines
NASA Astrophysics Data System (ADS)
Kosarevsky, Sergey; Latypov, Viktor
2010-01-01
Usage of scanning coordinate-measuring machines for inspection of screw threads has become common practice. Compared to touch-trigger probing, scanning makes it possible to speed up the measuring process while still maintaining high accuracy. However, in some cases accuracy depends drastically on the scanning speed. In this paper a compensation method is proposed that reduces the influence of the inertia of the probing system while scanning screw threads on coordinate-measuring machines.
DeWitt, Jessica D.; Warner, Timothy A.; Chirico, Peter G.; Bergstresser, Sarah E.
2017-01-01
For areas of the world that do not have access to lidar, fine-scale digital elevation models (DEMs) can be photogrammetrically created using globally available high-spatial resolution stereo satellite imagery. The resultant DEM is best termed a digital surface model (DSM) because it includes heights of surface features. In densely vegetated conditions, this inclusion can limit its usefulness in applications requiring a bare-earth DEM. This study explores the use of techniques designed for filtering lidar point clouds to mitigate the elevation artifacts caused by above ground features, within the context of a case study of Prince William Forest Park, Virginia, USA. The influences of land cover and leaf-on vs. leaf-off conditions are investigated, and the accuracy of the raw photogrammetric DSM extracted from leaf-on imagery was between that of a lidar bare-earth DEM and the Shuttle Radar Topography Mission DEM. Although the filtered leaf-on photogrammetric DEM retains some artifacts of the vegetation canopy and may not be useful for some applications, filtering procedures significantly improved the accuracy of the modeled terrain. The accuracy of the DSM extracted in leaf-off conditions was comparable in most areas to the lidar bare-earth DEM and filtering procedures resulted in accuracy comparable of that to the lidar DEM.
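The class of filters referred to, lidar-style ground filters applied to a photogrammetric DSM, can be illustrated with a morphological opening: a window wider than the canopy features erodes them away, then dilation restores the terrain. The terrain, canopy blobs and window size below are synthetic assumptions, not the study's algorithm or data:

```python
import numpy as np
from scipy.ndimage import grey_opening

# Hypothetical 1 m DSM: gently sloping terrain plus raised "canopy" blobs.
x, y = np.meshgrid(np.arange(100), np.arange(100))
terrain = 50.0 + 0.05 * x + 0.02 * y
dsm = terrain.copy()
for cx, cy in [(20, 30), (60, 70), (80, 25)]:
    blob = ((x - cx) ** 2 + (y - cy) ** 2) < 64
    dsm[blob] += 15.0  # trees ~15 m above ground

# Morphological opening with a window wider than the blobs removes
# above-ground features while roughly preserving the terrain surface.
dem = grey_opening(dsm, size=(21, 21))

residual = np.abs(dem - terrain)
print(f"max residual vs. true terrain: {residual.max():.2f} m")
```

Real point-cloud filters are adaptive (progressive window sizes, slope thresholds) precisely because a single fixed window either misses large canopies or clips real terrain breaks, which is why the filtered leaf-on DEM above still retains some canopy artifacts.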
Raidullah, Ebadullah; Francis, Maria L.
2014-01-01
Objectives: This study aimed to evaluate the accuracy of Root ZX in determining working length in the presence of normal saline, 0.2% chlorhexidine and 2.5% sodium hypochlorite. Material and Methods: Sixty extracted, single-rooted, single-canal human teeth were used. Teeth were decoronated at the CEJ and the actual canal length was determined. Working length measurements were then obtained with Root ZX in the presence of 0.9% normal saline, 0.2% chlorhexidine and 2.5% NaOCl. The working lengths obtained with Root ZX were compared with the actual canal length and subjected to statistical analysis. Results: No statistically significant difference was found between the actual canal length and Root ZX measurements in the presence of normal saline and 0.2% chlorhexidine. A highly significant difference was found between the actual canal length and Root ZX measurements in the presence of 2.5% NaOCl; however, all the measurements were within the clinically acceptable range of ±0.5 mm. Conclusion: The accuracy of electronic length measurement with Root ZX within ±0.5 mm of the actual length was consistently high in the presence of 0.2% chlorhexidine, normal saline and 2.5% sodium hypochlorite. Clinical significance: This study demonstrates the efficacy of Root ZX (a third-generation apex locator) as a dependable aid in endodontic working length determination. Key words: Electronic apex locator, working length, Root ZX accuracy, intracanal irrigating solutions. PMID:24596634
Application of magnetic resonance imaging in diagnosis of Uterus Cervical Carcinoma.
Peng, Jidong; Wang, Weiqiang; Zeng, Daohui
2017-01-01
Effective treatment of Uterus Cervical Carcinoma (UCC) relies heavily on precise pre-surgical staging. The conventional International Federation of Gynecology and Obstetrics (FIGO) system, based on clinical examination, is applied worldwide for UCC staging, yet its performance is only moderate. Thus, this study aims to investigate the value of applying Magnetic Resonance Imaging (MRI) alongside clinical examination in the staging of UCC. A retrospective dataset involving 164 patients diagnosed with UCC was enrolled in this study. The mean age of the study population was 46.1 years (range, 28-75 years). All patients underwent operations and UCC types were confirmed by pathological examination. The tumor stages were determined by two experienced gynecologists independently, based on FIGO examinations and MRI. The diagnostic results were also compared with the post-operative pathology reports. Statistical analysis of diagnostic performance was then performed and reported. The study results showed that the overall accuracy of applying MRI in UCC staging was 82.32%, while with the FIGO staging method the accuracy was 59.15%. MRI is suitable for evaluating tumor extent with high accuracy, and it can offer more objective information for the diagnosis and staging of UCC. Compared with clinical examination based on FIGO, MRI showed relatively high accuracy in evaluating UCC staging, and is worth recommending in future clinical practice.
Austin, Peter C; Lee, Douglas S
2011-01-01
Purpose: Classification trees are increasingly being used to classify patients according to the presence or absence of a disease or health outcome. A limitation of classification trees is their modest predictive accuracy. In the data-mining and machine learning literature, boosting has been developed to improve classification. Boosting with classification trees iteratively grows classification trees in a sequence of reweighted datasets. In a given iteration, subjects that were misclassified in the previous iteration are weighted more highly than subjects that were correctly classified. Classifications from each of the classification trees in the sequence are combined through a weighted majority vote to produce a final classification. The authors' objective was to examine whether boosting improved the accuracy of classification trees for predicting outcomes in cardiovascular patients. Methods: We examined the utility of boosting classification trees for classifying 30-day mortality outcomes in patients hospitalized with either acute myocardial infarction or congestive heart failure. Results: Improvements in the misclassification rate using boosted classification trees were at best minor compared to when conventional classification trees were used. Minor to modest improvements to sensitivity were observed, with only a negligible reduction in specificity. For predicting cardiovascular mortality, boosted classification trees had high specificity, but low sensitivity. Conclusions: Gains in predictive accuracy for predicting cardiovascular outcomes were less impressive than the gains in performance observed in the data mining literature. PMID:22254181
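The boosting procedure described, reweighting misclassified subjects each iteration and combining trees by weighted vote, is AdaBoost. It can be contrasted with a single tree on a synthetic stand-in for an imbalanced mortality dataset (the data below are generated, not the study's registry data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a 30-day mortality cohort: binary outcome,
# roughly 15% event rate, 20 covariates.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# Single classification tree vs. boosted trees (iterative reweighting of
# misclassified subjects, combined through a weighted majority vote).
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
boost = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"single tree accuracy: {tree.score(X_te, y_te):.3f}")
print(f"boosted accuracy:     {boost.score(X_te, y_te):.3f}")
```

As the study notes for clinical data, overall accuracy can be a misleading summary under class imbalance; sensitivity and specificity (or the ROC curve) should be inspected alongside the misclassification rate.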
Murphy, S F; Lenihan, L; Orefuwa, F; Colohan, G; Hynes, I; Collins, C G
2017-05-01
The discharge letter is a key component of the communication pathway between the hospital and primary care. Accuracy and timeliness of delivery are crucial to ensure continuity of patient care. Electronic discharge summaries (EDS) and prescriptions have been shown to improve the quality of discharge information for general practitioners (GPs). The aim of this study was to evaluate the effect of a new EDS on GP satisfaction levels and the accuracy of discharge diagnosis. A GP survey was carried out whereby semi-structured interviews were conducted with 13 GPs from three primary care centres who receive a high volume of discharge letters from the hospital. A chart review was carried out on 90 charts to compare the accuracy of ICD-10 coding by Non-Consultant Hospital Doctors (NCHDs) with that of trained Hospital In-Patient Enquiry (HIPE) coders. GP satisfaction levels were over 90% with most aspects of the EDS, including amount of information (97%), accuracy (95%), GP information and follow-up (97%) and medications (91%). 70% of GPs received the EDS within 2 weeks. ICD-10 coding of discharge diagnosis by NCHDs had an accuracy of 33%, compared with 95.6% when done by trained coders (p < 0.00001). The introduction of the EDS and prescription has led to improved quality and timeliness of communication with primary care, and to a very high satisfaction rating among GPs. ICD-10 coding was found to be grossly inaccurate when carried out by NCHDs, and it is more appropriate for this task to be carried out by trained coders.
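The 33% vs. 95.6% comparison on 90 charts can be checked with a 2x2 exact test. The cell counts below are back-calculated from the reported percentages (30/90 and 86/90) and are therefore a reconstruction, not figures taken from the paper:

```python
from scipy.stats import fisher_exact

# Reconstructed 2x2 table from the 90-chart review: correct vs. incorrect
# ICD-10 codes for NCHDs (~33% correct) and trained HIPE coders (~95.6%).
nchd_correct, nchd_total = 30, 90
coder_correct, coder_total = 86, 90

table = [[nchd_correct, nchd_total - nchd_correct],
         [coder_correct, coder_total - coder_correct]]

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.3f}, p = {p_value:.2e}")
```

With proportions this far apart, the exact test comfortably reproduces a p-value below the 0.00001 threshold quoted in the abstract.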
Real-time teleophthalmology versus face-to-face consultation: A systematic review.
Tan, Irene J; Dobson, Lucy P; Bartnik, Stephen; Muir, Josephine; Turner, Angus W
2017-08-01
Introduction Advances in imaging capabilities and the evolution of real-time teleophthalmology have the potential to provide increased coverage to areas with limited ophthalmology services. However, there is limited research assessing the diagnostic accuracy of real-time teleophthalmology against face-to-face consultation. This systematic review aims to determine whether real-time teleophthalmology provides accuracy comparable to face-to-face consultation for the diagnosis of common eye health conditions. Methods A search of PubMed, Embase, Medline and Cochrane databases and manual citation review was conducted on 6 February and 7 April 2016. Included studies involved real-time telemedicine in the field of ophthalmology or optometry, and assessed diagnostic accuracy against gold-standard face-to-face consultation. The revised quality assessment of diagnostic accuracy studies (QUADAS-2) tool assessed risk of bias. Results Twelve studies were included, with participants ranging from four to 89 years old. A broad range of conditions was assessed, including corneal and retinal pathologies, strabismus, oculoplastics and post-operative review. Quality assessment identified a high or unclear risk of bias in patient selection (75%) due to undisclosed recruitment processes. The index test showed a high risk of bias in the included studies, due to the varied interpretation and conduct of real-time teleophthalmology methods. Reference standard risk was overall low (75%), as was the risk due to flow and timing (75%). Conclusion In terms of diagnostic accuracy, real-time teleophthalmology was considered superior to face-to-face consultation in one study and comparable in six studies. Store-and-forward image transmission coupled with real-time videoconferencing is a suitable alternative to overcome poor internet transmission speeds.
Coincidence-anticipation timing requirements are different in racket sports.
Akpinar, Selçuk; Devrilmez, Erhan; Kirazci, Sadettin
2012-10-01
The aim of this study was to compare the coincidence-anticipation timing accuracy of athletes of different racket sports under various stimulus velocity requirements. Ninety players (15 girls and 15 boys for each sport) from tennis (M age = 12.4 yr., SD = 1.4), badminton (M age = 12.5 yr., SD = 1.4), and table tennis (M age = 12.4 yr., SD = 1.2) participated in this study. Three different stimulus velocities, low, moderate, and high, were used to simulate the velocity requirements of these racket sports. Tennis players had higher accuracy when they performed under the low stimulus velocity compared to badminton and table tennis players. Badminton players performed better under the moderate speed compared to tennis and table tennis players. Table tennis players performed better than tennis and badminton players under the high stimulus velocity. Therefore, the visual and motor systems of players from different racket sports may adapt to the stimulus velocity requirements of coincidence-anticipation timing specific to each racket sport.
Characterization and delineation of caribou habitat on Unimak Island using remote sensing techniques
NASA Astrophysics Data System (ADS)
Atkinson, Brian M.
The assessment of herbivore habitat quality is traditionally based on quantifying the forages available to the animal across its home range through ground-based techniques. While these methods are highly accurate, they can be time-consuming and expensive, especially for herbivores that occupy vast spatial landscapes. The Unimak Island caribou herd has been decreasing over the last decade at rates that have prompted discussion of management intervention. Frequent inclement weather in this region of Alaska has provided little opportunity to study the caribou forage habitat on Unimak Island. The objectives of this study were two-fold: 1) to assess the feasibility of using high-resolution color and near-infrared aerial imagery to map the forage distribution of caribou habitat on Unimak Island, and 2) to assess a new high-resolution multispectral satellite imagery platform, RapidEye, and the effect of its "red-edge" spectral band on vegetation classification accuracy. Maximum likelihood classification algorithms were used to create land cover maps from the aerial and satellite imagery. Accuracy assessments and transformed divergence values were produced to assess vegetative spectral information and classification accuracy. By using RapidEye and aerial digital imagery in a hierarchical supervised classification technique, we were able to produce a high-resolution land cover map of Unimak Island. We obtained an overall accuracy of 71.4 percent, comparable to other land cover maps using RapidEye imagery. The "red-edge" spectral band included in the RapidEye imagery provides additional spectral information that allows for a more accurate overall classification, raising overall accuracy by 5.2 percent.
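Maximum likelihood classification assigns each pixel to the class whose training statistics give it the highest Gaussian likelihood. A minimal sketch of the idea (band values and class names are hypothetical; real workflows operate on full rasters in remote sensing packages):

```python
import numpy as np

def train_ml_classifier(samples):
    """Fit a Gaussian to each class's training pixels.

    samples: dict mapping class name -> (n_pixels, n_bands) array.
    Returns per-class mean vectors and covariance matrices.
    """
    stats = {}
    for name, X in samples.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        stats[name] = (mu, cov)
    return stats

def classify(pixel, stats):
    """Assign the class with the highest Gaussian log-likelihood."""
    best, best_ll = None, -np.inf
    for name, (mu, cov) in stats.items():
        d = pixel - mu
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.inv(cov) @ d)
        if ll > best_ll:
            best, best_ll = name, ll
    return best
```

In practice the per-class statistics come from training polygons digitized over the imagery, and the decision rule is applied to every pixel of the scene.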
Sound source localization identification accuracy: Envelope dependencies.
Yost, William A
2017-07-01
Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
Schlegel, Claudia; Bonvin, Raphael; Rethans, Jan Joost; van der Vleuten, Cees
2014-10-14
Abstract Introduction: High-stakes objective structured clinical examinations (OSCEs) with standardized patients (SPs) should offer the same conditions to all candidates throughout the exam. SP performance should therefore be as close to the original role script as possible during all encounters. In this study, we examined the impact of video in SP training on SPs' role accuracy, investigating how the use of different types of video during SP training improves the accuracy of SP portrayal. Methods: In a randomized post-test, control-group design, three groups of 12 SPs each with different types of video training and one control group of 12 SPs without video use in SP training were compared. The three intervention groups used a role-modeling video, a performance-feedback video, or a combination of both. Each SP in each group had four student encounters. Two blinded faculty members rated the 192 video-recorded encounters, using a case-specific rating instrument to assess SPs' role accuracy. Results: SPs trained by video showed significantly (p < 0.001) better role accuracy than SPs trained without video over the four sequential portrayals. There was no difference between the three types of video training. Discussion: Use of video during SP training enhances the accuracy of SP portrayal compared with no video, regardless of the type of video intervention used.
Wu, C; de Jong, J R; Gratama van Andel, H A; van der Have, F; Vastenhouw, B; Laverman, P; Boerman, O C; Dierckx, R A J O; Beekman, F J
2011-09-21
Attenuation of photon flux on trajectories between the source and pinhole apertures affects the quantitative accuracy of reconstructed single-photon emission computed tomography (SPECT) images. We propose a Chang-based non-uniform attenuation correction (NUA-CT) for small-animal SPECT/CT with focusing pinhole collimation, and compare the quantitative accuracy with uniform Chang correction based on (i) body outlines extracted from x-ray CT (UA-CT) and (ii) hand-drawn body contours on the images obtained with three integrated optical cameras (UA-BC). Measurements in phantoms and rats containing known activities of isotopes were conducted for evaluation. In (125)I, (201)Tl, (99m)Tc and (111)In phantom experiments, average relative errors compared with the gold standards measured in a dose calibrator were reduced to 5.5%, 6.8%, 4.9% and 2.8%, respectively, with NUA-CT. In animal studies, these errors were 2.1%, 3.3%, 2.0% and 2.0%, respectively. Differences in accuracy on average between results of NUA-CT, UA-CT and UA-BC were less than 2.3% in phantom studies and 3.1% in animal studies except for (125)I (3.6% and 5.1%, respectively). All methods tested provide reasonable attenuation correction and result in high quantitative accuracy. NUA-CT shows superior accuracy except for (125)I, where other factors may have more impact on the quantitative accuracy than the selected attenuation correction.
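The first-order (uniform) Chang correction scales each reconstructed voxel by the inverse of its average attenuation factor over projection directions. A simplified 2D sketch, assuming a uniform circular body outline with attenuation coefficient mu (the paper's NUA-CT uses non-uniform, CT-derived attenuation maps instead):

```python
import numpy as np

def chang_factor(point, mu, radius, n_angles=64):
    """First-order Chang correction factor for a voxel inside a
    uniform circular attenuator (2D sketch).

    point : (x, y) voxel position inside the circle
    mu    : linear attenuation coefficient (1/cm)
    radius: body-outline radius (cm)
    """
    p = np.asarray(point, dtype=float)
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    pd = dirs @ p
    # distance from the voxel to the circle boundary along each direction
    s = -pd + np.sqrt(pd**2 + radius**2 - p @ p)
    # correct by the inverse of the mean attenuation factor
    return n_angles / np.exp(-mu * s).sum()
```

At the centre of the circle every path length equals the radius, so the factor reduces to exp(mu * radius); voxels near the edge, with shorter average paths, receive smaller corrections.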
A Comparative Study of Teaching Typing Skills on Microcomputers.
ERIC Educational Resources Information Center
Lindsay, Robert M.
A 4-week experimental study was conducted with 105 high school students in 4 introductory typewriting classes of a large urban school in British Columbia during the 1981 spring semester. The purpose of the study was to compare the effectiveness of teaching the skill-building components of typewriting speed and accuracy using either the…
Theoretical study of surface plasmon resonance sensors based on 2D bimetallic alloy grating
NASA Astrophysics Data System (ADS)
Dhibi, Abdelhak; Khemiri, Mehdi; Oumezzine, Mohamed
2016-11-01
A surface plasmon resonance (SPR) sensor based on a 2D alloy grating with high performance is proposed. The grating consists of homogeneous alloys of formula MxAg1-x, where M is gold, copper, platinum or palladium. Compared with SPR sensors based on a pure metal, the angular-interrogation sensor with silver exhibits a sharper (i.e. larger depth-to-width ratio) reflectivity dip, which provides high detection accuracy, whereas the sensor based on gold exhibits the broadest dips and the highest sensitivity. The detection accuracy of the SPR sensor based on a metal alloy is enhanced by increasing the silver composition. In addition, a silver composition of around 0.8 improves the sensitivity and quality of the SPR sensor relative to a pure metal. Numerical simulations based on rigorous coupled wave analysis (RCWA) show that the sensor based on a metal alloy not only has high sensitivity and high detection accuracy, but also exhibits good linearity and good quality.
An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling
NASA Astrophysics Data System (ADS)
Wang, Enjiang; Liu, Yang
2018-01-01
The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirements. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus lower computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor series expansion-based FD coefficients, we derive least-squares optimisation based implicit spatial FD coefficients. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions and thus provide more accurate wavefields.
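For contrast with the implicit, high-order-temporal scheme proposed here, the conventional baseline it improves on (second-order explicit time stepping with a fourth-order spatial stencil) can be sketched for the 1D acoustic wave equation; grid sizes and boundary handling below are illustrative:

```python
import numpy as np

def step_wave_1d(u_prev, u_curr, c, dt, dx):
    """One time step of the 1D acoustic wave equation u_tt = c^2 u_xx
    using a conventional 2nd-order temporal, 4th-order spatial explicit
    FD scheme (the baseline that implicit/high-order schemes improve on).
    """
    u_next = np.empty_like(u_curr)
    # 4th-order central approximation of u_xx at interior points
    lap = (-u_curr[:-4] + 16 * u_curr[1:-3] - 30 * u_curr[2:-2]
           + 16 * u_curr[3:-1] - u_curr[4:]) / (12 * dx**2)
    u_next[2:-2] = 2 * u_curr[2:-2] - u_prev[2:-2] + (c * dt)**2 * lap
    # hold the two edge points on each side at zero (simple rigid boundary)
    u_next[:2] = 0.0
    u_next[-2:] = 0.0
    return u_next
```

Stepping a standing wave sin(kx)cos(wt) with w = ck reproduces the analytic solution to within the scheme's truncation error, which is what dispersion analysis quantifies.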
Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.
Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G
2017-09-01
To investigate whether the use of ensemble learning algorithms improves physical activity recognition accuracy compared to single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
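The weighted majority vote used in the custom ensemble can be sketched as follows (labels and weights are hypothetical; in the study, the decisions came from a decision tree, kNN, SVM, and neural network):

```python
from collections import defaultdict

def weighted_majority_vote(predictions, weights):
    """Fuse single-classifier decisions by weighted majority vote.

    predictions: list of class labels, one per classifier
    weights    : list of classifier weights (e.g. validation F1 scores)
    """
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    # return the label with the largest accumulated weight
    return max(scores, key=scores.get)
```

With weights of 0.9, 0.6, 0.7 and 0.5 and votes ['walk', 'run', 'walk', 'run'], the fused decision is 'walk' (1.6 vs 1.1), even though the vote count is tied.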
Du, Lei; Sun, Qiao; Cai, Changqing; Bai, Jie; Fan, Zhe; Zhang, Yue
2018-01-01
Traffic speed meters are important legal measuring instruments specially used for traffic speed enforcement and must be tested and verified in the field every year using a vehicular mobile standard speed-measuring instrument to ensure speed-measuring performance. The non-contact optical speed sensor and the GPS speed sensor are the two most common types of standard speed-measuring instruments. The non-contact optical speed sensor requires extremely high installation accuracy, and its speed-measuring error is nonlinear and uncorrectable. The speed-measuring accuracy of the GPS speed sensor is rapidly reduced if the number of received satellites is insufficient, which often occurs in urban high-rise regions, tunnels, and mountainous regions. In this paper, a new standard speed-measuring instrument using a dual-antenna Doppler radar sensor is proposed based on a tradeoff between the installation accuracy requirement and the usage region limitation; it has no specified requirements for its mounting distance, no limitation on usage regions, and can automatically compensate for the effect of an inclined installation angle on its speed-measuring accuracy. Theoretical model analysis, simulated speed measurement results, and field experimental results compared with a high-accuracy GPS speed sensor showed that the dual-antenna Doppler radar sensor is effective and reliable as a new standard speed-measuring instrument. PMID:29621142
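The angle compensation that motivates the dual-antenna design follows from the Doppler geometry: each antenna reports the radial speed v·cos(angle), and two beams separated by a known angle let both the installation angle and the true speed be solved. A minimal sketch (the variable names and closed-form solution are illustrative, not the paper's exact processing):

```python
import math

def dual_antenna_speed(v1, v2, delta):
    """Recover the true vehicle speed from two Doppler-derived radial
    speeds measured by antennas whose beams are separated by delta.

    v1, v2: radial speeds v*cos(a) and v*cos(a + delta)
    delta : known angular separation of the two beams (radians)
    Returns (true speed, inferred installation angle a).
    """
    # v2/v1 = cos(a+delta)/cos(a) = cos(delta) - tan(a)*sin(delta)
    r = v2 / v1
    a = math.atan2(math.cos(delta) - r, math.sin(delta))
    return v1 / math.cos(a), a
```

Because the installation angle a drops out of the final speed, a slightly inclined mounting no longer biases the measurement, which is the claimed advantage over the single-beam optical sensor.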
Park, H J; Lee, S Y; Kim, M S; Choi, S H; Chung, E C; Kook, S H; Kim, E
2015-03-01
To evaluate the diagnostic accuracy of three-dimensional (3D) enhanced T1 high-resolution isotropic volume excitation (eTHRIVE) shoulder MR for the detection of rotator cuff tears, labral lesions and calcific tendonitis of the rotator cuff in comparison with two-dimensional (2D) fast spin echo T2 fat saturation (FS) MR. This retrospective study included 73 patients who underwent shoulder MRI using the eTHRIVE technique. Shoulder MR images were interpreted separately by two radiologists. They evaluated anatomic identification and image quality of the shoulder joint on routine MRI sequences (axial and oblique coronal T2 FS images) and compared them with the reformatted eTHRIVE images. The images were scored on a four-point scale (0, poor; 1, questionable; 2, adequate; 3, excellent) according to the degree of homogeneous and sufficient fat saturation to penetrate bone and soft tissue, visualization of the glenoid labrum and distinction of the supraspinatus tendon (SST). The diagnostic accuracy of eTHRIVE images compared with routine MRI sequences was evaluated in the setting of rotator cuff tears, glenoid labral injuries and calcific tendonitis of the SST. Fat saturation scores for eTHRIVE were significantly higher than those of the T2 FS for both radiologists. The sensitivity and accuracy of the T2 FS in diagnosing rotator cuff tears were >90%, whereas the sensitivity and accuracy of the eTHRIVE method were significantly lower. The sensitivity, specificity and accuracy of both images in diagnosing labral injuries and calcific tendonitis were similar and showed no significant differences. The specificities of both images for the diagnosis of labral injuries and calcific tendonitis were higher than the sensitivities. The accuracy of 3D eTHRIVE imaging was comparable to that of 2D FSE T2 FS for the diagnosis of glenoid labral injury and calcific tendonitis of SST. The 3D eTHRIVE technique was superior to 2D FSE T2 FS in terms of fat saturation.
Overall, 3D eTHRIVE was inferior to T2 FS in the evaluation of rotator cuff tears because of poor contrast between joint fluid and tendons.
NASA Astrophysics Data System (ADS)
Snavely, Rachel A.
Focusing on the semi-arid and highly disturbed landscape of San Clemente Island, California, this research tests the effectiveness of incorporating a hierarchical object-based image analysis (OBIA) approach with high-spatial resolution imagery and light detection and ranging (LiDAR) derived canopy height surfaces for mapping vegetation communities. The study is part of a large-scale research effort conducted by researchers at San Diego State University's (SDSU) Center for Earth Systems Analysis Research (CESAR) and Soil Ecology and Restoration Group (SERG) to develop an updated vegetation community map which will support both conservation and management decisions on Naval Auxiliary Landing Field (NALF) San Clemente Island. Trimble's eCognition Developer software was used to develop and generate vegetation community maps for two study sites, with and without vegetation height data as input. Overall and class-specific accuracies were calculated and compared across the two classifications. The highest overall accuracy (approximately 80%) was observed for the classification integrating airborne visible and near-infrared imagery of very high spatial resolution with a LiDAR-derived canopy height model. Accuracies for individual vegetation classes differed between both classification methods, but were highest when incorporating the LiDAR digital surface data. The addition of a canopy height model, however, yielded little difference in classification accuracies for areas of very dense shrub cover. Overall, the results show the utility of the OBIA approach for mapping vegetation with high spatial resolution imagery, and emphasize the advantage of both multi-scale analysis and digital surface data for accurately characterizing highly disturbed landscapes. The integrated imagery and digital canopy height model approach presented both advantages and limitations, which have to be considered prior to its operational use in mapping vegetation communities.
2009-01-01
Background Genomic selection (GS) uses molecular breeding values (MBV) derived from dense markers across the entire genome for selection of young animals. The accuracy of MBV prediction is important for a successful application of GS. Recently, several methods have been proposed to estimate MBV. Initial simulation studies have shown that these methods can accurately predict MBV. In this study we compared the accuracies and possible bias of five different regression methods in an empirical application in dairy cattle. Methods Genotypes of 7,372 SNP and highly accurate EBV of 1,945 dairy bulls were used to predict MBV for protein percentage (PPT) and a profit index (Australian Selection Index, ASI). Marker effects were estimated by least squares regression (FR-LS), Bayesian regression (Bayes-R), random regression best linear unbiased prediction (RR-BLUP), partial least squares regression (PLSR) and nonparametric support vector regression (SVR) in a training set of 1,239 bulls. Accuracy and bias of MBV prediction were calculated from cross-validation of the training set and tested against a test team of 706 young bulls. Results For both traits, FR-LS using a subset of SNP was significantly less accurate than all other methods, which used all SNP. Accuracies obtained by Bayes-R, RR-BLUP, PLSR and SVR were very similar for ASI (0.39-0.45) and for PPT (0.55-0.61). Overall, SVR gave the highest accuracy. All methods resulted in biased MBV predictions for ASI; for PPT, only RR-BLUP and SVR predictions were unbiased. A significant decrease in accuracy of prediction of ASI was seen in young test cohorts of bulls compared to the accuracy derived from cross-validation of the training set. This reduction was not apparent for PPT. Combining MBV predictions with pedigree-based predictions gave 1.05-1.34 times higher accuracies compared to predictions based on pedigree alone.
The methods differ markedly in computational requirements, with PLSR and RR-BLUP requiring the least computing time. Conclusions The four methods which use information from all SNP, namely RR-BLUP, Bayes-R, PLSR and SVR, generate similar accuracies of MBV prediction for genomic selection, and their use in the selection of immediate future generations in dairy cattle will be comparable. The use of FR-LS in genomic selection is not recommended. PMID:20043835
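RR-BLUP treats all marker effects as random draws from a common normal distribution, which makes it equivalent to ridge regression on the genotype matrix. A minimal sketch with synthetic dimensions (real analyses use thousands of SNP and variance components estimated by REML):

```python
import numpy as np

def rr_blup(X, y, lam):
    """Ridge-regression (RR-BLUP-style) estimate of marker effects.

    X  : (n_animals, n_snp) centred genotype matrix
    y  : (n_animals,) phenotypes or de-regressed EBV
    lam: shrinkage parameter (residual-to-marker variance ratio)
    """
    n_snp = X.shape[1]
    # solve the mixed-model-style normal equations (X'X + lam*I) b = X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_snp), X.T @ y)

def predict_mbv(X_new, b):
    """Molecular breeding values for new (e.g. young) animals."""
    return X_new @ b
```

Cross-validation as in the paper amounts to estimating b on a training set of animals and correlating predicted with realized values in a held-out test team.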
Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment
NASA Astrophysics Data System (ADS)
Rasel, Sikdar M. M.; Chang, Hsing-Chung; Diti, Israt Jahan; Ralph, Tim; Saintilan, Neil
2016-05-01
Coastal saltmarshes and their constituent components and processes are of scientific interest due to their ecological functions and services. However, the heterogeneity and seasonal dynamics of the coastal wetland system make it challenging to map saltmarshes with remotely sensed data. This study selected four important saltmarsh species, Phragmites australis, Sporobolus virginicus, Ficinia nodosa and Schoenoplectus sp., as well as a mangrove and a pine tree species, Avicennia and Casuarina sp., respectively. High spatial resolution Worldview-2 data and coarse spatial resolution Landsat 8 imagery were selected for this study. Among the selected vegetation types, some patches were fragmented and close to the spatial resolution of the Worldview-2 data, while some patches were larger than the 30 meter resolution of the Landsat 8 data. This study aims to test the effectiveness of different classifiers for imagery with various spatial and spectral resolutions. Three classification algorithms, Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Artificial Neural Network (ANN), were tested and their mapping accuracies compared for the results derived from both satellite images. For the Worldview-2 data, SVM gave the highest overall accuracy (92.12%, kappa = 0.90), followed by ANN (90.82%, kappa = 0.89) and MLC (90.55%, kappa = 0.88). For the Landsat 8 data, MLC (82.04%) showed the highest classification accuracy compared with SVM (77.31%) and ANN (75.23%). The producer's accuracies of the classification results are also presented in the paper.
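The overall accuracies and kappa statistics reported above come from a standard error-matrix calculation, which can be sketched as follows (the confusion-matrix values below are hypothetical):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                 # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n**2   # chance agreement from marginals
    return po, (po - pe) / (1 - pe)
```

Producer's accuracy for a class is the corresponding diagonal entry divided by its row (reference) total, which is the per-class figure the paper also reports.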
A discontinuous Galerkin method for poroelastic wave propagation: The two-dimensional case
NASA Astrophysics Data System (ADS)
Dudley Ward, N. F.; Lähivaara, T.; Eveson, S.
2017-12-01
In this paper, we consider a high-order discontinuous Galerkin (DG) method for modelling wave propagation in coupled poroelastic-elastic media. The upwind numerical flux is derived as an exact solution for the Riemann problem including the poroelastic-elastic interface. Attenuation mechanisms in both Biot's low- and high-frequency regimes are considered. The current implementation supports non-uniform basis orders which can be used to control the numerical accuracy element by element. In the numerical examples, we study the convergence properties of the proposed DG scheme and provide experiments where the numerical accuracy of the scheme under consideration is compared to analytic and other numerical solutions.
Constructing better classifier ensemble based on weighted accuracy and diversity measure.
Zeng, Xiaodong; Wong, Derek F; Chao, Lidia S
2014-01-01
A weighted accuracy and diversity (WAD) method is presented: a novel measure for evaluating the quality of a classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis; that is, a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutually restraining factors; that is, an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, using two heuristic search algorithms, a genetic algorithm and forward hill-climbing, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases.
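One plausible reading of the WAD score is a weighted harmonic mean of ensemble accuracy and diversity; the sketch below follows that reading (the paper's exact weighting scheme may differ):

```python
def wad_score(accuracy, diversity, w_acc=0.5, w_div=0.5):
    """Weighted harmonic mean of ensemble accuracy and diversity,
    in the spirit of the WAD measure (the exact weighting may differ
    from the paper's formulation)."""
    if accuracy == 0 or diversity == 0:
        return 0.0
    return (w_acc + w_div) / (w_acc / accuracy + w_div / diversity)
```

With equal weights this reduces to the plain harmonic mean, which penalizes an ensemble that scores high on one criterion but low on the other: wad_score(0.9, 0.3) is 0.45, well below the arithmetic mean of 0.6.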
Camera system considerations for geomorphic applications of SfM photogrammetry
Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John
2017-01-01
The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. Use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community despite the details of these instruments being largely overlooked in current SfM literature. A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating the accuracies of four SfM datasets acquired over multiple years of a gravel-bed river floodplain using independent ground check points, with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principal components of an SfM error budget. Greater information capacity of source imagery was found to increase pixel matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6–37 cm) was sufficient to detect moderate geomorphic change (order 100,000 m3) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5–25% and decreased processing time by 10–30%. Regression analysis of 67 reviewed datasets revealed that the best explanatory variable to predict accuracy of SfM data is photographic scale. Despite the prevalent use of object distance ratios to describe scale, nominal ground sample distance is shown to be a superior metric, explaining 68% of the variability in mean absolute vertical error.
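Nominal ground sample distance, the scale metric found most predictive of SfM accuracy, is straightforward to compute from camera parameters (the example values below are illustrative):

```python
def nominal_gsd(pixel_pitch_mm, focal_length_mm, object_distance_m):
    """Nominal ground sample distance in metres per pixel: the ground
    footprint of one sensor pixel at the given object distance."""
    return pixel_pitch_mm / focal_length_mm * object_distance_m
```

For example, a 5 micrometre (0.005 mm) pixel pitch behind a 24 mm lens flown 120 m above the surface gives a nominal GSD of 2.5 cm per pixel, on the order of the DTM accuracies (6-37 cm) reported above.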
Nikolac Gabaj, Nora; Miler, Marijana; Vrtarić, Alen; Hemar, Marina; Filipi, Petra; Kocijančić, Marija; Šupak Smolčić, Vesna; Ćelap, Ivana; Šimundić, Ana-Maria
2018-04-25
The aim of our study was to perform verification of serum indices on three clinical chemistry platforms. This study was done on three analyzers: Abbott Architect c8000, Beckman Coulter AU5800 (BC) and Roche Cobas 6000 c501. The following analytical specifications were verified: precision (two patient samples), accuracy (the sample with the highest concentration of interferent was serially diluted and measured values compared to theoretical values), comparability (120 patient samples) and cross-reactivity (samples with increasing concentrations of interferent were divided into two aliquots and the remaining interferents were added to each aliquot; measurements were done before and after adding interferents). The best results for precision were obtained for the H index (0.72%-2.08%). Accuracy for the H index was acceptable for Cobas and BC, while on Architect, deviations in the high concentration range were observed (y=0.02 [0.01-0.07]+1.07 [1.06-1.08]x). All three analyzers showed acceptable results in evaluating accuracy of the L index and unacceptable results for the I index. The H index was comparable between BC and both Architect (Cohen's κ [95% CI]=0.795 [0.692-0.898]) and Roche (Cohen's κ [95% CI]=0.825 [0.729-0.922]), while Roche and Architect were not comparable. The I index was not comparable between all analyzer combinations, while the L index was only comparable between Abbott and BC. Cross-reactivity analysis mostly showed that serum indices measurement is affected when a combination of interferences is present. There is heterogeneity between analyzers in hemolysis, icterus, lipemia (HIL) quality performance. Verification of serum indices in routine work is necessary to establish analytical specifications.
Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling
NASA Technical Reports Server (NTRS)
Kenward, T.; Lettenmaier, D. P.
1997-01-01
The effect of vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and the resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high-resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. The resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.
Motsa, S. S.; Magagula, V. M.; Sibanda, P.
2014-01-01
This paper presents a new method for solving higher order nonlinear evolution partial differential equations (NPDEs). The method combines quasilinearisation, the Chebyshev spectral collocation method, and bivariate Lagrange interpolation. In this paper, we use the method to solve several nonlinear evolution equations, such as the modified KdV-Burgers equation, highly nonlinear modified KdV equation, Fisher's equation, Burgers-Fisher equation, Burgers-Huxley equation, and the Fitzhugh-Nagumo equation. The results are compared with known exact analytical solutions from literature to confirm accuracy, convergence, and effectiveness of the method. There is congruence between the numerical results and the exact solutions to a high order of accuracy. Tables were generated to present the order of accuracy of the method; convergence graphs to verify convergence of the method and error graphs are presented to show the excellent agreement between the results from this study and the known results from literature. PMID:25254252
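The Chebyshev spectral collocation ingredient of the method rests on the differentiation matrix at Chebyshev-Gauss-Lobatto points. A minimal sketch of the standard construction (not the authors' code):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x on [-1, 1]
    (the classic construction; D @ f evaluates f' spectrally at the points x)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # negative row sums on diagonal
    return D, x

# Differentiating f(x) = x**2 is exact to machine precision: D @ f == 2x.
D, x = cheb(8)
err = np.max(np.abs(D @ x**2 - 2 * x))
```

Spectral differentiation is exact for polynomials up to degree N, which is why errors for smooth solutions decay exponentially with resolution.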
NASA Astrophysics Data System (ADS)
Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.
2009-04-01
A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of MHD island coalescence instability (CI) in two dimensions. CI is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them while maintaining accuracy. The output of the spectral-element static adaptive refinement simulations is compared with simulations using a finite difference method on the same refinement grids, and both methods are compared to pseudo-spectral simulations with uniform grids as baselines. It is shown that with the statically refined grids roughly scaling linearly with effective resolution, spectral-element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.
Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio
2008-01-01
Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.
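The entropy and second-moment textures mentioned above are derived from a grey-level co-occurrence matrix (GLCM) over a moving window. A one-offset numpy sketch of the two measures (a simplification, not the study's processing chain):

```python
import numpy as np

def glcm_measures(patch, levels=8):
    """Entropy and angular second moment (ASM) from a horizontal-offset
    grey-level co-occurrence matrix of one image window."""
    q = np.floor(patch / patch.max() * (levels - 1)).astype(int)  # quantise grey levels
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):         # horizontal neighbours
        glcm[i, j] += 1
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz)), np.sum(p ** 2)

# A uniform 9x9 window has zero entropy and ASM = 1; texture lowers ASM.
entropy, asm = glcm_measures(np.full((9, 9), 5.0))
```

In practice the GLCM would be accumulated over several offsets/directions and slid across the image, producing one textural band per measure.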
Perceptual impairment in face identification with poor sleep
Beattie, Louise; Walsh, Darragh; McLaren, Jessica; Biello, Stephany M.
2016-01-01
Previous studies have shown impaired memory for faces following restricted sleep. However, it is not known whether lack of sleep impairs performance on face identification tasks that do not rely on recognition memory, despite these tasks being more prevalent in security and forensic professions—for example, in photo-ID checks at national borders. Here we tested whether poor sleep affects accuracy on a standard test of face-matching ability that does not place demands on memory: the Glasgow Face-Matching Task (GFMT). In Experiment 1, participants who reported sleep disturbance consistent with insomnia disorder show impaired accuracy on the GFMT when compared with participants reporting normal sleep behaviour. In Experiment 2, we then used a sleep diary method to compare GFMT accuracy in a control group to participants reporting poor sleep on three consecutive nights—and again found lower accuracy scores in the short sleep group. In both experiments, reduced face-matching accuracy in those with poorer sleep was not associated with lower confidence in their decisions, carrying implications for occupational settings where identification errors made with high confidence can have serious outcomes. These results suggest that sleep-related impairments in face memory reflect difficulties in perceptual encoding of identity, and point towards metacognitive impairment in face matching following poor sleep. PMID:27853547
Estis, Julie M; Dean-Claytor, Ashli; Moore, Robert E; Rowell, Thomas L
2011-03-01
The effects of musical interference and noise on pitch-matching accuracy were examined. Vocal training was explored as a factor influencing pitch-matching accuracy, and the relationship between pitch matching and pitch discrimination was examined. Twenty trained singers (TS) and 20 untrained individuals (UT) vocally matched tones in six conditions (immediate, four types of chords, noise). Fundamental frequencies were calculated, compared with the frequency of the target tone, and converted to semitone difference scores. A pitch discrimination task was also completed. TS showed significantly better pitch matching than UT across all conditions. Individual performances for UT were highly variable. Therefore, untrained participants were divided into two groups: 10 untrained accurate and 10 untrained inaccurate. Comparison of TS with untrained accurate individuals revealed significant differences between groups and across conditions. Compared with immediate vocal matching of target tones, pitch-matching accuracy was significantly reduced, given musical chord and noise interference unless the target tone was presented in the musical chord. A direct relationship between pitch matching and pitch discrimination was revealed. Across pitch-matching conditions, TS were consistently more accurate than UT. Pitch-matching accuracy diminished when auditory interference consisted of chords that did not contain the target tone and noise. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
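Semitone difference scores of the kind used above convert a frequency ratio to a logarithmic scale. A minimal sketch (the frequencies are illustrative):

```python
import math

def semitone_difference(f_produced_hz, f_target_hz):
    """Signed distance in semitones between a produced and a target frequency:
    12 * log2 of the frequency ratio."""
    return 12.0 * math.log2(f_produced_hz / f_target_hz)

# An octave above the target is +12 semitones; a perfect match is 0.
octave = semitone_difference(880.0, 440.0)
match = semitone_difference(440.0, 440.0)
```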
Diagnostic Accuracy of the Neck Tornado Test as a New Screening Test in Cervical Radiculopathy
Park, Juyeon; Park, Woo Young; Hong, Seungbae; An, Jiwon; Koh, Jae Chul; Lee, Youn-Woo; Kim, Yong Chan; Choi, Jong Bum
2017-01-01
Background: The Spurling test, although a highly specific provocative test of the cervical spine in cervical radiculopathy (CR), has low to moderate sensitivity. Thus, we introduced the neck tornado test (NTT) to examine the neck and the cervical spine in CR. Objectives: The aim of this study was to introduce a new provocative test, the NTT, and compare the diagnostic accuracy with a widely accepted provocative test, the Spurling test. Design: Retrospective study. Methods: Medical records of 135 subjects with neck pain (CR, n = 67; without CR, n = 68) who had undergone cervical spine magnetic resonance imaging and been referred to the pain clinic between September 2014 and August 2015 were reviewed. Both the Spurling test and NTT were performed in all patients by expert examiners. Sensitivity, specificity, and accuracy were compared for both the Spurling test and the NTT. Results: The sensitivity of the Spurling test and the NTT was 55.22% and 85.07% (P < 0.0001); specificity, 98.53% and 86.76% (P = 0.0026); accuracy, 77.04% and 85.93% (P = 0.0423), respectively. Conclusions: The NTT is more sensitive with superior diagnostic accuracy for CR diagnosed by magnetic resonance imaging than the Spurling test. PMID:28824298
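The reported sensitivity, specificity and accuracy follow from a 2x2 diagnostic table. A sketch reproducing the Spurling-test figures; the true/false positive counts are inferred here from the stated percentages and cohort sizes (67 CR, 68 non-CR), so they are an assumption:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and overall accuracy from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Counts inferred (assumption) from the reported Spurling-test percentages.
sens, spec, acc = diagnostic_metrics(tp=37, fn=30, tn=67, fp=1)
```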
Kim, Bum Soo; Kim, Tae-Hwan; Kwon, Tae Gyun
2012-01-01
Purpose Several studies have demonstrated the superiority of endorectal coil magnetic resonance imaging (MRI) over pelvic phased-array coil MRI at 1.5 Tesla for local staging of prostate cancer. However, few have studied which evaluation is more accurate at 3 Tesla MRI. In this study, we compared the accuracy of local staging of prostate cancer using pelvic phased-array coil or endorectal coil MRI at 3 Tesla. Materials and Methods Between January 2005 and May 2010, 151 patients underwent radical prostatectomy. All patients were evaluated with either pelvic phased-array coil or endorectal coil prostate MRI prior to surgery (63 endorectal coils and 88 pelvic phased-array coils). Tumor stage based on MRI was compared with pathologic stage. We calculated the specificity, sensitivity and accuracy of each group in the evaluation of extracapsular extension and seminal vesicle invasion. Results Both endorectal coil and pelvic phased-array coil MRI achieved high specificity, low sensitivity and moderate accuracy for the detection of extracapsular extension and seminal vesicle invasion. There were no statistically significant differences in specificity, sensitivity and accuracy between the two groups. Conclusion Overall staging accuracy, sensitivity and specificity were not significantly different between endorectal coil and pelvic phased-array coil MRI. PMID:22476999
Edla, Damodar Reddy; Kuppili, Venkatanareshbabu; Dharavath, Ramesh; Beechu, Nareshkumar Reddy
2017-01-01
Low-power wearable devices for disease diagnosis are used at anytime and anywhere. These are non-invasive and pain-free for the better quality of life. However, these devices are resource constrained in terms of memory and processing capability. Memory constraint allows these devices to store a limited number of patterns and processing constraint provides delayed response. It is a challenging task to design a robust classification system under above constraints with high accuracy. In this Letter, to resolve this problem, a novel architecture for weightless neural networks (WNNs) has been proposed. It uses variable sized random access memories to optimise the memory usage and a modified binary TRIE data structure for reducing the test time. In addition, a bio-inspired-based genetic algorithm has been employed to improve the accuracy. The proposed architecture is experimented on various disease datasets using its software and hardware realisations. The experimental results prove that the proposed architecture achieves better performance in terms of accuracy, memory saving and test time as compared to standard WNNs. It also outperforms in terms of accuracy as compared to conventional neural network-based classifiers. The proposed architecture is a powerful part of most of the low-power wearable devices for the solution of memory, accuracy and time issues. PMID:28868148
Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David
2017-11-01
Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and the modality of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated that males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues than on auditory or haptic cues for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels than under high intensity. Navigation accuracy was lower under high-level exertion than under medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted of the effect of physical exertion and information presentation modality on pattern recognition and navigation. In occupations requiring information presentation to workers who are simultaneously performing a physical task, the visual modality appears most effective under high-level exertion, while haptic cueing degrades performance.
Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry
NASA Astrophysics Data System (ADS)
Feng, Shijie; Zuo, Chao; Tao, Tianyang; Hu, Yan; Zhang, Minliang; Chen, Qian; Gu, Guohua
2018-04-01
Phase-shifting profilometry (PSP) is a widely used approach to high-accuracy three-dimensional shape measurements. However, when it comes to moving objects, phase errors induced by the movement often result in severe artifacts even though a high-speed camera is in use. From our observations, there are three kinds of motion artifacts: motion ripples, motion-induced phase unwrapping errors, and motion outliers. We present a novel motion-compensated PSP to remove the artifacts for dynamic measurements of rigid objects. The phase error of motion ripples is analyzed for the N-step phase-shifting algorithm and is compensated using the statistical nature of the fringes. The phase unwrapping errors are corrected exploiting adjacent reliable pixels, and the outliers are removed by comparing the original phase map with a smoothed phase map. Compared with the three-step PSP, our method can improve the accuracy by more than 95% for objects in motion.
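The N-step phase-shifting algorithm referred to above recovers the wrapped phase from N fringe images via an arctangent. A minimal sketch under the standard fringe model (not the authors' motion-compensated code):

```python
import numpy as np

def n_step_phase(intensities):
    """Wrapped phase from N phase-shifted fringe images with shifts 2*pi*n/N.
    Fringe model: I_n = A + B*cos(phi + 2*pi*n/N), N >= 3."""
    I = np.asarray(intensities, dtype=float)
    N = I.shape[0]
    delta = 2 * np.pi * np.arange(N) / N
    num = -np.tensordot(np.sin(delta), I, axes=(0, 0))
    den = np.tensordot(np.cos(delta), I, axes=(0, 0))
    return np.arctan2(num, den)

# Synthetic check: recover a known phase from four shifted frames.
phi_true = 0.7
frames = [5 + 2 * np.cos(phi_true + 2 * np.pi * n / 4) for n in range(4)]
phi = n_step_phase(frames)
```

Object motion between frames perturbs the assumed shifts 2*pi*n/N, which is exactly what produces the ripple errors the paper sets out to compensate.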
Climatologies at high resolution for the earth’s land surface areas
Karger, Dirk Nikolaus; Conrad, Olaf; Böhner, Jürgen; Kawohl, Tobias; Kreft, Holger; Soria-Auza, Rodrigo Wilber; Zimmermann, Niklaus E.; Linder, H. Peter; Kessler, Michael
2017-01-01
High-resolution information on climatic conditions is essential to many applications in environmental and ecological sciences. Here we present the CHELSA (Climatologies at high resolution for the earth’s land surface areas) data of downscaled model output temperature and precipitation estimates of the ERA-Interim climatic reanalysis to a high resolution of 30 arc sec. The temperature algorithm is based on statistical downscaling of atmospheric temperatures. The precipitation algorithm incorporates orographic predictors including wind fields, valley exposition, and boundary layer height, with a subsequent bias correction. The resulting data consist of a monthly temperature and precipitation climatology for the years 1979–2013. We compare the data derived from the CHELSA algorithm with other standard gridded products and station data from the Global Historical Climate Network. We compare the performance of the new climatologies in species distribution modelling and show that we can increase the accuracy of species range predictions. We further show that CHELSA climatological data has a similar accuracy as other products for temperature, but that its predictions of precipitation patterns are better. PMID:28872642
Validated method for quantification of genetically modified organisms in samples of maize flour.
Kunert, Renate; Gach, Johannes S; Vorauer-Uhl, Karola; Engel, Edwin; Katinger, Hermann
2006-02-08
Sensitive and accurate testing for trace amounts of biotechnology-derived DNA from plant material is the prerequisite for detecting 1% or 0.5% genetically modified ingredients in food products or their raw materials. Compared to ELISA detection of expressed proteins, real-time PCR (RT-PCR) amplification offers easier sample preparation and lower detection limits. Of the different methods of DNA preparation, the CTAB method was chosen for its high flexibility in starting material and its generation of sufficient DNA of relevant quality. Previous RT-PCR data generated with the SYBR green detection method showed that that approach is highly sensitive to sample matrices and genomic DNA content, which influences the interpretation of results. Therefore, this paper describes real-time DNA quantification based on the TaqMan probe method, showing high accuracy and sensitivity, with detection limits lower than 18 copies per sample, applicable both to highly purified plasmid standards and to complex genomic DNA matrices. The results were evaluated with ValiData for homogeneity of variance, linearity, accuracy of the standard curve, and standard deviation.
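Quantification with a TaqMan assay reads copy number off a standard curve of Ct versus log10 copies. A sketch of that calculation; the slope and intercept values below are illustrative assumptions, not the paper's calibration:

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Copy number from a real-time PCR Ct value via a standard curve
    Ct = slope * log10(copies) + intercept.
    slope and intercept here are illustrative, not from the study."""
    return 10 ** ((ct - intercept) / slope)

# With these illustrative parameters, Ct = 38 corresponds to a single copy.
one_copy = copies_from_ct(38.0)
```

A slope near -3.32 corresponds to 100% amplification efficiency (a tenfold dilution shifts Ct by about 3.32 cycles).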
Evaluation of registration accuracy between Sentinel-2 and Landsat 8
NASA Astrophysics Data System (ADS)
Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia
2016-08-01
Since June 2015, Sentinel-2A has been delivering high-resolution optical images (ground resolution up to 10 meters) to provide global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B, along with the integration of Landsat images, will provide time series with an unprecedented revisit time, indispensable for numerous monitoring applications in which high-resolution multi-temporal information is required; these include agriculture, water bodies, and natural hazards, to name a few. However, the combined use of multi-temporal images requires accurate geometric registration, i.e. pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed around the world. Images were compared with digital correlation techniques for image matching, obtaining an evaluation of registration accuracy with an affine transformation as the geometric model. Results demonstrate that sub-pixel accuracy was achieved between the 10 m resolution Sentinel-2 band 3 and the 15 m resolution panchromatic Landsat band 8.
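An affine transformation as the registration model can be estimated by least squares from matched points. A minimal sketch (not the authors' correlation-based pipeline; the correspondences below are synthetic):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points onto dst points.
    Returns the 2x3 matrix [A | t] such that dst ~ A @ src + t."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    G = np.hstack([src, np.ones((len(src), 1))])      # design matrix rows [x y 1]
    params, *_ = np.linalg.lstsq(G, dst, rcond=None)  # solve for both output coords
    return params.T                                   # shape (2, 3)

# Recover a pure shift of (2, -1) from three synthetic correspondences.
src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, -1), (3, -1), (2, 0)]
M = fit_affine(src, dst)
```

With many matched points from image correlation, the residuals of this fit give the sub-pixel co-registration accuracy the paper reports.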
Developing a weighted measure of speech sound accuracy.
Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J
2011-02-01
To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
Leveraging transcript quantification for fast computation of alternative splicing profiles.
Alamancos, Gael P; Pagès, Amadís; Trincado, Juan L; Bellora, Nicolás; Eyras, Eduardo
2015-09-01
Alternative splicing plays an essential role in many cellular processes and bears major relevance to the understanding of multiple diseases, including cancer. High-throughput RNA sequencing allows genome-wide analyses of splicing across multiple conditions. However, the increasing number of available data sets represents a major challenge in terms of computation time and storage requirements. We describe SUPPA, a computational tool to calculate relative inclusion values of alternative splicing events, exploiting fast transcript quantification. SUPPA's accuracy is comparable to, and sometimes better than, that of standard methods on simulated as well as real RNA-sequencing data, benchmarked against experimentally validated events. We assess the variability in terms of the choice of annotation and provide evidence that using complete transcripts rather than more transcripts per gene provides better estimates. Moreover, SUPPA coupled with de novo transcript reconstruction methods does not achieve accuracies as high as using quantification of known transcripts, but remains comparable to existing methods. Finally, we show that SUPPA is more than 1000 times faster than standard methods. Coupled with fast transcript quantification, SUPPA provides inclusion values at a much higher speed than existing methods without compromising accuracy, thereby facilitating the systematic splicing analysis of large data sets with limited computational resources. The software is implemented in Python 2.7 and is available under the MIT license at https://bitbucket.org/regulatorygenomicsupf/suppa. © 2015 Alamancos et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
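A relative inclusion value (PSI, "percent spliced-in") of the kind SUPPA computes reduces to a ratio of transcript abundances. A minimal sketch of the definition (the TPM values are invented for illustration):

```python
def psi(inclusion_tpm, exclusion_tpm):
    """Percent spliced-in for one event: abundance of the isoforms that
    include the event over the total abundance of isoforms defining it."""
    total = inclusion_tpm + exclusion_tpm
    return inclusion_tpm / total if total > 0 else float("nan")

# 30 TPM of inclusion isoforms vs 10 TPM of skipping isoforms -> PSI = 0.75
value = psi(30.0, 10.0)
```

Because the inputs are per-transcript abundances, any fast quantifier can feed this computation, which is the source of the speed-up the abstract describes.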
Fat fraction bias correction using T1 estimates and flip angle mapping.
Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A
2014-01-01
To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.
Acceptability and feasibility of a virtual counselor (VICKY) to collect family health histories.
Wang, Catharine; Bickmore, Timothy; Bowen, Deborah J; Norkunas, Tricia; Campion, MaryAnn; Cabral, Howard; Winter, Michael; Paasche-Orlow, Michael
2015-10-01
To overcome literacy-related barriers in the collection of electronic family health histories, we developed an animated Virtual Counselor for Knowing your Family History, or VICKY. This study examined the acceptability and accuracy of using VICKY to collect family histories from underserved patients as compared with My Family Health Portrait (MFHP). Participants were recruited from a patient registry at a safety net hospital and randomized to use either VICKY or MFHP. Accuracy was determined by comparing tool-collected histories with those obtained by a genetic counselor. A total of 70 participants completed this study. Participants rated VICKY as easy to use (91%) and easy to follow (92%), would recommend VICKY to others (83%), and were highly satisfied (77%). VICKY identified 86% of first-degree relatives and 42% of second-degree relatives; combined accuracy was 55%. As compared with MFHP, VICKY identified a greater number of health conditions overall (49% with VICKY vs. 31% with MFHP; incidence rate ratio (IRR): 1.59; 95% confidence interval (95% CI): 1.13-2.25; P = 0.008), in particular, hypertension (47 vs. 15%; IRR: 3.18; 95% CI: 1.66-6.10; P = 0.001) and type 2 diabetes (54 vs. 22%; IRR: 2.47; 95% CI: 1.33-4.60; P = 0.004). These results demonstrate that technological support for documenting family history risks can be highly accepted, feasible, and effective.
Zhang, Ray; Isakow, Warren; Kollef, Marin H; Scott, Mitchell G
2017-09-01
Due to accuracy concerns, the Food and Drug Administration issued guidances to manufacturers that resulted in Center for Medicare and Medicaid Services stating that the use of meters in critically ill patients is "off-label" and constitutes "high complexity" testing. This is causing significant workflow problems in ICUs nationally. We wished to determine whether real-world accuracy of modern glucose meters is worse in ICU patients compared with non-ICU inpatients. We reviewed glucose results over the preceding 3 years, comparing results from paired glucose meter and central laboratory tests performed within 60 minutes of each other in ICU versus non-ICU settings. Seven ICU and 30 non-ICU wards at a 1,300-bed academic hospital in the United States. A total of 14,763 general medicine/surgery inpatients and 20,970 ICU inpatients. None. Compared meter results with near simultaneously performed laboratory results from the same patient by applying the 2016 U.S. Food and Drug Administration accuracy criteria, determining mean absolute relative difference and examining where paired results fell within the Parkes consensus error grid zones. A higher percentage of glucose meter results from ICUs than from non-ICUs passed 2016 Food and Drug Administration accuracy criteria (p < 10) when comparing meter results with laboratory results. At 1 minute, no meter result from ICUs posed dangerous or significant risk by error grid analysis, whereas at 10 minutes, less than 0.1% of ICU meter results did, which was not statistically different from non-ICU results. Real-world accuracy of modern glucose meters is at least as accurate in the ICU setting as in the non-ICU setting at our institution.
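Mean absolute relative difference (MARD), one of the accuracy summaries used above, is straightforward to compute from paired readings. A minimal sketch with invented values (not data from the study):

```python
def mard(meter, lab):
    """Mean absolute relative difference (%) of meter readings vs paired
    laboratory reference values."""
    diffs = [abs(m - r) / r for m, r in zip(meter, lab)]
    return 100.0 * sum(diffs) / len(diffs)

# Toy paired glucose readings in mg/dL (illustrative only).
value = mard([100, 110, 95, 210], [105, 100, 100, 200])
```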
Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.
Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José
2018-03-28
In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy on the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E, MDs and MDe, including the random intercepts of the lines with the GK method, had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.
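A Gaussian kernel (GK) of the kind referred to above is commonly computed from marker distances scaled by their median. A sketch of that construction (the bandwidth h and the toy marker matrix are illustrative assumptions):

```python
import numpy as np

def gaussian_kernel(X, h=1.0):
    """Gaussian kernel among genotypes: K_ij = exp(-h * d_ij^2 / q), with d the
    Euclidean marker distance and q the median off-diagonal squared distance
    (a common scaling in genomic-prediction GK models)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    q = np.median(d2[np.triu_indices_from(d2, k=1)])
    return np.exp(-h * d2 / q)

# Toy marker matrix: 3 lines x 4 markers coded 0/1/2 (illustrative).
X = np.array([[0, 1, 2, 1], [0, 1, 2, 0], [2, 1, 0, 2]], float)
K = gaussian_kernel(X)
```

Unlike the linear GB kernel, the GK decays nonlinearly with marker distance, which is what lets it capture non-additive similarity among lines.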
High-accuracy user identification using EEG biometrics.
Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip
2016-08-01
We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved, to more than 96.7%, by joint classification of multiple epochs.
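The gain from joint classification of multiple ERP epochs can be illustrated with a simple majority vote over per-epoch predictions. This is only a sketch; the paper's actual fusion method is not specified here, and the user IDs below are hypothetical.

```python
from collections import Counter

def majority_vote(epoch_predictions):
    """Fuse per-epoch identity predictions by majority vote.

    epoch_predictions: list of predicted user IDs, one per ERP epoch.
    Returns the most common prediction.
    """
    if not epoch_predictions:
        raise ValueError("need at least one epoch prediction")
    return Counter(epoch_predictions).most_common(1)[0][0]

# Two of three noisy single-epoch predictions agree on the same user:
print(majority_vote(["user07", "user21", "user07"]))  # -> user07
```

Even with a modest per-epoch accuracy, independent errors tend to be outvoted as more epochs are combined, which is consistent with the jump from 72% to over 96.7% reported above.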
Yoon, Paul K; Zihajehzadeh, Shaghayegh; Bong-Soo Kang; Park, Edward J
2015-08-01
This paper proposes a novel indoor localization method using Bluetooth Low Energy (BLE) and an inertial measurement unit (IMU). Multipath and non-line-of-sight errors in low-power wireless localization systems commonly produce outliers that degrade positioning accuracy. We address this problem by adaptively weighting the estimates from the IMU and BLE in our proposed cascaded Kalman filter (KF). The positioning accuracy is further improved with the Rauch-Tung-Striebel smoother. The performance of the proposed algorithm is compared experimentally against that of the standard KF. The results show that the proposed algorithm can maintain high accuracy when tracking the position of the sensor in the presence of outliers.
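The idea of down-weighting outlier measurements in a Kalman filter can be shown with a minimal scalar update step. This is a 1-D sketch under made-up numbers, not the paper's cascaded IMU/BLE filter or its RTS smoother.

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update.

    x, p: prior state estimate and its variance
    z, r: measurement and its variance; inflating r adaptively
          down-weights a suspected outlier.
    """
    k = p / (p + r)          # Kalman gain
    x_new = x + k * (z - x)  # corrected estimate
    p_new = (1 - k) * p      # corrected variance
    return x_new, p_new

# A trusted measurement (small r) pulls the estimate strongly...
x, p = kalman_update(0.0, 1.0, 1.0, 0.1)
# ...while a suspected outlier (large r) barely moves it.
x2, p2 = kalman_update(0.0, 1.0, 5.0, 100.0)
```

Adaptive schemes like the one described above effectively choose r per measurement, so BLE outliers from multipath get large r and contribute little.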
A space system for high-accuracy global time and frequency comparison of clocks
NASA Technical Reports Server (NTRS)
Decher, R.; Allan, D. W.; Alley, C. O.; Vessot, R. F. C.; Winkler, G. M. R.
1981-01-01
A Space Shuttle experiment in which a hydrogen maser clock on board the Space Shuttle will be compared with clocks on the ground using two-way microwave and short-pulse laser signals is described. The accuracy goal for the experiment is 1 nsec or better for the time transfer and 1 part in 10^14 for the frequency comparison. A direct frequency comparison of primary standards at the 10^-14 accuracy level is a unique feature of the proposed system. Both time and frequency transfer will be accomplished by microwave transmission, while the laser signals provide calibration of the system as well as subnanosecond time transfer.
Accuracy of frozen section in the diagnosis of ovarian tumours.
Toneva, F; Wright, H; Razvi, K
2012-07-01
The purpose of our retrospective study was to assess the accuracy of intraoperative frozen section diagnosis compared to final paraffin diagnosis in ovarian tumours at a gynaecological oncology centre in the UK. We analysed 66 cases and observed that frozen section consultation agreed with final paraffin diagnosis in 59 cases, which provided an accuracy of 89.4%. The overall sensitivity and specificity for all tumours were 85.4% and 100%, respectively. The positive predictive value (PPV) and negative predictive value (NPV) were 100% and 89.4%, respectively. Of the seven cases with discordant results, the majority were large, mucinous tumours, which is in line with previous studies. Our study demonstrated that despite its limitations, intraoperative frozen section has a high accuracy and sensitivity for assessing ovarian tumours; however, care needs to be taken with large, mucinous tumours.
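The sensitivity, specificity, PPV, NPV, and accuracy figures reported above all follow from a 2×2 confusion matrix. A generic sketch (the counts below are hypothetical and chosen for illustration, not reconstructed from the study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic-accuracy summary.

    tp/fp/tn/fn: true/false positives and negatives versus the
    reference standard (here, final paraffin diagnosis).
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a 66-case series:
m = diagnostic_metrics(tp=41, fp=0, tn=18, fn=7)
```

With zero false positives, specificity and PPV are both 100% regardless of the other counts, matching the pattern reported for frozen section.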
Automatic Segmentation of Invasive Breast Carcinomas from DCE-MRI using Time Series Analysis
Jayender, Jagadaeesan; Chikarmane, Sona; Jolesz, Ferenc A; Gombos, Eva
2014-08-01
The purpose of this study was to accurately segment invasive ductal carcinomas (IDCs) from dynamic contrast-enhanced MRI (DCE-MRI) using time series analysis based on linear dynamic system (LDS) modeling. Quantitative segmentation methods based on black-box modeling and pharmacokinetic modeling are highly dependent on imaging pulse sequence, timing of bolus injection, arterial input function, imaging noise, and fitting algorithms. We modeled the underlying dynamics of the tumor by an LDS and used the system parameters to segment the carcinoma on the DCE-MRI. Twenty-four patients with biopsy-proven IDCs were analyzed. The lesions segmented by the algorithm were compared with an expert radiologist's segmentation and the output of a commercial software package, CADstream. The results are quantified in terms of the accuracy and sensitivity of detecting the lesion and the amount of overlap, measured by the Dice similarity coefficient (DSC). The segmentation algorithm detected the tumor with 90% accuracy and 100% sensitivity when compared with the radiologist's segmentation, and 82.1% accuracy and 100% sensitivity when compared with the CADstream output. The overlap of the algorithm output with the radiologist's segmentation and the CADstream output, computed in terms of the DSC, was 0.77 and 0.72, respectively. The algorithm also shows robust stability to imaging noise: simulated imaging noise with zero mean and standard deviation equal to 25% of the base signal intensity was added to the DCE-MRI series, and the amount of overlap between the tumor maps generated by the LDS-based algorithm from the noisy and original DCE-MRI was DSC = 0.95. The time-series-analysis-based segmentation algorithm provides high accuracy and sensitivity in delineating the regions of enhanced perfusion corresponding to tumor from DCE-MRI. © 2013 Wiley Periodicals, Inc.
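The Dice similarity coefficient used to score segmentation overlap is DSC = 2|A ∩ B| / (|A| + |B|). A generic sketch on flat binary masks (not the study's code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two same-length binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    # Two empty masks overlap perfectly by convention.
    return 2 * inter / size if size else 1.0

print(dice([1, 1, 0, 1], [1, 0, 0, 1]))  # -> 0.8
```

A DSC of 0.95 between tumor maps from noisy and clean series, as reported above, indicates near-identical masks.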
Dy, Christopher J; Taylor, Samuel A; Patel, Ronak M; Kitay, Alison; Roberts, Timothy R; Daluiski, Aaron
2012-09-01
Recent emphasis on shared decision making and patient-centered research has increased the importance of patient education and health literacy. The internet is rapidly growing as a source of self-education for patients. However, concern exists over the quality, accuracy, and readability of the information. Our objective was to determine whether the quality, accuracy, and readability of online information about distal radius fractures vary with the search term. This was a prospective evaluation of 3 search engines using 3 search terms of varying sophistication ("distal radius fracture," "wrist fracture," and "broken wrist"). We evaluated 70 unique Web sites for quality, accuracy, and readability. We used comparative statistics to determine whether the search term affected the quality, accuracy, and readability of the Web sites found. Three orthopedic surgeons independently gauged the quality and accuracy of information using a set of predetermined scoring criteria. We evaluated the readability of each Web site using the Flesch-Kincaid reading grade level. There were significant differences in the quality, accuracy, and readability of the information found, depending on the search term. Higher quality and accuracy resulted from the search term "distal radius fracture," particularly compared with Web sites resulting from the term "broken wrist." The reading level was higher than recommended in 65 of the 70 Web sites and was significantly higher when searching with "distal radius fracture" than with "wrist fracture" or "broken wrist." There was no correlation between Web site reading level and quality or accuracy. The readability of information about distal radius fractures in most Web sites was higher than the recommended reading level for the general public. The quality and accuracy of the information found varied significantly with the sophistication of the search term used. Physicians, professional societies, and search engines should consider efforts to improve internet access to high-quality information at an understandable level. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
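The Flesch-Kincaid grade level used above is a fixed formula over word, sentence, and syllable counts: 0.39·(words/sentences) + 11.8·(syllables/words) − 15.59. A rough sketch with a naive vowel-group syllable counter (production readability tools use pronunciation dictionaries, so exact scores will differ):

```python
import re

def naive_syllables(word):
    """Crude syllable count: number of contiguous vowel groups (min 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level of a text sample."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(naive_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

grade = fk_grade("The distal radius fracture healed well.")
```

Longer sentences and polysyllabic medical terms both push the grade up, which is why patient-facing pages so often exceed the recommended reading level.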
NASA Astrophysics Data System (ADS)
Jakubovic, Raphael; Gupta, Shuarya; Guha, Daipayan; Mainprize, Todd; Yang, Victor X. D.
2017-02-01
Cranial neurosurgical procedures are especially delicate, as the surgeon must localize the subsurface anatomy with limited exposure and without the ability to see beyond the surface of the surgical field. Surgical accuracy is imperative, because even minor surgical errors can cause major neurological deficits. Traditionally, surgical precision was highly dependent on surgical skill. However, the introduction of intraoperative surgical navigation has shifted the paradigm and become the current standard of care for cranial neurosurgery. Intraoperative image-guided navigation systems allow the surgeon to visualize the three-dimensional subsurface anatomy using pre-acquired computed tomography (CT) or magnetic resonance (MR) images. The patient anatomy is fused to the pre-acquired images using various registration techniques, and surgical tools are typically localized using optical tracking methods. Although these techniques positively impact complication rates, surgical accuracy is limited by the accuracy of the navigation system, and as such, quantification of surgical error is required. While many different measures of registration accuracy have been presented, true navigation accuracy can only be quantified post-operatively by comparing a ground-truth landmark to the intra-operative visualization. In this study we quantified the accuracy of cranial neurosurgical procedures using a novel optical surface imaging navigation system to visualize the three-dimensional surface anatomy. A tracked probe was placed on the screws of cranial fixation plates during surgery, and the reported position of the centre of each screw was compared to its coordinates in the post-operative CT or MR images, thus quantifying cranial neurosurgical error.
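Once the tracked probe position and the post-operative image coordinate of the same screw are expressed in a common reference frame, the navigation error is just their Euclidean distance. A sketch with made-up millimetre coordinates:

```python
import math

def navigation_error(tracked_mm, imaged_mm):
    """Euclidean distance (mm) between the intraoperatively tracked
    screw position and its post-operative CT/MR coordinate."""
    return math.dist(tracked_mm, imaged_mm)

# Hypothetical landmark: probe tip vs. screw centre in the image.
print(navigation_error((10.0, 4.0, 7.0), (10.0, 1.0, 3.0)))  # -> 5.0
```

Repeating this over many screws and cases yields the error distribution used to characterize a navigation system's true accuracy.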
Comparing de novo genome assembly: the long and short of it.
Narzisi, Giuseppe; Mishra, Bud
2011-04-29
Recent advances in DNA sequencing technology and their focal role in Genome Wide Association Studies (GWAS) have rekindled growing interest in the whole-genome sequence assembly (WGSA) problem, thereby inundating the field with a plethora of new formalizations, algorithms, heuristics, and implementations. And yet, scant attention has been paid to comparative assessments of these assemblers' quality and accuracy. No commonly accepted, standardized method for comparison exists yet. Even worse, the widely used metrics for comparing assembled sequences emphasize only size, poorly capturing contig quality and accuracy. This paper addresses these concerns: it highlights common anomalies in assembly accuracy through a rigorous study of several assemblers, compared under both standard metrics (N50, coverage, contig sizes, etc.) and a more comprehensive metric (Feature-Response Curves, FRC) introduced here; FRC transparently captures the trade-off between contig quality and size. For this purpose, most of the publicly available major sequence assemblers, for both low-coverage long-read (Sanger) and high-coverage short-read (Illumina) technologies, are compared. These assemblers are applied to microbial (Escherichia coli, Brucella, Wolbachia, Staphylococcus, Helicobacter) and partial human genome sequences (Chr. Y), using sequence reads of various read-lengths, coverages, and accuracies, with and without mate-pairs. It is hoped that, based on these evaluations, computational biologists will identify innovative sequence assembly paradigms, bioinformaticists will determine promising approaches for developing "next-generation" assemblers, and biotechnologists will formulate more meaningful design desiderata for sequencing technology platforms. A new software tool for computing the FRC metric has been developed and is available through the AMOS open-source consortium.
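Among the size-based metrics the paper critiques, N50 is the most common: the contig length L such that contigs of length ≥ L cover at least half the assembly. A minimal sketch:

```python
def n50(contig_lengths):
    """N50 of an assembly: walk contigs from longest to shortest and
    return the length at which the running total reaches half the
    total assembly length."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length
    return 0

print(n50([100, 50, 40, 10]))  # -> 100
```

N50 says nothing about misassemblies, which is exactly the blind spot the Feature-Response Curve metric is designed to expose.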
Thanh Noi, Phan; Kappas, Martin
2017-01-01
In previous classification studies, three non-parametric classifiers, Random Forest (RF), k-Nearest Neighbor (kNN), and Support Vector Machine (SVM), were reported as the foremost classifiers at producing high accuracies. However, only a few studies have compared the performances of these classifiers with different training sample sizes for the same remote sensing images, particularly the Sentinel-2 Multispectral Imager (MSI). In this study, we examined and compared the performances of the RF, kNN, and SVM classifiers for land use/cover classification using Sentinel-2 image data. An area of 30 × 30 km² within the Red River Delta of Vietnam with six land use/cover types was classified using 14 different training sample sizes, including balanced and imbalanced, from 50 to over 1250 pixels/class. All classification results showed a high overall accuracy (OA) ranging from 90% to 95%. Among the three classifiers and 14 sub-datasets, SVM produced the highest OA with the least sensitivity to the training sample sizes, followed consecutively by RF and kNN. In relation to the sample size, all three classifiers showed a similar and high OA (over 93.85%) when the training sample size was large enough, i.e., greater than 750 pixels/class or representing an area of approximately 0.25% of the total study area. The high accuracy was achieved with both imbalanced and balanced datasets. PMID:29271909
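Of the three classifiers compared above, kNN is the simplest to state: label each pixel by the majority class among its k nearest training samples in feature space. A toy sketch (illustrative only; the study used established library implementations on Sentinel-2 band values, and the feature vectors and class names below are hypothetical):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority label among its k nearest
    training samples. `train` is a list of (features, label) pairs."""
    neighbors = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

train = [((0, 0), "water"), ((0, 1), "water"),
         ((5, 5), "urban"), ((5, 6), "urban")]
print(knn_predict(train, (1, 0), k=3))  # -> water
```

kNN's reliance on raw distances to training pixels also hints at why, as reported above, it was the most sensitive of the three to training sample size.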
Labrenz, Franziska; Icenhour, Adriane; Benson, Sven; Elsenbruch, Sigrid
2015-01-01
As a fundamental learning process, fear conditioning promotes the formation of associations between predictive cues and biologically significant signals. In its application to pain, conditioning may provide important insight into mechanisms underlying pain-related fear, although knowledge, especially in interoceptive pain paradigms, remains scarce. Furthermore, while the influence of contingency awareness on excitatory learning is the subject of ongoing debate, its role in pain-related acquisition is poorly understood and essentially unknown regarding extinction as inhibitory learning. Therefore, we addressed the impact of contingency awareness on learned emotional responses to pain- and safety-predictive cues in a combined dataset of two pain-related conditioning studies. In total, 75 healthy participants underwent differential fear acquisition, during which rectal distensions as interoceptive unconditioned stimuli (US) were repeatedly paired with a predictive visual cue (conditioned stimulus; CS+) while another cue (CS−) was presented unpaired. During extinction, both CS were presented without the US. CS valence, indicating learned emotional responses, and CS-US contingencies were assessed on visual analog scales (VAS). Based on an integrative measure of contingency accuracy, a median split was performed to compare groups with low vs. high contingency accuracy regarding learned emotional responses. To investigate the predictive value of contingency accuracy, regression analyses were conducted. Highly accurate individuals revealed more pronounced negative emotional responses to the CS+ and increased positive responses to the CS− compared with participants with low contingency accuracy. Following extinction, highly accurate individuals had fully extinguished pain-predictive cue properties while exhibiting persistent positive emotional responses to safety signals. In contrast, individuals with low accuracy revealed equally positive emotional responses to both CS+ and CS−. Contingency accuracy predicted variance in the formation of positive responses to safety cues, whereas no predictive value was found for danger cues following acquisition or for either cue following extinction. Our findings underscore specific roles of learned danger and safety in pain-related acquisition and extinction. Contingency accuracy appears to distinctly impact learned emotional responses to safety and danger cues, suggesting that aversive learning can occur independently of CS-US awareness. The interplay of cognitive and emotional factors in shaping excitatory and inhibitory pain-related learning may contribute to altered pain processing, underscoring its clinical relevance in chronic pain. PMID:26640433
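The median split used above to form low- vs. high-accuracy groups can be sketched directly (the subject IDs and scores below are hypothetical; the convention for values equal to the median varies between studies):

```python
import statistics

def median_split(scores):
    """Split subjects into (low, high) groups at the sample median.
    Values equal to the median go to `low`, one common convention."""
    med = statistics.median(scores.values())
    low = {s: v for s, v in scores.items() if v <= med}
    high = {s: v for s, v in scores.items() if v > med}
    return low, high

low, high = median_split({"s1": 0.2, "s2": 0.9, "s3": 0.5, "s4": 0.7})
```

Dichotomizing a continuous accuracy measure loses information, which is presumably why the study complemented the split with regression analyses on the continuous scores.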
Creating a Computer Adaptive Test Version of the Late-Life Function & Disability Instrument
Jette, Alan M.; Haley, Stephen M.; Ni, Pengsheng; Olarsch, Sippy; Moed, Richard
2009-01-01
Background This study applied Item Response Theory (IRT) and Computer Adaptive Test (CAT) methodologies to develop a prototype function and disability assessment instrument for use in aging research. Herein, we report on the development of the CAT version of the Late-Life Function & Disability instrument (Late-Life FDI) and evaluate its psychometric properties. Methods We employed confirmatory factor analysis, IRT methods, validation, and computer simulation analyses of data collected from 671 older adults residing in residential care facilities. We compared accuracy, precision, and sensitivity to change of scores from CAT versions of two Late-Life FDI scales with scores from the fixed-form instrument. Score estimates from the prototype CAT versus the original instrument were compared in a sample of 40 older adults. Results Distinct function and disability domains were identified within the Late-Life FDI item bank and used to construct two prototype CAT scales. Using retrospective data, scores from computer simulations of the prototype CAT scales were highly correlated with scores from the original instrument. The results of computer simulation, accuracy, precision, and sensitivity to change of the CATs closely approximated those of the fixed-form scales, especially for the 10- or 15-item CAT versions. In the prospective study each CAT was administered in less than 3 minutes and CAT scores were highly correlated with scores generated from the original instrument. Conclusions CAT scores of the Late-Life FDI were highly comparable to those obtained from the full-length instrument with a small loss in accuracy, precision, and sensitivity to change. PMID:19038841
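The core of a CAT is selecting, at the current ability estimate θ, the unadministered item with maximum Fisher information. A sketch under a two-parameter logistic (2PL) IRT model; the item names and parameters below are hypothetical, and the Late-Life FDI's actual calibration is not reproduced here:

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of endorsing the item."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items):
    """Pick the item most informative at theta.
    `items` maps item id -> (discrimination a, difficulty b)."""
    return max(items, key=lambda i: item_information(theta, *items[i]))

items = {"walk_1_mile": (1.5, 0.0),
         "climb_stairs": (1.2, -1.0),
         "run_errands": (0.8, 2.0)}
print(next_item(0.0, items))  # -> walk_1_mile
```

Because each administered item is the most informative one available, 10-15 items can approach the precision of the full fixed form, consistent with the simulation results reported above.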
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.
2011-01-01
Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures for Mild Cognitive Impairment (MCI), but at present it has limited value in predicting progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines, and Random Forests, can improve the accuracy, sensitivity, and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis, and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve, and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results The Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and area under the ROC curve (Me = 0.90); however, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with high area under the ROC curve (Me = 0.73), specificity (Me = 0.73), and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66), and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even lower than a median value of 0.5. Conclusions When taking into account sensitivity, specificity, and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in predicting dementia from several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity, and specificity of dementia predictions from neuropsychological testing. PMID:21849043
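The 5-fold cross-validation that generated the classification-parameter distributions can be sketched as a generic index split (not the study's code; real pipelines usually shuffle or stratify first):

```python
def k_fold_indices(n, k=5):
    """Partition indices 0..n-1 into k interleaved folds; yield
    (train, test) index lists with each fold as the test set in turn."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(10, k=5))
```

Each classifier is trained on 4 folds and scored on the held-out fold, giving k scores per metric whose distributions can then be compared with Friedman's test.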
Xu, Jian; Zhao, Hongliang; Wang, Xiaoying; Bai, Yuxiang; Liu, Liwen; Liu, Ying; Wei, Mengqi; Li, Jian; Zheng, Minwen
2014-10-01
To evaluate the diagnostic accuracy, image quality, and radiation dose of prospective electrocardiogram (ECG)-triggered high-pitch dual-source computed tomography (DSCT) in infants and young children with complex coarctation of the aorta (CoA). Forty pediatric patients aged < 4 years with suspected CoA underwent prospective ECG-triggered high-pitch DSCT angiography and transthoracic echocardiography (TTE). Surgery and/or conventional cardiac angiography (CCA) was performed in all patients. The diagnostic accuracy of DSCT angiography and TTE was compared to the surgical and/or CCA findings. The causes of misdiagnoses and missed diagnoses were analyzed, and the advantages and limitations of both imaging modalities were evaluated. Image quality of DSCT was evaluated, and the effective radiation dose was calculated. The sensitivity, specificity, positive predictive value, negative predictive value, and overall diagnostic accuracy of DSCT in the evaluation of complex CoA were 92.37%, 98.51%, 97.32%, 93.57%, and 96.25%, respectively. There was a significant difference in accuracy between DSCT and TTE (χ² = 9.9, P < .05). For a total of 80 extracardiac anomalies, the sensitivity of DSCT (98.8%; 79 of 80) was greater than that of TTE (62.5%; 50 of 80). Conversely, for 38 cardiac anomalies, the sensitivity of DSCT (78.9%; 30 of 38) was less than that of TTE (100%; 38 of 38). The mean image quality score was 4.27 ± 0.73. The mean effective radiation dose was 0.20 ± 0.09 mSv. Prospective ECG-triggered high-pitch DSCT may be a clinically feasible modality for evaluating pediatric patients with complex CoA, providing adequate image quality, high diagnostic accuracy, and low radiation dose. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Earth observation data based rapid flood-extent modelling for tsunami-devastated coastal areas
NASA Astrophysics Data System (ADS)
Hese, Sören; Heyer, Thomas
2016-04-01
Earth observation (EO)-based mapping and analysis of natural hazards plays a critical role in various aspects of post-disaster aid management. Very high-resolution spatial Earth observation data provide important information for managing post-tsunami activities on devastated land and for monitoring re-cultivation and reconstruction. The automatic and fast use of high-resolution EO data for rapid mapping is, however, complicated by high spectral variability in densely populated urban areas and by unpredictable textural and spectral land-surface changes. This paper presents the results of the SENDAI project, which developed an automatic post-tsunami flood-extent modelling concept using RapidEye multispectral satellite data and ASTER Global Digital Elevation Model Version 2 (GDEM V2) data of the eastern coast of Japan (captured after the Tohoku earthquake). We developed both a bathtub-modelling approach and a cost-distance approach, and integrated the roughness parameters of different land-use types to increase the accuracy of flood-extent modelling. Overall, the accuracy of the developed models reached 87-92%, depending on the analysed test site. The flood-modelling approach is explained and the results are compared with published approaches. We conclude that the cost-factor-based approach reaches accuracy comparable to published results from hydrological modelling, while relying on a much simpler dataset that is available globally.
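A bathtub model floods every cell at or below the water level that is hydrologically connected to the source, which amounts to a flood fill over the DEM. A toy grid sketch (not the SENDAI implementation, which adds cost-distance and roughness terms):

```python
from collections import deque

def bathtub_flood(dem, level, source):
    """Flood-fill cells with elevation <= level, starting from `source`
    and spreading through 4-connected neighbors."""
    rows, cols = len(dem), len(dem[0])
    flooded, queue = set(), deque([source])
    while queue:
        r, c = queue.popleft()
        if (r, c) in flooded or not (0 <= r < rows and 0 <= c < cols):
            continue
        if dem[r][c] > level:
            continue  # cell sits above the water level
        flooded.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return flooded

dem = [[0, 1, 5],
       [1, 6, 1],
       [5, 1, 0]]
# The low cells at bottom-right stay dry: below level but disconnected.
print(len(bathtub_flood(dem, level=2, source=(0, 0))))  # -> 3
```

The connectivity requirement is what distinguishes a bathtub model from simply thresholding the DEM, and it is where roughness- or cost-based spreading rules can be substituted.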
NASA Astrophysics Data System (ADS)
O'Neil, Gina L.; Goodall, Jonathan L.; Watson, Layne T.
2018-04-01
Wetlands are important ecosystems that provide many ecological benefits, and their quality and presence are protected by federal regulations. These regulations require wetland delineations, which can be costly and time-consuming to perform. Computer models can assist in this process, but lack the accuracy necessary for environmental planning-scale wetland identification. This study evaluates the potential to improve wetland identification models by modifying digital elevation model (DEM) derivatives, derived from high-resolution and increasingly available light detection and ranging (LiDAR) data, at a scale suitable for small-scale wetland delineations. A novel flow convergence modelling approach is presented in which the Topographic Wetness Index (TWI), curvature, and Cartographic Depth-to-Water index (DTW) are modified to better distinguish wetland from upland areas, combined with ancillary soil data, and used in a Random Forest classification. This approach is applied to four study sites in Virginia, implemented as an ArcGIS model. The model resulted in a significant improvement in average wetland accuracy compared with the commonly used National Wetland Inventory (84.9% vs. 32.1%), at the expense of a moderately lower average non-wetland accuracy (85.6% vs. 98.0%) and average overall accuracy (85.6% vs. 92.0%). From this, we conclude that modifying TWI, curvature, and DTW provides more robust wetland and non-wetland signatures to the models, improving accuracy rates compared with classifications using the original indices. The resulting ArcGIS model is a general tool able to modify these local LiDAR DEM derivatives based on site characteristics to identify wetlands at high resolution.
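For reference, the Topographic Wetness Index mentioned above follows the standard definition TWI = ln(a / tan β), where a is the specific upslope contributing area and β the local slope. A minimal sketch with illustrative values:

```python
# Standard TWI = ln(a / tan(beta)); the slope floor and the example values
# are assumptions for illustration, not the study's modified index.
import math

def twi(upslope_area, slope_rad, min_slope=1e-4):
    """TWI with the slope floored to avoid division by zero on flat cells."""
    return math.log(upslope_area / max(math.tan(slope_rad), min_slope))

# Flat, high-accumulation cells (wetland-like) score higher than steep ones:
wet_like = twi(upslope_area=500.0, slope_rad=math.radians(0.5))
upland = twi(upslope_area=20.0, slope_rad=math.radians(15.0))
print(wet_like > upland)  # True
```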
Sauder, Cara; Bretl, Michelle; Eadie, Tanya
2017-09-01
The purposes of this study were to (1) determine and compare the diagnostic accuracy of a single acoustic measure, smoothed cepstral peak prominence (CPPS), to predict voice disorder status from connected speech samples using two software systems: Analysis of Dysphonia in Speech and Voice (ADSV) and Praat; and (2) determine the relationship between measures of CPPS generated from these programs. This is a retrospective cross-sectional study. Measures of CPPS were obtained from connected speech recordings of 100 subjects with voice disorders and 70 nondysphonic subjects without vocal complaints using commercially available ADSV and freely downloadable Praat software programs. Logistic regression and receiver operating characteristic (ROC) analyses were used to evaluate and compare the diagnostic accuracy of CPPS measures. Relationships between CPPS measures from the programs were determined. Results showed acceptable overall accuracy rates (75% accuracy, ADSV; 82% accuracy, Praat) and area under the ROC curves (area under the curve [AUC] = 0.81, ADSV; AUC = 0.91, Praat) for predicting voice disorder status, with slight differences in sensitivity and specificity. CPPS measures derived from Praat were uniquely predictive of disorder status above and beyond CPPS measures from ADSV (χ²(1) = 40.71, P < 0.001). CPPS measures from both programs were significantly and highly correlated (r = 0.88, P < 0.001). A single acoustic measure of CPPS was highly predictive of voice disorder status using either program. Clinicians may consider using CPPS to complement clinical voice evaluation and screening protocols. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
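The ROC analysis above can be illustrated with a small sketch: AUC computed as the Mann-Whitney probability that a disordered sample outranks a non-dysphonic one. The CPPS values below are invented, and lower CPPS is taken to indicate dysphonia, so scores are negated before ranking:

```python
# AUC via the Mann-Whitney statistic: the probability that a randomly chosen
# positive case scores higher than a randomly chosen negative case (ties
# count half). Data are made up for illustration.

def auc(scores_pos, scores_neg):
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical CPPS values (dB); disordered voices tend to have lower CPPS.
cpps_disordered = [3.1, 4.0, 4.4, 5.2, 6.0]
cpps_typical = [5.8, 6.5, 7.1, 7.4, 8.0]
print(auc([-x for x in cpps_disordered], [-x for x in cpps_typical]))
```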
NASA Astrophysics Data System (ADS)
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, cubature points are known only for degrees 1 to 6, which motivated us to develop a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates their surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element method (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. 
In terms of computational efficiency, the OTSEM is more efficient than the Fekete-based TSEM, although it is slightly costlier than the QSEM when a comparable numerical accuracy is required.
Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs
NASA Astrophysics Data System (ADS)
Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.
2016-06-01
Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
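The relative-accuracy comparison described above amounts to summarizing point-to-point deviations between matched reconstructions. A minimal sketch, with invented coordinates; a real workflow would first register the two clouds into a common frame:

```python
# RMS of point-to-point distances between two matched 3D point sets, a simple
# stand-in for the relative-accuracy figures (mm) reported in the abstract.
import math

def rms_point_error(cloud_a, cloud_b):
    """RMS Euclidean distance between corresponding points of two clouds."""
    assert len(cloud_a) == len(cloud_b)
    sq = 0.0
    for (xa, ya, za), (xb, yb, zb) in zip(cloud_a, cloud_b):
        sq += (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
    return math.sqrt(sq / len(cloud_a))

# Invented coordinates, in metres:
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
test = [(0.001, 0.0, 0.0), (1.0, 0.002, 0.0), (0.0, 1.0, -0.002)]
print(round(rms_point_error(ref, test) * 1000, 2), "mm")
```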
Hwang, Sung Ho; Oh, Yu-Whan; Ham, Soo-Youn; Kang, Eun-Young; Lee, Ki Yeol
2015-01-01
To evaluate the influence of high-pitch mode (HPM) in dual-source computed tomography (DSCT) on the accuracy of three-dimensional (3D) volumetry for solid pulmonary nodules. A lung phantom implanted with 45 solid pulmonary nodules (n = 15 for each of 4-mm, 6-mm, and 8-mm in diameter) was scanned twice, first in conventional pitch mode (CPM) and then in HPM using DSCT. The relative percentage volume errors (RPEs) of 3D volumetry were compared between the HPM and CPM. In addition, the intermode volume variability (IVV) of 3D volumetry was calculated. In the measurement of the 6-mm and 8-mm nodules, there was no significant difference in RPE (p > 0.05, respectively) between the CPM and HPM (IVVs of 1.2 ± 0.9%, and 1.7 ± 1.5%, respectively). In the measurement of the 4-mm nodules, the mean RPE in the HPM (35.1 ± 7.4%) was significantly greater (p < 0.01) than that in the CPM (18.4 ± 5.3%), with an IVV of 13.1 ± 6.6%. However, the IVVs were in an acceptable range (< 25%), regardless of nodule size. The accuracy of 3D volumetry with HPM for solid pulmonary nodules is comparable to that with CPM. However, the use of HPM may adversely affect the accuracy of 3D volumetry for smaller (< 5 mm in diameter) nodules.
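The two error measures in the abstract can be written out directly. The formulas below follow the usual definitions (RPE against the known nodule volume; IVV as the volume difference between the two pitch modes normalized by their mean); the nodule volumes and measured values are illustrative, not the study's data:

```python
# Sketch of relative percentage volume error (RPE) and intermode volume
# variability (IVV) for CT nodule volumetry. Numbers are invented.
import math

def sphere_volume(d_mm):
    """Volume of a spherical nodule of diameter d_mm, in mm^3."""
    return math.pi * d_mm ** 3 / 6.0

def rpe(measured, true):
    """Relative percentage volume error against the known phantom volume."""
    return abs(measured - true) / true * 100.0

def ivv(v_cpm, v_hpm):
    """Intermode volume variability between conventional and high pitch."""
    return abs(v_cpm - v_hpm) / ((v_cpm + v_hpm) / 2.0) * 100.0

true_v = sphere_volume(8.0)          # 8 mm nodule, about 268.1 mm^3
print(round(rpe(255.0, true_v), 1), round(ivv(255.0, 260.0), 2))
```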
Purefoy Johnson, Jessica; Stack, John David; Rowan, Conor; Handel, Ian; O'Leary, John Mark
2017-05-22
To compare accuracy of the ultrasound-guided craniodorsal (CrD) approach with the dorsal (D) approach to the cervical articular process joints, and to evaluate the effect of the transducer, needle gauge, and operator experience. Cervical articular process joints from 14 cadaveric neck specimens were injected using either a D or CrD approach, a linear (13 MHz) or microconvex transducer (10 MHz), and an 18 or 20 gauge needle, by an experienced or inexperienced operator. Injectate consisted of an iodinated contrast material solution. Time taken for injection, number of redirects, and retrieval of synovial fluid were recorded. Accuracy was assessed using a scoring system for contrast seen on computed tomography (CT). The successful performance of intra-articular injections of contrast detected by CT using the D (61/68) and CrD (57/64) approaches was comparable. No significant effect of approach, transducer or needle gauge was observed on injection accuracy, time taken to perform injection, or number of redirects. The 18 gauge needle had a positive correlation with retrieval of synovial fluid. A positive learning curve was observed for the inexperienced operator. Both approaches to the cervical articular process joints were highly accurate. Ultrasound-guided injection of the cervical articular process joints is an easily learnt technique for an inexperienced veterinarian. Either approach may be employed in the field with a high level of accuracy, using widely available equipment.
Accuracy of a hexapod parallel robot kinematics based external fixator.
Faschingbauer, Maximilian; Heuer, Hinrich J D; Seide, Klaus; Wendlandt, Robert; Münch, Matthias; Jürgens, Christian; Kirchner, Rainer
2015-12-01
Different hexapod-based external fixators are increasingly used to treat bone deformities and fractures. Accuracy has not been measured sufficiently for all models. An infrared tracking system was applied to measure positioning maneuvers with a motorized Precision Hexapod® fixator, detecting three-dimensional positions of reflective balls mounted in an L-arrangement on the fixator, simulating bone directions. By omitting one dimension of the coordinates, projections were simulated as if measured on standard radiographs. Accuracy was calculated as the absolute difference between targeted and measured positioning values. In 149 positioning maneuvers, the median values for positioning accuracy of translations and rotations (torsions/angulations) were below 0.3 mm and 0.2° with quartiles ranging from -0.5 mm to 0.5 mm and -1.0° to 0.9°, respectively. The experimental setup was found to be precise and reliable. It can be applied to compare different hexapod-based fixators. Accuracy of the investigated hexapod system was high. Copyright © 2014 John Wiley & Sons, Ltd.
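The accuracy statistics above (medians and quartiles of the differences between targeted and measured positioning values) can be sketched as follows, with invented maneuver data:

```python
# Sketch of the accuracy summary for a positioning experiment: signed error
# per maneuver, summarized by median and quartiles. Data are invented.
import statistics

def accuracy_summary(targeted, measured):
    """Return (median error, (Q1, Q3)) of measured-minus-targeted values."""
    errors = [m - t for t, m in zip(targeted, measured)]
    q1, med, q3 = statistics.quantiles(errors, n=4)
    return med, (q1, q3)

# Hypothetical translation maneuvers (mm): targeted vs. tracked positions.
targeted = [5.0, -3.0, 10.0, 0.0, 7.5, -6.0, 2.0, 4.0]
measured = [5.2, -3.1, 10.3, -0.2, 7.5, -6.4, 2.1, 4.3]
med, (q1, q3) = accuracy_summary(targeted, measured)
print(round(med, 2))
```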
NASA Astrophysics Data System (ADS)
Koltsov, A. G.; Shamutdinov, A. H.; Blokhin, D. A.; Krivonos, E. V.
2018-01-01
A new classification of parallel kinematics mechanisms is proposed, based on a symmetry coefficient that is proportional to the mechanism stiffness and to the processing accuracy of the technological equipment under study. A new version of the Stewart platform with a high symmetry coefficient is presented for analysis. The workspace of the mechanism under study is described, this space being a complex solid figure. The workspace end points are reached by the center of the mobile platform, which moves parallel to the base plate. Parameters affecting the processing accuracy, namely the static and dynamic stiffness and the natural vibration frequencies, are determined. An assessment of the mechanism's operating capability under various loads, taking into account resonance phenomena at different points of the workspace, was conducted. The study proved that the stiffness, and therefore the processing accuracy, of the above-mentioned mechanisms is comparable with the stiffness and accuracy of medium-sized series-produced machines.
Three-dimensional repositioning accuracy of semiadjustable articulator cast mounting systems.
Tan, Ming Yi; Ung, Justina Youlin; Low, Ada Hui Yin; Tan, En En; Tan, Keson Beng Choon
2014-10-01
In spite of its importance in prosthesis precision and quality, the 3-dimensional repositioning accuracy of cast mounting systems has not been reported in detail. The purpose of this study was to quantify the 3-dimensional repositioning accuracy of 6 selected cast mounting systems. Five magnetic mounting systems were compared with a conventional screw-on system. Six systems on 3 semiadjustable articulators were evaluated: Denar Mark II with conventional screw-on mounting plates (DENSCR) and magnetic mounting system with converter plates (DENCON); Denar Mark 330 with in-built magnetic mounting system (DENMAG) and disposable mounting plates; and Artex CP with blue (ARTBLU), white (ARTWHI), and black (ARTBLA) magnetic mounting plates. Test casts with 3 high-precision ceramic ball bearings at the mandibular central incisor (Point I) and the right and left second molar (Point R; Point L) positions were mounted on 5 mounting plates (n=5) for all 6 systems. Each cast was repositioned 10 times by 4 operators in random order. Nine linear (Ix, Iy, Iz; Rx, Ry, Rz; Lx, Ly, Lz) and 3 angular (anteroposterior, mediolateral, twisting) displacements were measured with a coordinate measuring machine. The mean standard deviations of the linear and angular displacements defined repositioning accuracy. Anteroposterior linear repositioning accuracy ranged from 23.8 ±3.7 μm (DENCON) to 4.9 ±3.2 μm (DENSCR). Mediolateral linear repositioning accuracy ranged from 46.0 ±8.0 μm (DENCON) to 3.7 ±1.5 μm (ARTBLU), and vertical linear repositioning accuracy ranged from 7.2 ±9.6 μm (DENMAG) to 1.5 ±0.9 μm (ARTBLU). Anteroposterior angular repositioning accuracy ranged from 0.0084 ±0.0080 degrees (DENCON) to 0.0020 ±0.0006 degrees (ARTBLU), and mediolateral angular repositioning accuracy ranged from 0.0120 ±0.0111 degrees (ARTWHI) to 0.0027 ±0.0008 degrees (ARTBLU). Twisting angular repositioning accuracy ranged from 0.0419 ±0.0176 degrees (DENCON) to 0.0042 ±0.0038 degrees (ARTBLA). 
One-way ANOVA found significant differences (P<.05) among all systems for Iy, Ry, Lx, Ly, and twisting. Generally, vertical linear displacements were less likely to reach the threshold of clinical detectability compared with anteroposterior or mediolateral linear displacements. The overall repositioning accuracy of DENSCR was comparable with 4 magnetic mounting systems (DENMAG, ARTBLU, ARTWHI, ARTBLA). DENCON exhibited the worst repositioning accuracy for Iy, Ry, Lx, Ly, and twisting. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Wellenberg, R H H; Boomsma, M F; van Osch, J A C; Vlassenbroek, A; Milles, J; Edens, M A; Streekstra, G J; Slump, C H; Maas, M
2017-05-01
To compare quantitative measures of image quality, in terms of CT number accuracy, noise, signal-to-noise-ratios (SNRs), and contrast-to-noise ratios (CNRs), at different dose levels with filtered-back-projection (FBP), iterative reconstruction (IR), and model-based iterative reconstruction (MBIR) alone and in combination with orthopedic metal artifact reduction (O-MAR) in a total hip arthroplasty (THA) phantom. Scans were acquired from high to low dose (CTDIvol: 40.0, 32.0, 24.0, 16.0, 8.0, and 4.0 mGy) at 120 and 140 kVp. Images were reconstructed using FBP, IR (iDose4 level 2, 4, and 6) and MBIR (IMR, level 1, 2, and 3) with and without O-MAR. CT number accuracy in Hounsfield Units (HU), noise or standard deviation, SNRs, and CNRs were analyzed. The IMR technique showed lower noise levels (p < 0.01), higher SNRs (p < 0.001) and CNRs (p < 0.001) compared with FBP and iDose4 in all acquisitions from high to low dose with constant CT numbers. O-MAR reduced noise (p < 0.01) and improved SNRs (p < 0.01) and CNRs (p < 0.001) while improving CT number accuracy only at a low dose. At the low dose of 4.0 mGy, IMR level 1, 2, and 3 showed 83%, 89%, and 95% lower noise values, a factor of 6.0, 9.2, and 17.9 higher SNRs, and a factor of 5.7, 8.8, and 18.2 higher CNRs compared with FBP, respectively. Based on quantitative analysis of CT number accuracy, noise values, SNRs, and CNRs, we conclude that the combined use of IMR and O-MAR enables a reduction in radiation dose of 83% compared with FBP and iDose4 in the CT imaging of a THA phantom.
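For clarity, the quantitative metrics compared above reduce to simple region-of-interest (ROI) statistics: the ROI mean gives the CT number, its standard deviation gives the noise, and SNR and CNR follow from those. A sketch with invented pixel samples:

```python
# ROI-based image-quality metrics: mean (CT number), noise (SD), SNR, CNR.
# The HU samples below are invented for illustration.
import statistics

def roi_stats(pixels_hu):
    """Return (mean CT number, noise) for a uniform ROI."""
    return statistics.fmean(pixels_hu), statistics.pstdev(pixels_hu)

def snr(mean, noise):
    return mean / noise

def cnr(mean_roi, mean_bg, noise_bg):
    return abs(mean_roi - mean_bg) / noise_bg

tissue = [42.0, 38.0, 41.0, 39.0, 40.0]   # hypothetical soft-tissue ROI (HU)
background = [2.0, -1.0, 1.0, -2.0, 0.0]  # hypothetical water ROI (HU)
m_t, n_t = roi_stats(tissue)
m_b, n_b = roi_stats(background)
print(round(snr(m_t, n_t), 2), round(cnr(m_t, m_b, n_b), 2))
```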
Continuous decoding of human grasp kinematics using epidural and subdural signals
NASA Astrophysics Data System (ADS)
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-02-01
Objective. Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces. Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials (EFPs). Approach. We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as EFPs, with both standard- and high-resolution electrode arrays. Main results. In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either ECoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean ± SD grasp aperture variance accounted for was 0.54 ± 0.05 across all subjects, 0.75 ± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7-20 Hz and 70-115 Hz spectral bands contained the most information about grasp kinematics, with the 70-115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance. To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes. 
Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface.
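The "variance accounted for" figure of merit quoted above for grasp-aperture decoding is commonly computed as 1 - Var(residual) / Var(actual). A minimal sketch, with invented signals:

```python
# Variance accounted for (VAF) between an actual and a decoded trajectory.
# The aperture traces below are invented for illustration.
import statistics

def vaf(actual, decoded):
    """1 - Var(actual - decoded) / Var(actual); 1.0 means perfect decoding."""
    resid = [a - d for a, d in zip(actual, decoded)]
    return 1.0 - statistics.pvariance(resid) / statistics.pvariance(actual)

actual = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]     # hypothetical grasp aperture
decoded = [0.1, 0.9, 2.2, 2.8, 2.1, 1.0, -0.1]   # hypothetical decoder output
print(round(vaf(actual, decoded), 3))
```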
Towards frameless maskless SRS through real-time 6DoF robotic motion compensation.
Belcher, Andrew H; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D
2017-11-13
Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient's skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system's effectiveness in maintaining the target's 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system's effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. 
The system's success with volunteers has demonstrated its capability for implementation with frameless and maskless SRS treatments, potentially able to achieve the same or better treatment accuracies compared to traditional frame-based approaches.
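The headline comparison in the abstract (99.0% vs. 4.7% of time within tolerance) reduces to counting samples whose 6DoF offsets stay inside the 0.5 mm and 0.5 degree thresholds. A sketch with an invented pose series:

```python
# Fraction of samples within 6DoF tolerance: every translation within
# trans_tol (mm) and every rotation within rot_tol (degrees). The pose
# series is invented for illustration.

def fraction_within(poses, trans_tol=0.5, rot_tol=0.5):
    """poses: iterable of (dx, dy, dz, rx, ry, rz) offsets from target."""
    ok = 0
    for dx, dy, dz, rx, ry, rz in poses:
        if (max(abs(dx), abs(dy), abs(dz)) <= trans_tol
                and max(abs(rx), abs(ry), abs(rz)) <= rot_tol):
            ok += 1
    return ok / len(poses)

# Hypothetical compensated motion trace: 99 in-tolerance samples, 1 excursion.
compensated = [(0.1, 0.0, -0.2, 0.1, 0.0, 0.2)] * 99 + [(0.8, 0, 0, 0, 0, 0)]
print(fraction_within(compensated))  # 0.99
```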
Williams, John; Bialer, Meir; Johannessen, Svein I; Krämer, Günther; Levy, René; Mattson, Richard H; Perucca, Emilio; Patsalos, Philip N; Wilson, John F
2003-01-01
To assess interlaboratory variability in the determination of serum levels of new antiepileptic drugs (AEDs). Lyophilised serum samples containing clinically relevant concentrations of felbamate (FBM), gabapentin (GBP), lamotrigine (LTG), the monohydroxy derivative of oxcarbazepine (OCBZ; MHD), tiagabine (TGB), topiramate (TPM), and vigabatrin (VGB) were distributed monthly among 70 laboratories participating in the international Heathcontrol External Quality Assessment Scheme (EQAS). Assay results returned over a 15-month period were evaluated for precision and accuracy. The most frequently measured compound was LTG (65), followed by MHD (39), GBP (19), TPM (18), FBM (16), VGB (15), and TGB (8). High-performance liquid chromatography was the most commonly used assay technique for all drugs except for TPM, for which two thirds of laboratories used a commercial immunoassay. For all assay methods combined, precision was <11% for MHD, FBM, TPM, and LTG, close to 15% for GBP and VGB, and as high as 54% for TGB (p < 0.001). Mean accuracy values were <10% for all drugs other than TGB, for which measured values were on average 13.9% higher than spiked values, with a high variability around the mean (45%). No differences in precision and accuracy were found between methods, except for TPM, for which gas chromatography showed poorer accuracy compared with immunoassay and gas chromatography-mass spectrometry. With the notable exception of TGB, interlaboratory variability in the determination of new AEDs was comparable to that reported with older-generation agents. Poor assay performance is related more to individual operators than to the intrinsic characteristics of the method applied. Participation in an EQAS is recommended to ensure adequate control of assay variability in therapeutic drug monitoring.
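The two EQAS performance measures can be sketched directly: precision as the between-laboratory coefficient of variation, and accuracy as the mean percentage bias of reported values against the spiked concentration. The example values are invented:

```python
# Interlaboratory precision (CV%) and accuracy (% bias vs. spiked target)
# for an external quality assessment round. Example results are invented.
import statistics

def precision_cv(values):
    """Between-laboratory CV (%) = SD / mean * 100."""
    return statistics.stdev(values) / statistics.fmean(values) * 100.0

def accuracy_bias(values, spiked):
    """Mean percentage deviation of reported values from the spiked value."""
    return (statistics.fmean(values) - spiked) / spiked * 100.0

reported = [9.2, 10.1, 9.8, 10.4, 10.0]   # hypothetical lab results (mg/L)
print(round(precision_cv(reported), 1), round(accuracy_bias(reported, 10.0), 1))
```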
NASA Astrophysics Data System (ADS)
Franceschini, M. H. D.; Demattê, J. A. M.; da Silva Terra, F.; Vicente, L. E.; Bartholomeus, H.; de Souza Filho, C. R.
2015-06-01
Spectroscopic techniques have become attractive to assess soil properties because they are fast, require little labor and may reduce the amount of laboratory waste produced when compared to conventional methods. Imaging spectroscopy (IS) can have further advantages compared to laboratory or field proximal spectroscopic approaches such as providing spatially continuous information with a high density. However, the accuracy of IS derived predictions decreases when the spectral mixture of soil with other targets occurs. This paper evaluates the use of spectral data obtained by an airborne hyperspectral sensor (ProSpecTIR-VS - Aisa dual sensor) for prediction of physical and chemical properties of Brazilian highly weathered soils (i.e., Oxisols). A methodology to assess the soil spectral mixture is adapted and a progressive spectral dataset selection procedure, based on bare soil fractional cover, is proposed and tested. Satisfactory performances are obtained especially for the quantification of clay, sand and CEC using airborne sensor data (R2 of 0.77, 0.79 and 0.54; RPD of 2.14, 2.22 and 1.50, respectively), after spectral data selection is performed; although results obtained for laboratory data are more accurate (R2 of 0.92, 0.85 and 0.75; RPD of 3.52, 2.62 and 2.04, for clay, sand and CEC, respectively). Most importantly, predictions based on airborne-derived spectra for which the bare soil fractional cover is not taken into account show considerably lower accuracy, for example for clay, sand and CEC (RPD of 1.52, 1.64 and 1.16, respectively). Therefore, hyperspectral remotely sensed data can be used to predict topsoil properties of highly weathered soils, although spectral mixture of bare soil with vegetation must be considered in order to achieve an improved prediction accuracy.
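The figures of merit used above can be sketched as follows: R² of predicted versus observed values, and RPD as the standard deviation of the observations over the root-mean-square error of prediction (an RPD near 2 or above is commonly read as a usable calibration). The data below are invented:

```python
# R^2 and RPD = SD(observed) / RMSEP for a calibration model. Values are
# invented; e.g. obs could be laboratory clay content (%).
import math
import statistics

def rmsep(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def rpd(obs, pred):
    return statistics.stdev(obs) / rmsep(obs, pred)

def r_squared(obs, pred):
    mean = statistics.fmean(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [10.0, 20.0, 30.0, 40.0, 50.0]
pred = [12.0, 18.0, 31.0, 43.0, 48.0]
print(round(r_squared(obs, pred), 3), round(rpd(obs, pred), 2))
```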
SU-F-T-441: Dose Calculation Accuracy in CT Images Reconstructed with Artifact Reduction Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, C; Chan, S; Lee, F
Purpose: The accuracy of radiotherapy dose calculation in patients with surgical implants is complicated by two factors: the accuracy of the CT numbers, and the accuracy of the dose calculation algorithm. We compared measured dose with dose calculated on CT images reconstructed with FBP and an artifact reduction algorithm (OMAR, Philips) for a phantom with high-density inserts. Dose calculations were done with Varian AAA and AcurosXB (AXB). Methods: A phantom was constructed with solid water in which 2 titanium or stainless steel rods could be inserted. The phantom was scanned with the Philips Brilliance Big Bore CT. Image reconstruction was done with FBP and OMAR. Two 6 MV single-field photon plans were constructed for each phantom. Radiochromic films were placed at different locations to measure the dose deposited. One plan had normal incidence on the titanium/steel rods; in the second plan, the beam was at near-glancing incidence on the metal rods. Measurements were then compared with dose calculated with AAA and AXB. Results: The use of OMAR images slightly improved the dose calculation accuracy. The agreement between measured and calculated dose was best with AXB and images reconstructed with OMAR. Dose calculated on the titanium phantom had better agreement with measurement. Large discrepancies were seen at points directly above and below the high-density inserts. Both AAA and AXB underestimated the dose directly above the metal surface, and overestimated the dose below the metal surface. Doses measured downstream of the metal were all within 3% of calculated values. Conclusion: When planning treatment for patients with metal implants, care must be taken to acquire correct CT images to improve dose calculation accuracy. Moreover, large discrepancies between measured and calculated dose were observed at the metal/tissue interface. Care must be taken in estimating the dose in critical structures that come into contact with metals.
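The film-versus-calculation agreement quoted above (doses "within 3%") is a simple percentage dose difference per measurement point. A sketch with invented dose values:

```python
# Percentage dose difference between film measurement and treatment-planning
# calculation, checked against a 3% criterion. Dose pairs are invented.

def dose_diff_pct(measured, calculated):
    """Signed percentage difference relative to the calculated dose."""
    return (measured - calculated) / calculated * 100.0

# Hypothetical (film, calculated) dose pairs in Gy:
points = [(2.05, 2.00), (1.94, 2.00), (2.12, 2.00)]
diffs = [dose_diff_pct(m, c) for m, c in points]
print([round(d, 1) for d in diffs], all(abs(d) <= 3.0 for d in diffs))
```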
Researches on High Accuracy Prediction Methods of Earth Orientation Parameters
NASA Astrophysics Data System (ADS)
Xu, X. Q.
2015-09-01
The Earth's rotation reflects the coupling among the solid Earth, atmosphere, oceans, mantle, and core on multiple spatial and temporal scales. It can be described by the Earth orientation parameters (EOP), mainly comprising the two polar motion components PM_X and PM_Y and the variation in the length of day ΔLOD. The EOP are crucial in the transformation between the terrestrial and celestial reference systems, and have important applications in many areas such as deep space exploration, satellite precise orbit determination, and astrogeodynamics. However, the EOP products obtained by space geodetic technologies are generally delayed by several days to two weeks. The growing demands of modern space navigation make high-accuracy EOP prediction a worthy topic. This thesis comprises the following three aspects, with the purpose of improving EOP forecast accuracy. (1) We analyze the relation between the length of the basic data series and the EOP forecast accuracy, and compare the EOP prediction accuracy of the linear autoregressive (AR) model and the nonlinear artificial neural network (ANN) method when performing least squares (LS) extrapolations. The results show that high-precision EOP forecasts can be realized by appropriate selection of the basic data series length according to the required prediction span: for short-term prediction, the basic data series should be shorter, while for long-term prediction it should be longer. The analysis also shows that the LS+AR model is more suitable for short-term forecasts, while the LS+ANN model shows advantages in medium- and long-term forecasts. (2) We develop for the first time a new method that combines the autoregressive model and the Kalman filter (AR+Kalman) for short-term EOP prediction.
The equations of observation and state are established using the EOP series and the autoregressive coefficients, respectively, and are used to improve/re-evaluate the AR model. Compared with the single AR model, the AR+Kalman method performs better in the prediction of UT1-UTC and ΔLOD, and the improvement in the prediction of polar motion is significant. (3) Following the successful Earth Orientation Parameter Prediction Comparison Campaign (EOP PCC), the Earth Orientation Parameter Combination of Prediction Pilot Project (EOPC PPP) was launched in 2010. As one of the participants from China, we update and submit short- and medium-term (1 to 90 days) EOP predictions every day. Current comparative statistics place our prediction accuracy at a medium international level. We will carry out further innovative research to improve EOP forecast accuracy and enhance our standing in EOP forecasting.
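The LS+AR combination described in the abstract above can be sketched at toy scale: fit a least-squares trend to the series, model the residuals with a low-order autoregression, and extrapolate both. This is an illustrative reconstruction, not the thesis code; the function name and the AR(1) simplification are our own assumptions.

```python
# Illustrative sketch of an LS+AR forecast: linear least-squares trend
# plus an AR(1) model fitted to the trend residuals. Real EOP work uses
# richer LS models (periodic terms) and higher AR orders.

def ls_ar_forecast(series, horizon):
    """Extrapolate a series as LS linear trend + decaying AR(1) residual."""
    n = len(series)
    t = list(range(n))
    # Linear least-squares fit: y = a + b*t
    tbar = sum(t) / n
    ybar = sum(series) / n
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, series)) / \
        sum((ti - tbar) ** 2 for ti in t)
    a = ybar - b * tbar
    resid = [yi - (a + b * ti) for ti, yi in zip(t, series)]
    # AR(1) coefficient by least squares on lagged residuals
    num = sum(resid[i] * resid[i - 1] for i in range(1, n))
    den = sum(r * r for r in resid[:-1])
    phi = num / den if den else 0.0
    # Forecast: extrapolated trend plus geometrically decaying residual
    out = []
    r = resid[-1]
    for h in range(1, horizon + 1):
        r *= phi
        out.append(a + b * (n - 1 + h) + r)
    return out
```

For a purely linear input the residuals vanish and the forecast simply continues the trend; the AR term only matters when the residuals are serially correlated.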
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, Alex H.; Betcke, Timo; School of Mathematics, University of Manchester, Manchester, M13 9PL
2007-12-15
We report the first large-scale statistical study of very high-lying eigenmodes (quantum states) of the mushroom billiard proposed by L. A. Bunimovich [Chaos 11, 802 (2001)]. The phase space of this mixed system is unusual in that it has a single regular region and a single chaotic region, and no KAM hierarchy. We verify Percival's conjecture to high accuracy (1.7%). We propose a model for dynamical tunneling and show that it predicts well the chaotic components of predominantly regular modes. Our model explains our observed density of such superpositions dying as E^(-1/3) (E is the eigenvalue). We compare eigenvalue spacing distributions against Random Matrix Theory expectations, using 16 000 odd modes (an order of magnitude more than any existing study). We outline new variants of mesh-free boundary collocation methods which enable us to achieve high accuracy and high mode numbers (~10^5) orders of magnitude faster than with competing methods.
Bolormaa, S; Pryce, J E; Kemper, K; Savin, K; Hayes, B J; Barendse, W; Zhang, Y; Reich, C M; Mason, B A; Bunch, R J; Harrison, B E; Reverter, A; Herd, R M; Tier, B; Graser, H-U; Goddard, M E
2013-07-01
The aim of this study was to assess the accuracy of genomic predictions for 19 traits including feed efficiency, growth, and carcass and meat quality traits in beef cattle. The 10,181 cattle in our study had real or imputed genotypes for 729,068 SNP, although not all cattle were measured for all traits. Animals included Bos taurus, Brahman, composite, and crossbred animals. Genomic EBV (GEBV) were calculated using 2 methods of genomic prediction [BayesR and genomic BLUP (GBLUP)], using either a common training dataset for all breeds or a training dataset comprising only animals of the same breed. Accuracies of GEBV were assessed using 5-fold cross-validation. The accuracy of genomic prediction varied by trait and by method. Traits with a large number of recorded and genotyped animals and with high heritability gave the greatest accuracy of GEBV. Using GBLUP, the average accuracy was 0.27 across traits and breeds, but the accuracies between breeds and between traits varied widely. When the training population was restricted to animals from the same breed as the validation population, GBLUP accuracies declined by an average of 0.04. The greatest decline in accuracy was found for the 4 composite breeds. The BayesR accuracies were greater by an average of 0.03 than the GBLUP accuracies, particularly for traits with mutations of moderate to large effect known to be segregating. The accuracies of 0.43 to 0.48 for IGF-I traits were among the greatest in the study. Although accuracies are low compared with those observed in dairy cattle, genomic selection would still be beneficial for traits that are hard to improve by conventional selection, such as tenderness and residual feed intake. BayesR identified many of the same quantitative trait loci as a genomewide association study but appeared to map them more precisely. All traits appear to be highly polygenic, with thousands of SNP independently associated with each trait.
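The 5-fold cross-validation protocol used above to assess GEBV accuracy can be illustrated generically: hold out one fold, predict it, and report the correlation between predictions and observed phenotypes. This is a minimal sketch under our own naming; the actual BayesR/GBLUP prediction step is replaced by an arbitrary `predict` callable.

```python
# Hedged sketch of k-fold cross-validated prediction accuracy, where
# "accuracy" is the held-out correlation between predictions and
# phenotypes, as commonly reported for genomic EBV.

def kfold_indices(n, k=5):
    """Split indices 0..n-1 into k contiguous folds."""
    size, rem = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        stop = start + size + (1 if i < rem else 0)
        folds.append(list(range(start, stop)))
        start = stop
    return folds

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def cv_accuracy(predict, phenotypes, k=5):
    """Mean held-out correlation across k folds."""
    accs = []
    for fold in kfold_indices(len(phenotypes), k):
        preds = [predict(i) for i in fold]
        obs = [phenotypes[i] for i in fold]
        accs.append(pearson(preds, obs))
    return sum(accs) / len(accs)
```

A predictor that is a perfect linear function of the phenotype scores 1.0; real genomic predictors land well below that, as the reported accuracies of roughly 0.27 to 0.48 show.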
High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.
Zhu, Xiangbin; Qiu, Huiling
2016-01-01
Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments, and cyber security. However, the classification accuracy of most existing methods is insufficient for some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method comprises coarse, fine, and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data; it extracts more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is clearly improved.
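At the core of any HMM-based activity classifier like the one above is scoring an observation sequence under each candidate activity model and picking the best. A minimal sketch of that scoring step is the scaled forward algorithm; discrete observations are used here for brevity (the paper's model is continuous), and all structure beyond the standard algorithm is our assumption.

```python
import math

# Scaled forward algorithm for a discrete-observation HMM: returns the
# log-likelihood of the sequence. An activity classifier would compute
# this under one trained model per activity and take the argmax.

def forward_loglike(obs, start, trans, emit):
    """Log P(obs | model) via the forward algorithm with rescaling."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    scale = sum(alpha)
    ll = math.log(scale)
    alpha = [a / scale for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
        scale = sum(alpha)       # rescale each step to avoid underflow
        ll += math.log(scale)
        alpha = [a / scale for a in alpha]
    return ll
```

The rescaling trick keeps the recursion numerically stable for long sensor streams, where raw probabilities would underflow to zero.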
A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.
Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei
2018-01-01
Appropriate site-specific weed management (SSWM) is crucial to ensuring crop yields. Within SSWM of large-scale areas, remote sensing is a key technology for providing accurate weed distribution information. Compared with satellite and piloted-aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high-spatial-resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. The fully convolutional network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and skip architecture was applied to increase prediction accuracy. The performance of the FCN architecture was then compared with a patch-based CNN algorithm and a pixel-based CNN method. Experimental results showed that our FCN method outperformed the others, both in accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935 and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
Lee, David; Park, Sang-Hoon; Lee, Sang-Goog
2017-10-07
In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
Trakoolwilaiwan, Thanawin; Behboodi, Bahareh; Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong
2018-01-01
The aim of this work is to develop an effective brain-computer interface (BCI) method based on functional near-infrared spectroscopy (fNIRS). In order to improve the performance of the BCI system in terms of accuracy, the ability to discriminate features from input signals and proper classification are desired. Previous studies have mainly extracted features from the signal manually, but proper features need to be selected carefully. To avoid performance degradation caused by manual feature selection, we applied convolutional neural networks (CNNs) as the automatic feature extractor and classifier for fNIRS-based BCI. In this study, the hemodynamic responses evoked by performing rest, right-, and left-hand motor execution tasks were measured on eight healthy subjects to compare performances. Our CNN-based method provided improvements in classification accuracy over conventional methods employing the most commonly used features of mean, peak, slope, variance, kurtosis, and skewness, classified by support vector machine (SVM) and artificial neural network (ANN). Specifically, up to 6.49% and 3.33% improvement in classification accuracy was achieved by CNN compared with SVM and ANN, respectively.
ERIC Educational Resources Information Center
Mazefsky, Carla A.; Oswald, Donald P.
2007-01-01
This study compared emotion perception accuracy between children with Asperger's syndrome (AS) and high-functioning autism (HFA). Thirty children were diagnosed with AS or HFA based on empirically supported diagnostic criteria and administered an emotion perception test consisting of facial expressions and tone of voice cues that varied in…
Integrated CFD and Controls Analysis Interface for High Accuracy Liquid Propellant Slosh Predictions
NASA Technical Reports Server (NTRS)
Marsell, Brandon; Griffin, David; Schallhorn, Dr. Paul; Roth, Jacob
2012-01-01
Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool and the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.
An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.
Obuchowski, Nancy A
2006-02-15
ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.
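The estimator described above has an interpretation analogous to the area under the ROC curve, which for a continuous gold standard suggests a pairwise concordance: over all patient pairs, the fraction in which the test ranks the pair the same way the gold standard does, with ties scoring one half. The sketch below is our generic concordance estimator; the paper's exact estimator may differ in details such as tie handling.

```python
# Concordance-type accuracy for a continuous-scale gold standard: the
# probability that the test orders a random patient pair the same way
# the gold standard does. Gold-standard ties are uninformative and
# are skipped; test ties score 0.5.

def concordance(test, gold):
    """Pairwise ordering agreement between test and gold standard."""
    n = len(test)
    score = pairs = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if gold[i] == gold[j]:
                continue  # gold standard cannot order this pair
            pairs += 1
            d_test = test[i] - test[j]
            d_gold = gold[i] - gold[j]
            if d_test == 0:
                score += 0.5
            elif (d_test > 0) == (d_gold > 0):
                score += 1.0
    return score / pairs
```

A test that perfectly preserves the gold-standard ordering scores 1.0, a perfectly reversed one 0.0, and an uninformative one about 0.5, mirroring the familiar AUC scale.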
Zhang, Hui-Rong; Yin, Le-Feng; Liu, Yan-Li; Yan, Li-Yi; Wang, Ning; Liu, Gang; An, Xiao-Li; Liu, Bin
2018-04-01
The aim of this study was to build a digital dental model from cone beam computed tomography (CBCT), to fabricate a physical model via 3D printing, and to determine the accuracy of the 3D-printed dental model by comparing it with a traditional dental cast. CBCT scans of orthodontic patients were obtained to build digital dental models using Mimics 10.01 and Geomagic Studio software. The models were fabricated via the fused deposition modeling (FDM) technique and compared with the traditional cast models using a Vernier caliper. The measurements used for comparison included the width of each tooth, the length and width of the maxillary and mandibular arches, and the length of the posterior dental crest. The 3D-printed models had high accuracy compared with the traditional cast models; the paired t-test of all data showed no statistically significant difference between the two groups (P>0.05). Digital dental models built with CBCT enable digital storage of patients' dental conditions. The virtual dental model fabricated via 3D printing avoids traditional impressions and simplifies the clinical examination process. The 3D-printed dental models produced via FDM show a high degree of accuracy, and these models are thus appropriate for clinical practice.
Anand, Rishi; Gorev, Maxim V; Poghosyan, Hermine; Pothier, Lindsay; Matkins, John; Kotler, Gregory; Moroz, Sarah; Armstrong, James; Nemtsov, Sergei V; Orlov, Michael V
2016-08-01
To compare the efficacy and accuracy of rotational angiography with three-dimensional reconstruction (3DATG) image merged with electro-anatomical mapping (EAM) vs. CT-EAM. A prospective, randomized, parallel, two-center study was conducted in 36 patients (25 men, age 65 ± 10 years) undergoing AF ablation (33 % paroxysmal, 67 % persistent) guided by 3DATG (group 1) vs. CT (group 2) image fusion with EAM. 3DATG was performed on the Philips Allura Xper FD 10 system. Procedural characteristics including time, radiation exposure, outcome, and navigation accuracy were compared between the two groups. There was no significant difference between the groups in total procedure duration or time spent on the various procedural steps. Minor differences in procedural characteristics were present between the two centers. Segmentation and fusion time for 3DATG-EAM or CT-EAM was short and similar at both centers. Accuracy of navigation guided by either method was high and did not depend on left atrial size. Maintenance of sinus rhythm did not differ between the two groups up to 24 months of follow-up. This study did not find superiority of 3DATG-EAM image merge to guide AF ablation when compared to CT-EAM fusion. Both merging techniques result in similar navigation accuracy.
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto imagery, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and high-accuracy reference data are scarce, and thus difficult to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for expanding its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto imagery using testing points with different accuracies and reliabilities, derived from high-accuracy reference data and from field measurement. The new method solves the horizontal accuracy detection of orthophoto imagery in difficult areas and provides a basis for delivering reliable orthophoto images to users.
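One natural way to combine check points of unequal quality, as the abstract proposes, is to weight each point's squared horizontal error by the inverse variance of its source (field survey vs. reference data) and report a single weighted RMSE. The weighting scheme below is our illustration of that idea, not necessarily the paper's exact formulation.

```python
# Hypothetical sketch: weighted horizontal RMSE over check points whose
# individual accuracies differ. Each point i contributes its squared
# error weighted by 1/sigma_i^2, so less reliable points count less.

def weighted_rmse(errors, sigmas):
    """Weighted RMSE; weights are inverse variances of the check points."""
    weights = [1.0 / (s * s) for s in sigmas]
    num = sum(w * e * e for w, e in zip(weights, errors))
    return (num / sum(weights)) ** 0.5
```

With equal sigmas this reduces to the ordinary RMSE, which is the conventional single-accuracy case the paper generalizes.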
The effect of letter string length and report condition on letter recognition accuracy.
Raghunandan, Avesh; Karmazinaite, Berta; Rossow, Andrea S
Letter sequence recognition accuracy has been postulated to be limited primarily by low-level visual factors; the influence of high-level factors such as visual memory (load and decay) has been largely overlooked. This study provides insight into the role of these factors by investigating the interaction between letter sequence recognition accuracy, letter string length, and report condition. Letter sequence recognition accuracy for trigrams and pentagrams was measured in 10 adult subjects under two report conditions. In the complete report condition, subjects reported all 3 or all 5 letters comprising the trigrams and pentagrams, respectively; in the partial report condition, subjects reported only a single letter in the trigram or pentagram. Letters were presented for 100 ms and rendered in high-contrast black lowercase Courier font that subtended 0.4° at the fixation distance of 0.57 m. Recognition accuracy was consistently higher for trigrams than for pentagrams, especially for letter positions away from fixation. While partial report increased recognition accuracy in both string length conditions, the effect was larger for pentagrams and most evident for the final letter positions within trigrams and pentagrams. The effect of partial report on recognition accuracy for the final letter positions increased with eccentricity away from fixation and was independent of the inner/outer position of a letter. Higher-level visual memory functions (memory load and decay) play a role in letter sequence recognition accuracy. There is also a suggestion of additional delays imposed on memory encoding by crowded letter elements.
Accuracy evaluation of 3D lidar data from small UAV
NASA Astrophysics Data System (ADS)
Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav
2015-10-01
A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
Rapid Transfer Alignment of MEMS SINS Based on Adaptive Incremental Kalman Filter.
Chu, Hairong; Sun, Tingting; Zhang, Baiqiang; Zhang, Hongwei; Chen, Yang
2017-01-14
In airborne MEMS SINS transfer alignment, the error of the MEMS IMU is highly environment-dependent and the parameters of the system model are also uncertain, which may lead to large errors and poor convergence of the Kalman filter. To solve this problem, an improved adaptive incremental Kalman filter (AIKF) algorithm is proposed. First, the model of SINS transfer alignment is defined based on the "Velocity and Attitude" matching method. Then the detailed algorithm of AIKF and its recurrence formulas are presented, and the performance and computational cost of AKF and AIKF are compared. Finally, a simulation test is designed to verify the accuracy and rapidity of the AIKF algorithm by comparing it with KF and AKF. The results show that the AIKF algorithm has better estimation accuracy and shorter convergence time, especially for the biases of the gyroscope and the accelerometer, and can meet the accuracy and rapidity requirements of transfer alignment.
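Underneath the AIKF refinements described above sits the standard Kalman predict/update cycle. A bare-bones scalar version is sketched here to show that cycle; the adaptive and incremental machinery of the paper, and all numeric values, are omitted or invented for illustration.

```python
# Minimal scalar Kalman filter estimating a (nearly) constant state
# from noisy measurements: predict (inflate covariance by process
# noise q), then update (blend in measurement z with gain k).

def kalman_1d(measurements, q=1e-5, r=0.1, x0=0.0, p0=1.0):
    """Return the sequence of state estimates after each measurement."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: constant state, process noise q
        k = p / (p + r)           # Kalman gain from predicted covariance
        x = x + k * (z - x)       # update state toward the measurement
        p = (1.0 - k) * p         # shrink covariance after the update
        estimates.append(x)
    return estimates
```

Feeding it a stream of identical measurements shows the convergence behavior the abstract's simulation evaluates: the estimate moves from the prior toward the measured value as the gain adapts.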
Hua, Zhi-Gang; Lin, Yan; Yuan, Ya-Zhou; Yang, De-Chang; Wei, Wen; Guo, Feng-Biao
2015-01-01
In 2003, we developed an ab initio program, ZCURVE 1.0, to find genes in bacterial and archaeal genomes. In this work, we present the updated version, ZCURVE 3.0. On 422 prokaryotic genomes, the average accuracy was 93.7% with the updated version, compared with 88.7% with the original version. These results also demonstrate that ZCURVE 3.0 is comparable with Glimmer 3.02 and may provide complementary predictions to it. In fact, joint application of the two programs generated better results, correctly finding more annotated genes while producing fewer false-positive predictions. As an exclusive feature, ZCURVE 3.0 contains a post-processing program that can identify essential genes with high accuracy (generally >90%). We hope ZCURVE 3.0 will see wide use through its web-based running mode. The updated ZCURVE can be freely accessed from http://cefg.uestc.edu.cn/zcurve/ or http://tubic.tju.edu.cn/zcurveb/ without any restrictions. PMID:25977299
SPHINX--an algorithm for taxonomic binning of metagenomic sequences.
Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Singh, Nitin Kumar; Mande, Sharmila S
2011-01-01
Compared with composition-based binning algorithms, the binning accuracy and specificity of alignment-based binning algorithms are significantly higher. However, being alignment-based, the latter class of algorithms requires enormous amounts of time and computing resources for binning huge metagenomic datasets. The motivation was to develop a binning approach that can analyze metagenomic datasets as rapidly as composition-based approaches but with the accuracy and specificity of alignment-based algorithms. This article describes a hybrid binning approach (SPHINX) that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. Validation results with simulated sequence datasets indicate that SPHINX analyzes metagenomic sequences as rapidly as composition-based algorithms. Furthermore, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX is comparable with results obtained using alignment-based algorithms. A web server for the SPHINX algorithm is available at http://metagenomics.atc.tcs.com/SPHINX/.
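The 'composition' half of a hybrid binner like the one described above can be illustrated with a toy: score a read against reference bins by k-mer-frequency similarity, and reserve the slower alignment step for reads whose composition scores are ambiguous. The thresholds, names, and fallback rule below are our assumptions, not SPHINX's actual algorithm.

```python
# Toy composition-based binning: nearest bin by L1 distance between
# tetranucleotide-frequency profiles, with an "ambiguous" escape hatch
# standing in for the alignment-based stage of a hybrid approach.

def kmer_profile(seq, k=4):
    """Normalized k-mer frequency profile of a DNA string."""
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[kmer] = counts.get(kmer, 0) + 1
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def profile_distance(p, q):
    """L1 distance between two k-mer profiles."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def assign_bin(read, bin_profiles, margin=0.05):
    """Best bin by composition, or None if too close to call."""
    scored = sorted((profile_distance(kmer_profile(read), prof), name)
                    for name, prof in bin_profiles.items())
    if len(scored) > 1 and scored[1][0] - scored[0][0] < margin:
        return None  # ambiguous: defer to alignment-based binning
    return scored[0][1]
```

Composition scoring like this is fast because it never touches the reference sequences themselves, which is exactly the speed advantage the abstract attributes to composition-based methods.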
NASA Astrophysics Data System (ADS)
Yang, Zhiyong; Zhang, Jianbao; Xie, Yongjie; Zhang, Boming; Sun, Baogang; Guo, Hongjun
2017-12-01
Carbon fiber reinforced polymer (CFRP) composite materials have been used to fabricate space mirrors. Usually the composite space mirror can closely replicate the high-precision surface of the mould through the replication process, but the actual surface accuracy of the replicated mirror is always reduced, and this still needs further study. We studied the error introduced by lay-up and curing into the surface accuracy of the space mirror through comparative experiments and analyses; the lay-up and curing factors include curing temperature, cooling rate during curing, prepreg lay-up method, and fiber areal weight. Focusing on these four factors, we analyzed how each influences the error and put forward corresponding control measures to improve the surface figure of the space mirror. For comparative analysis, six CFRP composite mirrors were fabricated and their surface profiles were measured. Four guiding control measures are described here. The curing process of the composite space mirror is our next focus.
Greenhouse, Bryan; Dokomajilar, Christian; Hubbard, Alan; Rosenthal, Philip J; Dorsey, Grant
2007-09-01
Antimalarial clinical trials use genotyping techniques to distinguish new infection from recrudescence. In areas of high transmission, the accuracy of genotyping may be compromised due to the high number of infecting parasite strains. We compared the accuracies of genotyping methods, using up to six genotyping markers, to assign outcomes for two large antimalarial trials performed in areas of Africa with different transmission intensities. We then estimated the probability of genotyping misclassification and its effect on trial results. At a moderate-transmission site, three genotyping markers were sufficient to generate accurate estimates of treatment failure. At a high-transmission site, even with six markers, estimates of treatment failure were 20% for amodiaquine plus artesunate and 17% for artemether-lumefantrine, regimens expected to be highly efficacious. Of the observed treatment failures for these two regimens, we estimated that at least 45% and 35%, respectively, were new infections misclassified as recrudescences. Increasing the number of genotyping markers improved the ability to distinguish new infection from recrudescence at a moderate-transmission site, but using six markers appeared inadequate at a high-transmission site. Genotyping-adjusted estimates of treatment failure from high-transmission sites may represent substantial overestimates of the true risk of treatment failure.
OpenMEEG: opensource software for quasistatic bioelectromagnetics.
Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen
2010-09-06
Interpreting and controlling bioelectromagnetic phenomena require realistic physiological models and accurate numerical solvers. A semi-realistic model often used in practice is the piecewise constant conductivity model, for which only the interfaces have to be meshed. This simplified model makes it possible to use Boundary Element Methods. Unfortunately, most Boundary Element solutions are confronted with accuracy issues when the conductivity ratio between neighboring tissues is high, as for instance the scalp/skull conductivity ratio in electro-encephalography. To overcome this difficulty, we proposed a new method called the symmetric BEM, which is implemented in the OpenMEEG software. The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performance with other competing software packages. We have run a benchmark study in the field of electro- and magneto-encephalography in order to compare the accuracy of OpenMEEG with other freely distributed forward solvers. We considered spherical models, for which analytical solutions exist, and we designed randomized meshes to assess the variability of the accuracy. Two measures were used to characterize the accuracy: the Relative Difference Measure and the Magnitude ratio. The comparisons were run either with a constant number of mesh nodes or with a constant number of unknowns across methods. Computing times were also compared. We observed more pronounced differences in accuracy in electroencephalography than in magnetoencephalography. The methods could be classified in three categories: the linear collocation methods, which run very fast but with low accuracy; the linear collocation methods with isolated skull approach, for which the accuracy is improved; and OpenMEEG, which clearly outperforms the others. 
As far as speed is concerned, OpenMEEG is on par with the other methods for a constant number of unknowns, and is hence faster for a prescribed accuracy level. This study clearly shows that OpenMEEG represents the state of the art for forward computations. Moreover, our software development strategies have made it handy to use and to integrate with other packages. The bioelectromagnetic research community should therefore be able to benefit from OpenMEEG with a limited development effort.
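The two accuracy measures named above have standard definitions in the forward-modeling literature; the following sketch (formulas assumed from common usage, not extracted from the paper) shows how they are typically computed for a numerical versus an analytical solution vector:

```python
import numpy as np

def rdm(g_num, g_ana):
    """Relative Difference Measure: distance between the two solutions
    after unit normalization, so it captures topography errors only.
    0 means identical shape; the worst case is 2."""
    g_num, g_ana = np.asarray(g_num, float), np.asarray(g_ana, float)
    return np.linalg.norm(g_num / np.linalg.norm(g_num)
                          - g_ana / np.linalg.norm(g_ana))

def mag(g_num, g_ana):
    """Magnitude ratio: 1 means the numerical solution has the correct
    overall amplitude; shape differences are ignored."""
    return np.linalg.norm(g_num) / np.linalg.norm(g_ana)
```

A solver that merely rescales the true solution has RDM 0 but MAG ≠ 1, which is why the two measures are reported together.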
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
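The affine-invariance claim can be checked numerically: transforming training and test data by any non-singular affine map y = Ax + b leaves Gaussian maximum likelihood decisions unchanged, because the Mahalanobis terms are preserved and the log-determinant shift is common to all classes. A small self-contained sketch (synthetic data, not AVIRIS):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic Gaussian classes in a 3-band "radiance" space.
X0 = rng.normal(0.0, 1.0, (200, 3))
X1 = rng.normal(1.5, 1.0, (200, 3))
Xte = rng.normal(0.75, 1.5, (100, 3))

def ml_labels(train0, train1, test):
    """Gaussian maximum likelihood classification with per-class
    mean and covariance estimated from the training samples."""
    def loglik(train, pts):
        mu = train.mean(axis=0)
        S = np.cov(train, rowvar=False)
        d = pts - mu
        return (-0.5 * np.einsum('ij,jk,ik->i', d, np.linalg.inv(S), d)
                - 0.5 * np.log(np.linalg.det(S)))
    return (loglik(train1, test) > loglik(train0, test)).astype(int)

# Non-singular affine transformation y = A x + b.
A = rng.normal(size=(3, 3)) + 3.0 * np.eye(3)
b = rng.normal(size=3)
affine = lambda X: X @ A.T + b

same = np.array_equal(ml_labels(X0, X1, Xte),
                      ml_labels(affine(X0), affine(X1), affine(Xte)))
```

`same` comes out True: the class assignments are identical before and after the transformation, matching the paper's argument.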
High-accuracy reference standards for two-photon absorption in the 680–1050 nm wavelength range
de Reguardati, Sophie; Pahapill, Juri; Mikhailov, Alexander; Stepanenko, Yuriy; Rebane, Aleksander
2016-01-01
Degenerate two-photon absorption (2PA) of a series of organic fluorophores is measured using a femtosecond fluorescence excitation method in the wavelength range λ2PA = 680–1050 nm at a ~100 MHz pulse repetition rate. The relative 2PA spectral shape is obtained with an estimated accuracy of 5%, and the absolute 2PA cross section is measured at selected wavelengths with an accuracy of 8%. This significant improvement in accuracy is achieved by rigorous evaluation of the quadratic dependence of the fluorescence signal on the incident photon flux over the whole wavelength range, by comparing results obtained from two independent experiments, and by meticulous evaluation of critical experimental parameters, including the spatial and temporal pulse shape of the excitation, the laser power, and the sample geometry. Application of the reference standards in nonlinear transmittance measurements is discussed. PMID:27137334
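The quadratic-dependence check central to the method can be illustrated with a log-log fit: for pure two-photon excitation the fluorescence signal scales as the square of the incident photon flux, so the fitted power-law exponent should be 2. A toy, noise-free sketch (values hypothetical):

```python
import numpy as np

flux = np.linspace(1.0, 10.0, 20)       # incident photon flux, arb. units
signal = 0.37 * flux**2                 # idealized two-photon fluorescence

# Power-law exponent = slope of a linear fit in log-log coordinates.
slope, intercept = np.polyfit(np.log(flux), np.log(signal), 1)
print(round(slope, 3))  # → 2.0
```

In a real measurement, a fitted exponent deviating from 2 flags saturation, reabsorption, or one-photon contamination.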
NASA Astrophysics Data System (ADS)
Yang, Zhiyong; Tang, Zhanwen; Xie, Yongjie; Shi, Hanqiao; Zhang, Boming; Guo, Hongjun
2018-02-01
A composite space mirror can fully replicate the high-precision surface of its mould through the replication process, but the actual surface accuracy of the replicated composite mirror always decreases. The lamina thickness of the prepreg determines the number of layers and the layup sequence of a composite space mirror, which in turn affect its surface accuracy. In our research, two groups of contrasting cases were studied through finite element analyses (FEA) and comparative experiments, focusing on the effect of different prepreg lamina thicknesses and the corresponding lay-up sequences. We describe the analysis model, the validation process, and the analysis of the results. The simulated and the measured surface figures lead to the same conclusion: reducing the lamina thickness of the prepreg used in replicating a composite space mirror facilitates optimal design of the layup sequence and can improve the mirror's surface accuracy.
A third-order gas-kinetic CPR method for the Euler and Navier-Stokes equations on triangular meshes
NASA Astrophysics Data System (ADS)
Zhang, Chao; Li, Qibing; Fu, Song; Wang, Z. J.
2018-06-01
A third-order accurate gas-kinetic scheme based on the correction procedure via reconstruction (CPR) framework is developed for the Euler and Navier-Stokes equations on triangular meshes. The scheme combines the accuracy and efficiency of the CPR formulation with the multidimensional characteristics and robustness of the gas-kinetic flux solver. Compared with high-order finite volume gas-kinetic methods, the current scheme is more compact and efficient by avoiding wide stencils on unstructured meshes. Unlike the traditional CPR method where the inviscid and viscous terms are treated differently, the inviscid and viscous fluxes in the current scheme are coupled and computed uniformly through the kinetic evolution model. In addition, the present scheme adopts a fully coupled spatial and temporal gas distribution function for the flux evaluation, achieving high-order accuracy in both space and time within a single step. Numerical tests with a wide range of flow problems, from nearly incompressible to supersonic flows with strong shocks, for both inviscid and viscous problems, demonstrate the high accuracy and efficiency of the present scheme.
NASA Astrophysics Data System (ADS)
Huang, S. S.; Huang, C. F.; Huang, K. N.; Young, M. S.
2002-10-01
A highly accurate binary frequency-shift-keyed (BFSK) ultrasonic distance measurement system (UDMS) for use in isothermal air is described. This article presents an efficient algorithm that combines the time-of-flight (TOF) method and the phase-shift method. The proposed method achieves a larger measurement range than the phase-shift method and higher accuracy than the TOF method. A single-chip microcomputer-based BFSK signal generator and phase detector was designed to record and compute the TOF, two phase shifts, and the resulting distance, which were then sent either to an LCD for display or to a PC for calibration. Experiments were done in air using BFSK at frequencies of 40 and 41 kHz. A distance resolution of 0.05% of the wavelength corresponding to the 40 kHz frequency was obtained. The range accuracy was found to be within ±0.05 mm over a range of 6000 mm. The main advantages of this UDMS are high resolution, low cost, narrow bandwidth requirement, and ease of implementation.
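The combination described, TOF for coarse range and phase shift for fine resolution, can be sketched as follows. The TOF estimate only needs to be accurate to within half a wavelength to fix the integer cycle count, after which the phase supplies the fraction (the constants and the simulated timing error are assumed for illustration):

```python
import numpy as np

C = 346.0            # speed of sound in air, m/s (assumed, ~25 °C)
F = 40_000.0         # carrier frequency, Hz
LAM = C / F          # wavelength, about 8.65 mm

def combined_range(tof_s, phase_rad):
    """Coarse-plus-fine ranging: the TOF fixes the integer number of
    carrier wavelengths, the phase shift supplies the fraction."""
    d_coarse = C * tof_s                      # low-precision TOF distance
    frac = (phase_rad / (2 * np.pi)) * LAM    # sub-wavelength distance
    n = round((d_coarse - frac) / LAM)        # integer wavelength count
    return n * LAM + frac

d_true = 1.2345
tof = d_true / C + 2e-6                       # TOF with ~0.7 mm timing error
phase = 2 * np.pi * ((d_true % LAM) / LAM)    # ideal measured phase
d_est = combined_range(tof, phase)
```

The 0.7 mm TOF error is absorbed into the correct integer cycle count, and the recovered distance matches `d_true` to floating-point precision; the scheme fails only when the TOF error exceeds half a wavelength.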
Variability of Diabetes Alert Dog Accuracy in a Real-World Setting
Gonder-Frederick, Linda A.; Grabman, Jesse H.; Shepard, Jaclyn A.; Tripathi, Anand V.; Ducar, Dallas M.; McElgunn, Zachary R.
2017-01-01
Background: Diabetes alert dogs (DADs) are growing in popularity as an alternative method of glucose monitoring for individuals with type 1 diabetes (T1D). Only a few empirical studies have assessed DAD accuracy, with inconsistent results. The present study examined DAD accuracy and variability in performance in real-world conditions using a convenience sample of owner-report diaries. Method: Eighteen DAD owners (44.4% female; 77.8% youth) with T1D completed diaries of DAD alerts during the first year after placement. Diary entries included daily BG readings and DAD alerts. For each DAD, percentage hits (alert with BG ≤ 5.0 or ≥ 11.1 mmol/L; ≤90 or ≥200 mg/dl), percentage misses (no alert with BG out of range), and percentage false alarms (alert with BG in range) were computed. Sensitivity, specificity, positive likelihood ratio (PLR), and true positive rates were also calculated. Results: Overall comparison of DAD Hits to Misses yielded significantly more Hits for both low and high BG. Total sensitivity was 57.0%, with increased sensitivity to low BG (59.2%) compared to high BG (56.1%). Total specificity was 49.3% and PLR = 1.12. However, high variability in accuracy was observed across DADs, with low BG sensitivity ranging from 33% to 100%. Number of DADs achieving ≥ 60%, 65% and 70% true positive rates was 71%, 50% and 44%, respectively. Conclusions: DADs may be able to detect out-of-range BG, but variability across DADs is evident. Larger trials are needed to further assess DAD accuracy and to identify factors influencing the complexity of DAD accuracy in BG detection. PMID:28627305
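The accuracy statistics used in the study follow the usual 2x2 definitions; the sketch below (counts hypothetical, chosen only to reproduce the abstract's aggregate figures) shows the computation:

```python
def alert_metrics(hits, misses, false_alarms, correct_rejections):
    """Sensitivity, specificity and positive likelihood ratio from
    counts of alerts against in-/out-of-range blood glucose readings."""
    sensitivity = hits / (hits + misses)
    specificity = correct_rejections / (correct_rejections + false_alarms)
    plr = sensitivity / (1.0 - specificity)
    return sensitivity, specificity, plr

sens, spec, plr = alert_metrics(hits=57, misses=43,
                                false_alarms=507, correct_rejections=493)
print(round(sens, 2), round(spec, 3), round(plr, 2))  # → 0.57 0.493 1.12
```

A PLR barely above 1, as here, means an alert only slightly raises the odds that glucose is actually out of range.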
Arya, Ravindra; Wilson, J Adam; Vannest, Jennifer; Byars, Anna W; Greiner, Hansel M; Buroker, Jason; Fujiwara, Hisako; Mangano, Francesco T; Holland, Katherine D; Horn, Paul S; Crone, Nathan E; Rose, Douglas F
2015-02-01
This study describes development of a novel language mapping approach using high-γ modulation in electrocorticograph (ECoG) during spontaneous conversation, and its comparison with electrical cortical stimulation (ECS) in childhood-onset drug-resistant epilepsy. Patients undergoing invasive pre-surgical monitoring and able to converse with the investigator were eligible. ECoG signals and synchronized audio were acquired during quiet baseline and during natural conversation between investigator and the patient. Using Signal Modeling for Real-time Identification and Event Detection (SIGFRIED) procedure, a statistical model for baseline high-γ (70-116 Hz) power, and a single score for each channel representing the probability that the power features in the experimental signal window belonged to the baseline model, were calculated. Electrodes with significant high-γ responses (HGS) were plotted on the 3D cortical model. Sensitivity, specificity, positive and negative predictive values (PPV, NPV), and classification accuracy were calculated compared to ECS. Seven patients were included (4 males, mean age 10.28 ± 4.07 years). Significant high-γ responses were observed in classic language areas in the left hemisphere plus in some homologous right hemispheric areas. Compared with clinical standard ECS mapping, the sensitivity and specificity of HGS mapping was 88.89% and 63.64%, respectively, and PPV and NPV were 35.29% and 96.25%, with an overall accuracy of 68.24%. HGS mapping was able to correctly determine all ECS+ sites in 6 of 7 patients and all false-sites (ECS+, HGS- for visual naming, n = 3) were attributable to only 1 patient. This study supports the feasibility of language mapping with ECoG HGS during spontaneous conversation, and its accuracy compared to traditional ECS. 
Given long-standing concerns about ecological validity of ECS mapping of cued language tasks, and difficulties encountered with its use in children, ECoG mapping of spontaneous language may provide a valid alternative for clinical use. Copyright © 2014 Elsevier B.V. All rights reserved.
Ma, Zhiyuan; Luo, Guangchun; Qin, Ke; Wang, Nan; Niu, Weina
2018-03-01
Sensor drift is a common issue in E-Nose systems, and various drift compensation methods have produced fruitful results in recent years. Although the accuracy of recognizing diverse gases under drift conditions has been largely enhanced, few of these methods consider online processing scenarios. In this paper, we focus on building an online drift compensation model by transforming two domain-adaptation-based methods into their online learning versions, which allows the recognition models to adapt to changes in sensor responses in a time-efficient manner without losing high accuracy. Experimental results using three different settings confirm that the proposed methods save substantial processing time compared with their offline versions and outperform other drift compensation methods in recognition accuracy.
NASA Astrophysics Data System (ADS)
Vivio, Francesco; Fanelli, Pierluigi; Ferracci, Michele
2018-03-01
In the aeronautical and automotive industries the use of rivets for applications requiring several joining points is now very common. In spite of its very simple shape, a riveted junction has many contact surfaces and stress concentrations that make the local stiffness very difficult to calculate. To overcome this difficulty, finite element models with very dense meshes are commonly used for single-joint analysis, because accuracy is crucial for a correct structural analysis. However, when several riveted joints are present, the simulation becomes computationally too heavy, and significant restrictions to joint modelling are usually introduced, sacrificing the accuracy of the local stiffness evaluation. In this paper, we tested the accuracy of a rivet finite element presented in previous works by the authors. The structural behaviour of a lap-joint specimen with a riveted joint is simulated numerically and compared to experimental measurements. The Rivet Element, based on a closed-form solution of a reference theoretical model of the rivet joint, simulates the local and overall stiffness of the junction, combining high accuracy with a low degrees-of-freedom contribution. Its performance is compared to that of a non-linear FE model of the rivet, built with solid elements and a dense mesh, and to experimental data. The promising results allow the Rivet Element to be considered capable of simulating, with great accuracy, actual structures with several rivet connections.
Semaan, Hassan; Bazerbashi, Mohamad F; Siesel, Geoffrey; Aldinger, Paul; Obri, Tawfik
2018-03-01
To determine the accuracy and non-detection rate of cancer related findings (CRFs) on follow-up non-contrast-enhanced CT (NECT) versus contrast-enhanced CT (CECT) images of the abdomen in patients with a known cancer diagnosis. A retrospective review of 352 consecutive CTs of the abdomen performed with and without IV contrast between March 2010 and October 2014 for follow-up of cancer was included. Two radiologists independently assessed the NECT portions of the studies. The reader was provided the primary cancer diagnosis and access to the most recent prior NECT study. The accuracy and non-detection rates were determined by comparing our results to the archived reports as a gold standard. A total of 383 CRFs were found in the archived reports of the 352 abdominal CTs. The average non-detection rate for the NECTs compared to the CECTs was 3.0% (11.5/383) with an accuracy of 97.0% (371.5/383) in identifying CRFs. The most common findings missed were vascular thrombosis with a non-detection rate of 100%. The accuracy for non-vascular CRFs was 99.1%. Follow-up NECT abdomen studies are highly accurate in the detection of CRFs in patients with an established cancer diagnosis, except in cases where vascular involvement is suspected.
Bommert, Andrea; Rahnenführer, Jörg; Lang, Michel
2017-01-01
Finding a good predictive model for a high-dimensional data set can be challenging. For genetic data, it is not only important to find a model with high predictive accuracy, but it is also important that this model uses only few features and that the selection of these features is stable. This is because, in bioinformatics, the models are used not only for prediction but also for drawing biological conclusions, which makes the interpretability and reliability of the model crucial. We suggest using three target criteria when fitting a predictive model to a high-dimensional data set: the classification accuracy, the stability of the feature selection, and the number of chosen features. As it is unclear which measure is best for evaluating the stability, we first compare a variety of stability measures. We conclude that the Pearson correlation has the best theoretical and empirical properties. Also, we find that for assessing stability it is most important that a measure contains a correction for chance or for large numbers of chosen features. Then, we analyse Pareto fronts and conclude that it is possible to find models with a stable selection of few features without losing much predictive accuracy.
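The Pearson-correlation stability measure favoured in the paper can be sketched as the average pairwise correlation between the binary selection indicator vectors produced on different resampling runs (a common formulation; the paper's exact estimator is assumed to follow it):

```python
import numpy as np
from itertools import combinations

def stability_pearson(selections):
    """Average pairwise Pearson correlation between binary
    feature-selection indicator vectors, one vector per run.
    The mean-centering in the correlation provides the correction
    for chance and for large numbers of selected features."""
    corrs = [np.corrcoef(a, b)[0, 1] for a, b in combinations(selections, 2)]
    return float(np.mean(corrs))

# Three runs over six features: two identical selections and one that
# differs in two positions give an intermediate stability value.
runs = [np.array([1, 1, 0, 0, 1, 0]),
        np.array([1, 1, 0, 0, 1, 0]),
        np.array([1, 0, 0, 1, 1, 0])]
score = stability_pearson(runs)
```

Selecting the same features in every run gives a score of 1; unrelated selections average out near 0 regardless of how many features each run picks.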
Huang, Huabing; Gong, Peng; Cheng, Xiao; Clinton, Nick; Li, Zengyuan
2009-01-01
Forest structural parameters, such as tree height and crown width, are indispensable for evaluating forest biomass or forest volume. LiDAR is a revolutionary technology for measuring forest structural parameters; however, the accuracy of crown width extraction is not satisfactory with low-density LiDAR, especially in forest with high canopy cover. We used high-resolution aerial imagery together with a low-density LiDAR system to overcome this shortcoming. Morphological filtering was used to generate a DEM (Digital Elevation Model) and a CHM (Canopy Height Model) from the LiDAR data. The LiDAR camera image is matched to the aerial image with an automated keypoint search algorithm, yielding a high registration accuracy of 0.5 pixels. A local maximum filter, watershed segmentation, and object-oriented image segmentation are used to obtain tree height and crown width. Results indicate that the camera data collected by the integrated LiDAR system play an important role in registration with the aerial imagery. The synthesis with aerial imagery increases the accuracy of forest structural parameter extraction compared to using the low-density LiDAR data alone. PMID:22573971
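The local-maximum step of such workflows can be sketched on a toy Canopy Height Model; this is a generic 3x3 local-maximum detector with a height threshold, not the authors' exact parameterization:

```python
import numpy as np

def tree_tops(chm, min_height=2.0):
    """3x3 local-maximum test on a Canopy Height Model (CHM): a pixel
    is a candidate tree top if it is at least as high as its eight
    neighbours and exceeds a minimum height threshold."""
    padded = np.pad(chm, 1, constant_values=-np.inf)
    h, w = chm.shape
    neigh = np.stack([padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if (di, dj) != (0, 0)])
    local_max = chm >= neigh.max(axis=0)
    return np.argwhere(local_max & (chm > min_height))

chm = np.zeros((5, 5))
chm[1, 1] = 10.0    # first crown apex, height in metres
chm[3, 4] = 8.0     # second crown apex
print(tree_tops(chm).tolist())  # → [[1, 1], [3, 4]]
```

The candidate tops then seed the watershed segmentation that delineates individual crowns.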
Accuracy and Calibration of High Explosive Thermodynamic Equations of State
2010-08-01
The report notes that improved physics descriptions can also mean increased calibration complexity, and introduces a generalized extent of aluminum reaction into a Jones-Wilkins-Lee (JWL) based equation of state. [Recovered figure/table listing: JWL and JWLB cylinder test predictions compared to experiments for PAX-30 and PAX-29; experiment and modeling comparisons for HMX/Al 85/15; LX-14 JWL and JWLB cylinder test velocities.]
a Gsa-Svm Hybrid System for Classification of Binary Problems
NASA Astrophysics Data System (ADS)
Sarafrazi, Soroor; Nezamabadi-pour, Hossein; Barahman, Mojgan
2011-06-01
This paper hybridizes the gravitational search algorithm (GSA) with the support vector machine (SVM) to create a novel GSA-SVM hybrid system that improves classification accuracy on binary problems. GSA is an optimization heuristic used here to optimize the value of the SVM kernel parameter (in this paper, the radial basis function (RBF) is chosen as the kernel function). The experimental results show that this new approach can achieve high classification accuracy and is comparable to or better than particle swarm optimization (PSO)-SVM and genetic algorithm (GA)-SVM, two other hybrid systems for classification.
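The gravitational search step can be sketched in a minimal form. This is a simplified, all-pairs variant of GSA (the published algorithm restricts attraction to a shrinking elite set), and the toy objective stands in for the SVM cross-validation error that the paper actually optimizes over the RBF kernel parameter:

```python
import numpy as np

def gsa_minimize(f, bounds, n_agents=20, iters=100, g0=100.0, seed=0):
    """Minimal gravitational search: agents attract one another with a
    force proportional to fitness-derived masses; the gravitational
    constant G decays so the search moves from exploration to
    exploitation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n_agents, len(lo)))
    v = np.zeros_like(x)
    for t in range(iters):
        fit = np.array([f(xi) for xi in x])
        worst, best = fit.max(), fit.min()
        mass = (worst - fit) / (worst - best + 1e-12)
        mass /= mass.sum()
        G = g0 * np.exp(-20.0 * t / iters)
        acc = np.zeros_like(x)
        for i in range(n_agents):
            for j in range(n_agents):
                if i != j:
                    dist = np.linalg.norm(x[i] - x[j]) + 1e-12
                    acc[i] += rng.random() * G * mass[j] * (x[j] - x[i]) / dist
        v = rng.random(x.shape) * v + acc
        x = np.clip(x + v, lo, hi)
    fit = np.array([f(xi) for xi in x])
    return x[fit.argmin()], float(fit.min())

# Toy objective standing in for the SVM cross-validation error surface.
best_x, best_f = gsa_minimize(lambda z: float(((z - 3.0) ** 2).sum()),
                              bounds=[(-10.0, 10.0)])
```

In the paper's setting, `f` would train an RBF-SVM with the candidate kernel parameter and return the cross-validation error.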
A Spiking Neural Network in sEMG Feature Extraction.
Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor
2015-11-03
We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons, in which a spiking neuron layer with mutual inhibition serves as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. The results showed roughly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control.
Rolling bearing fault diagnosis based on information fusion using Dempster-Shafer evidence theory
NASA Astrophysics Data System (ADS)
Pei, Di; Yue, Jianhai; Jiao, Jing
2017-10-01
This paper presents a fault diagnosis method for rolling bearings based on information fusion. Acceleration sensors are arranged at different positions to collect bearing vibration data as diagnostic evidence. Dempster-Shafer (D-S) evidence theory is used to fuse the multi-sensor data and improve diagnostic accuracy. The efficiency of the proposed method is demonstrated on a high-speed-train transmission test bench. The experimental results show that the proposed method improves rolling bearing fault diagnosis accuracy compared with traditional signal analysis methods.
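Dempster's rule, the fusion step named above, combines two bodies of evidence by multiplying masses over intersecting hypotheses and renormalizing away the conflicting mass. A minimal sketch (the sensor masses are illustrative, not values from the paper):

```python
from itertools import product

def ds_combine(m1, m2):
    """Dempster's rule of combination for two mass functions given as
    dicts mapping frozenset hypotheses to mass; mass assigned to
    contradictory pairs (empty intersection) is renormalized away."""
    fused, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {h: v / (1.0 - conflict) for h, v in fused.items()}

# Two vibration sensors voting between inner- and outer-race faults.
INNER, OUTER = frozenset({'inner'}), frozenset({'outer'})
m1 = {INNER: 0.8, OUTER: 0.2}
m2 = {INNER: 0.6, OUTER: 0.4}
fused = ds_combine(m1, m2)
print(round(fused[INNER], 3))  # → 0.857
```

Two sensors that individually lean toward the inner-race fault reinforce each other, which is how multi-sensor fusion raises diagnostic accuracy over any single channel.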
Parameter estimation using weighted total least squares in the two-compartment exchange model.
Garpebring, Anders; Löfstedt, Tommy
2018-01-01
The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, to an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy compared to the LLS method to levels comparable to the NLLS method. This improvement was at the expense of increased computational time; however, the WTLS was still faster than the NLLS method. At high signal-to-noise ratio all methods provided similar precisions, while inconclusive results were observed at low signal-to-noise ratio. The proposed method provides improvements in accuracy compared to the LLS method, however, at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
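The contrast the abstract draws, least squares that ignores noise in the system matrix versus a total-least-squares formulation that models it, can be illustrated on a generic linear system. This is plain TLS via SVD, not the weighted TLS of the paper, and the toy system is not the two-compartment model:

```python
import numpy as np

def lls(A, b):
    """Ordinary (linear) least squares: noise assumed only in b."""
    return np.linalg.lstsq(A, b, rcond=None)[0]

def tls(A, b):
    """Total least squares via the SVD of the augmented matrix [A | b],
    which accounts for errors in both A and b."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                  # right singular vector of smallest sigma
    return -v[:n] / v[n]

rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0])
A = rng.normal(size=(500, 2))
b = A @ x_true                                 # noise-free system
A_noisy = A + 0.05 * rng.normal(size=A.shape)  # noise in the system matrix
b_noisy = b + 0.05 * rng.normal(size=500)
x_lls, x_tls = lls(A_noisy, b_noisy), tls(A_noisy, b_noisy)
```

With noise in A, ordinary least squares is biased toward zero (attenuation), while TLS remains consistent; on the noise-free pair the two coincide with `x_true`.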
NASA Technical Reports Server (NTRS)
Radomski, M. S.; Doll, C. E.
1991-01-01
This investigation concerns the effects on Ocean Topography Experiment (TOPEX) spacecraft operational orbit determination of ionospheric refraction error affecting tracking measurements from the Tracking and Data Relay Satellite System (TDRSS). Although tracking error from this source is mitigated by the high frequencies (K-band) used for the space-to-ground links and by the high altitudes for the space-to-space links, these effects are of concern for the relatively high-altitude (1334 kilometers) TOPEX mission. This concern is due to the accuracy required for operational orbit-determination by the Goddard Space Flight Center (GSFC) and to the expectation that solar activity will still be relatively high at TOPEX launch in mid-1992. The ionospheric refraction error on S-band space-to-space links was calculated by a prototype observation-correction algorithm using the Bent model of ionosphere electron densities implemented in the context of the Goddard Trajectory Determination System (GTDS). Orbit determination error was evaluated by comparing parallel TOPEX orbit solutions, applying and omitting the correction, using the same simulated TDRSS tracking observations. The tracking scenarios simulated those planned for the observation phase of the TOPEX mission, with a preponderance of one-way return-link Doppler measurements. The results of the analysis showed most TOPEX operational accuracy requirements to be little affected by space-to-space ionospheric error. The determination of along-track velocity changes after ground-track adjustment maneuvers, however, is significantly affected when compared with the stringent 0.1-millimeter-per-second accuracy requirements, assuming uncoupled premaneuver and postmaneuver orbit determination. Space-to-space ionospheric refraction on the 24-hour postmaneuver arc alone causes 0.2 millimeter-per-second errors in along-track delta-v determination using uncoupled solutions. 
Coupling the premaneuver and postmaneuver solutions, however, appears likely to reduce this figure substantially. Plans and recommendations for response to these findings are presented.
d'Assuncao, Jefferson; Irwig, Les; Macaskill, Petra; Chan, Siew F; Richards, Adele; Farnsworth, Annabelle
2007-01-01
Objective To compare the accuracy of liquid based cytology using the computerised ThinPrep Imager with that of manually read conventional cytology. Design Prospective study. Setting Pathology laboratory in Sydney, Australia. Participants 55 164 split sample pairs (liquid based sample collected after conventional sample from one collection) from consecutive samples of women choosing both types of cytology and whose specimens were examined between August 2004 and June 2005. Main outcome measures Primary outcome was accuracy of slides for detecting squamous lesions. Secondary outcomes were rate of unsatisfactory slides, distribution of squamous cytological classifications, and accuracy of detecting glandular lesions. Results Fewer unsatisfactory slides were found for imager read cytology than for conventional cytology (1.8% v 3.1%; P<0.001). More slides were classified as abnormal by imager read cytology (7.4% v 6.0% overall and 2.8% v 2.2% for cervical intraepithelial neoplasia of grade 1 or higher). Among 550 patients in whom imager read cytology was cervical intraepithelial neoplasia grade 1 or higher and conventional cytology was less severe than grade 1, 133 of 380 biopsy samples taken were high grade histology. Among 294 patients in whom imager read cytology was less severe than cervical intraepithelial neoplasia grade 1 and conventional cytology was grade 1 or higher, 62 of 210 biopsy samples taken were high grade histology. Imager read cytology therefore detected 71 more cases of high grade histology than did conventional cytology, resulting from 170 more biopsies. Similar results were found when one pathologist reread the slides, masked to cytology results. Conclusion The ThinPrep Imager detects 1.29 more cases of histological high grade squamous disease per 1000 women screened than conventional cytology, with cervical intraepithelial neoplasia grade 1 as the threshold for referral to colposcopy. 
More imager read slides than conventional slides were satisfactory for examination and more contained low grade cytological abnormalities. PMID:17604301
Application of Intra-Oral Dental Scanners in the Digital Workflow of Implantology
van der Meer, Wicher J.; Andriessen, Frank S.; Wismeijer, Daniel; Ren, Yijin
2012-01-01
Intra-oral scanners will play a central role in digital dentistry in the near future. In this study the accuracy of three intra-oral scanners was compared. Materials and methods: A master model made of stone was fitted with three high-precision manufactured PEEK cylinders and scanned with three intra-oral scanners: the CEREC (Sirona), the iTero (Cadent) and the Lava COS (3M). The digital files were imported into software, and the distance between the centres of the cylinders and the angulation between the cylinders were assessed. These values were compared to measurements made on a high-accuracy 3D scan of the master model. Results: The distance errors were the smallest and most consistent for the Lava COS; those for the CEREC were the largest and least consistent. All the angulation errors were small. Conclusions: The Lava COS, in combination with a high-accuracy scanning protocol, produced the smallest and most consistent errors of the three scanners tested when considering mean distance errors in full-arch impressions, both in absolute values and in consistency for both measured distances. For the mean angulation errors, the Lava COS had the smallest errors between cylinders 1–2 and the largest errors between cylinders 1–3, although the absolute difference from the smallest mean value (iTero) was very small (0.0529°). An expected increase in distance and/or angular errors over the length of the arch, due to an accumulation of registration errors of the patched 3D surfaces, could be observed in this study design, but the effects were not statistically significant. Clinical relevance: For making impressions of implant cases for digital workflows, the most accurate scanner with the scanning protocol that ensures the most accurate digital impression should be used. In our study, that was the Lava COS with the high-accuracy scanning protocol. PMID:22937030
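The software comparison step can be sketched as follows: given the three cylinder centres and unit axis vectors recovered from a test scan and from the reference scan, report the error in each inter-centre distance and in each pairwise angulation (the geometry below is assumed for illustration, not the study's actual model):

```python
import numpy as np

def distance_and_angle_errors(c_scan, a_scan, c_ref, a_ref):
    """Distance and angulation errors of a test scan versus a reference
    scan. c_*: (3, 3) cylinder-centre coordinates; a_*: (3, 3) unit
    axis vectors, one row per cylinder."""
    pairs = [(0, 1), (0, 2), (1, 2)]

    def angle_deg(u, v):
        return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

    dist_err = [abs(np.linalg.norm(c_scan[i] - c_scan[j])
                    - np.linalg.norm(c_ref[i] - c_ref[j])) for i, j in pairs]
    ang_err = [abs(angle_deg(a_scan[i], a_scan[j])
                   - angle_deg(a_ref[i], a_ref[j])) for i, j in pairs]
    return dist_err, ang_err

centres = np.array([[0.0, 0.0, 0.0], [22.0, 0.0, 0.0], [44.0, 8.0, 0.0]])
axes = np.tile([0.0, 0.0, 1.0], (3, 1))     # all cylinders upright
d_err, a_err = distance_and_angle_errors(centres, axes, centres, axes)
print(max(d_err), max(a_err))  # → 0.0 0.0 (scan identical to reference)
```

Comparing relative distances and angles, rather than raw coordinates, removes the arbitrary rigid-body pose difference between the two scans.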
Genomic Prediction of Seed Quality Traits Using Advanced Barley Breeding Lines.
Nielsen, Nanna Hellum; Jahoor, Ahmed; Jensen, Jens Due; Orabi, Jihad; Cericola, Fabio; Edriss, Vahid; Jensen, Just
2016-01-01
Genomic selection was recently introduced in plant breeding. The objective of this study was to develop genomic prediction for important seed quality parameters in spring barley. The aim was to predict breeding values without expensive phenotyping of large sets of lines. A total of 309 advanced spring barley lines tested at two locations, each with three replicates, were phenotyped, and each line was genotyped with the Illumina iSelect 9K barley chip. The population originated from two different breeding sets, which were phenotyped in two different years. Phenotypic measurements considered were: seed size, protein content, protein yield, test weight and ergosterol content. A leave-one-out cross-validation strategy revealed high prediction accuracies ranging between 0.40 and 0.83. Prediction across breeding sets resulted in reduced accuracies compared to the leave-one-out strategy. Furthermore, predicting across full- and half-sib families resulted in reduced prediction accuracies. Additionally, predictions were performed using reduced marker sets and reduced training population sets. In conclusion, using fewer than 200 lines in the training set can result in low prediction accuracy, and the accuracy will then be highly dependent on the family structure of the selected training set. However, the results also indicate that relatively small training sets (200 lines) are sufficient for genomic prediction in commercial barley breeding. In addition, our results indicate a minimum marker set of 1,000 to decrease the risk of low prediction accuracy for some traits or some families.
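The leave-one-out cross-validation scheme above can be sketched in a few lines: each line's phenotype is predicted from all remaining lines, and accuracy is the correlation between predicted and observed values. The predictor below is a toy nearest-neighbour stand-in, not the GBLUP-style model a breeding programme would actually use; all data are invented.

```python
import statistics

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def loo_predictions(genotypes, phenotypes):
    """Leave-one-out: predict each line from the remaining lines using the
    phenotype of the closest marker profile (a toy predictor)."""
    preds = []
    for i, g in enumerate(genotypes):
        others = [(gj, pj) for j, (gj, pj) in enumerate(zip(genotypes, phenotypes))
                  if j != i]
        nearest = min(others, key=lambda gp: sum((a - b) ** 2 for a, b in zip(g, gp[0])))
        preds.append(nearest[1])
    return preds
```

Prediction accuracy is then `pearson(loo_predictions(G, y), y)` over the full panel.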
Arya, Ravindra; Wilson, J Adam; Fujiwara, Hisako; Rozhkov, Leonid; Leach, James L; Byars, Anna W; Greiner, Hansel M; Vannest, Jennifer; Buroker, Jason; Milsap, Griffin; Ervin, Brian; Minai, Ali; Horn, Paul S; Holland, Katherine D; Mangano, Francesco T; Crone, Nathan E; Rose, Douglas F
2017-04-01
This prospective study compared presurgical language localization with visual naming-associated high-γ modulation (HGM) and conventional electrical cortical stimulation (ECS) in children with intracranial electrodes. Patients with drug-resistant epilepsy who were undergoing intracranial monitoring were included if able to name pictures. Electrocorticography (ECoG) signals were recorded during picture naming (overt and covert) and quiet baseline. For each electrode, the likelihood of high-γ (70-116 Hz) power modulation during the naming task relative to baseline was estimated. Electrodes with significant HGM were plotted on a three-dimensional (3D) cortical surface model. Sensitivity, specificity, and accuracy were calculated compared to clinical ECS. Seventeen patients with a mean age of 11.3 years (range 4-19) were included. In patients with left hemisphere electrodes (n = 10), HGM during overt naming showed high specificity (0.81, 95% confidence interval [CI] 0.78-0.85) and accuracy (0.71, 95% CI 0.66-0.75, p < 0.001), but modest sensitivity (0.47), when ECS interference with naming (aphasia or paraphasic errors) and/or oral motor function was regarded as the gold standard. Similar results were reproduced by comparing covert naming-associated HGM with ECS naming sites. With right hemisphere electrodes (n = 7), no ECS naming deficits were seen without interference with oral-motor function. HGM mapping showed high specificity (0.81, 95% CI 0.78-0.84) and accuracy (0.76, 95% CI 0.71-0.81, p = 0.006), but modest sensitivity (0.44), compared to ECS interference with oral-motor function. Naming-associated ECoG HGM was consistently observed over Broca's area (left posterior inferior-frontal gyrus), bilateral oral/facial motor cortex, and sometimes over the temporal pole. This study supports the use of ECoG HGM mapping in children in whom adverse events preclude ECS, or as a screening method to prioritize electrodes for ECS testing.
© 2017 International League Against Epilepsy.
Evaluating the Quality, Accuracy, and Readability of Online Resources Pertaining to Hallux Valgus.
Tartaglione, Jason P; Rosenbaum, Andrew J; Abousayed, Mostafa; Hushmendy, Shazaan F; DiPreta, John A
2016-02-01
The Internet is one of the most widely utilized resources for health-related information. Evaluation of the medical literature suggests that the quality and accuracy of these resources are poor and written at inappropriately high reading levels. The purpose of our study was to evaluate the quality, accuracy, and readability of online resources pertaining to hallux valgus. Two search terms ("hallux valgus" and "bunion") were entered into Google, Yahoo, and Bing. With the use of scoring criteria specific to hallux valgus, the quality and accuracy of online information related to hallux valgus was evaluated by 3 reviewers. The Flesch-Kincaid score was used to determine readability. Statistical analysis was performed with t tests and significance was determined by P values <.05. Sixty-two unique websites were evaluated. Quality was significantly higher with use of the search term "bunion" as compared to "hallux valgus" (P = .045). Quality and accuracy were significantly higher in resources authored by physicians as compared to nonphysicians (quality, P = .04; accuracy, P < .001) and websites without commercial bias (quality, P = .038; accuracy, P = .011). However, the reading level was significantly more advanced for websites authored by physicians (P = .035). Websites written above an eighth-grade reading level were significantly more accurate than those written at or below an eighth-grade reading level (P = .032). The overall quality of online information related to hallux valgus is poor and written at inappropriate reading levels. Furthermore, the search term used, authorship, and presence of commercial bias influence the value of these materials. It is important for orthopaedic surgeons to become familiar with patient education materials, so that appropriate recommendations can be made regarding valuable resources. Level IV. © 2015 The Author(s).
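The Flesch-Kincaid grade level used above is a fixed formula over word, sentence, and syllable counts. A minimal sketch (the counts below describe a made-up passage, not any website from the study):

```python
def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level from raw text counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# A hypothetical passage: 100 words, 8 sentences, 140 syllables.
# Grade ~5.8, i.e. below the eighth-grade threshold discussed above.
grade = flesch_kincaid_grade(100, 8, 140)
```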
Dr Google: The readability and accuracy of patient education websites for Graves' disease treatment.
Purdy, Amanda C; Idriss, Almoatazbellah; Ahern, Susan; Lin, Elizabeth; Elfenbein, Dawn M
2017-11-01
National guidelines emphasize the importance of incorporating patient preferences into the recommendations for the treatment of Graves' disease. Many patients use the Internet to obtain health information, and search results can affect their treatment decisions. This study compares the readability and accuracy of patient-oriented online resources for the treatment of Graves' disease by website affiliation and treatment modality. A systematic Internet search was used to identify the top websites discussing the treatment of Graves' disease. Readability was measured using 5 standardized tests. Accuracy was assessed by a blinded, expert panel, which scored the accuracy of sites on a scale of 1 to 5. Mean readability and accuracy scores were compared among website affiliations and treatment modalities. We identified 13 unique websites, including 2 academic, 2 government, 5 nonprofit, and 4 private sites. There was a difference in both readability (mean 13.2, range 9.1-15.7, P = .003) and accuracy (mean 4.04, range 2.75-4.50, P = .019) based on website affiliation. Government sites (mean readability 11.1) were easier to read than academic (14.3, P < .01), nonprofit (13.9, P < .01), and private sites (13.5, P < .05). Academic sites (mean accuracy 4.50) were more accurate than private sites (3.56, P < .05). Online patient resources for the treatment of Graves' disease are written at an inappropriately high reading level. Academic sites contain both the most accurate and the most difficult to read information. Private sites represented the majority of our top results but contained the least accurate information. Copyright © 2017 Elsevier Inc. All rights reserved.
Joshi, Hasit; Shah, Ronak; Prajapati, Jayesh; Bhangdiya, Vipin; Shah, Jayal; Kandre, Yogini; Shah, Komal
2016-01-01
Objective: To compare the diagnostic accuracy of multi-slice computed tomography (MSCT) angiography with conventional angiography in patients undergoing major noncoronary cardiac surgeries. Materials and Methods: We studied fifty major noncoronary cardiac surgery patients scheduled for invasive coronary angiography, 29 (58%) female and 21 (42%) male. Inclusion criteria of the study were age ≥40 years, low or intermediate probability of coronary artery disease (CAD), left ventricular ejection fraction (LVEF) >35%, and informed consent for undergoing MSCT and conventional coronary angiography. Patients with LVEF <35%, a high pretest probability of CAD, or hemodynamic instability were excluded from the study. Results: The diagnostic accuracy of CT coronary angiography was evaluated in terms of true-positive and true-negative values. The overall sensitivity and specificity of CT angiography were 100% (95% confidence interval [CI]: 39.76%–100%) and 91.30% (95% CI: 79.21%–97.58%). The positive (50%; 95% CI: 15.70%–84.30%) and negative predictive values (100%; 95% CI: 91.59%–100%) of CT angiography were also fairly high in these patients. Conclusion: Our study suggests that this non-invasive technique may improve perioperative risk stratification in patients undergoing major noncoronary cardiac surgery. PMID:27867455
FNA diagnostic value in patients with neck masses in two teaching hospitals in Iran.
Saatian, Minoo; Badie, Banafsheh Moradmand; Shahriari, Sogol; Fattahi, Fahimeh; Rasoolinejad, Mehrnaz
2011-01-01
The FNA (fine needle aspiration) procedure is a simple, inexpensive, available and safe method for the diagnosis of a neck mass. FNA has numerous advantages over open surgical biopsies as an initial diagnostic tool; therefore we decided to compare the accuracy of this method with open biopsy. This retrospective, descriptive study compared preoperative FNA results with existing data in the Pathology Department of Bu-Ali and Amir Alam Hospitals. Our study included 100 patients with neck masses, of which 22 were thyroid masses, 31 were salivary gland masses, and 47 were other masses. Age ranged from 3 to 80 years, with a mean age of 42.6 years. There were 59 men and 41 women. The sensitivity was 72%, specificity 87%, PPV 85%, NPV 75% and diagnostic accuracy 79%. In this study we also had 26% false negatives and 15% false positives. FNA is a valuable diagnostic tool in the management of neck masses; it has also been used for staging and planning of treatment for widespread and metastatic malignancy. This technique reduces the need for more invasive and costly procedures. Given the high sensitivity and high accuracy found in this study, FNA can be used as the first-line diagnostic test for neck masses.
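The sensitivity, specificity, PPV, NPV and accuracy figures quoted throughout these abstracts all derive from a 2×2 confusion table. A minimal sketch (the counts below are hypothetical, not the FNA study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a cohort of 100 masses
m = diagnostic_metrics(tp=40, fp=7, fn=15, tn=38)
```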
Aykut, Aktas; Bumin, Degirmenci; Omer, Yilmaz; Mustafa, Kayan; Meltem, Cetin; Orhan, Celik; Nisa, Unlu; Hikmet, Orhan; Hakan, Demirtas; Mert, Koroglu
2015-09-01
The aim was to compare coronary high-definition CT (HDCT) with standard-definition CT (SDCT) angiography with respect to radiation dose, image quality and accuracy. 28 patients with a history of coronary artery disease were scanned with HDCT (Discovery CT750 HD) and SDCT (Somatom Definition AS). Both scan modes were axial prospective ECG-triggered. The vessel diameters and vessel attenuation values, totalling 280 measurements from 140 coronary arteries, were analyzed by two experienced radiologists. All data were analyzed with the intraclass correlation test. Image quality, graded by motion and stair-step artifacts (grade 1, poor, to grade 4, excellent), and accuracy of vessel inner and outer diameters were compared between the two CT units using the independent-samples t-test and Mann-Whitney U test. The intraclass correlation coefficient (ICC) of measured vessel attenuation values in SDCT between the two radiologists was exceedingly good; the ICC was higher in HDCT. The radiation dose of HDCT was higher than that of SDCT. The mean tube current was 180 mA in HDCT and 147 mA in SDCT at the same tube voltage (kVp). There was no significant difference in image quality. HDCT delivers a higher radiation dose but offers higher attenuation measurement consistency and spatial resolution, which improve measurement accuracy for imaging coronary arteries.
Accuracy of the Velotron ergometer and SRM power meter.
Abbiss, C R; Quod, M J; Levin, G; Martin, D T; Laursen, P B
2009-02-01
The purpose of this study was to determine the accuracy of the Velotron cycle ergometer and the SRM power meter using a dynamic calibration rig over a range of exercise protocols commonly applied in laboratory settings. These trials included two sustained constant power trials (250 W and 414 W), two incremental power trials and three high-intensity interval power trials. To further compare the two systems, 15 subjects performed three dynamic 30 km performance time trials. The Velotron and SRM displayed accurate measurements of power during both constant power trials (<1% error). However, during high-intensity interval trials the Velotron and SRM were found to be less accurate (3.0%, CI = 1.6 to 4.5% and -2.6%, CI = -3.2 to -2.0% error, respectively). During the dynamic 30 km time trials, power measured by the Velotron was 3.7 ± 1.9% (CI = 2.9 to 4.8%) greater than that measured by the SRM. In conclusion, the accuracy of the Velotron cycle ergometer and the SRM power meter appears to be dependent on the type of test being performed. Furthermore, as each power monitoring system measures power at a different position (i.e. bottom bracket vs. rear wheel), caution should be taken when comparing power across the two systems, particularly when power is variable.
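The percentage errors above are relative errors of a measured power against the calibration-rig criterion. As a trivial sketch (values hypothetical):

```python
def percent_error(measured, reference):
    """Relative error of a measurement against a criterion value, in percent."""
    return 100.0 * (measured - reference) / reference

# e.g. an ergometer reading 259.25 W against a 250 W calibration rig:
# a 3.7% overestimate
err = percent_error(259.25, 250.0)
```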
Fast group matching for MR fingerprinting reconstruction.
Cauley, Stephen F; Setsompop, Kawin; Ma, Dan; Jiang, Yun; Ye, Huihui; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L
2015-08-01
MR fingerprinting (MRF) is a technique for quantitative tissue mapping using pseudorandom measurements. To estimate tissue properties such as T1, T2, proton density, and B0, the rapidly acquired data are compared against a large dictionary of Bloch simulations. This matching process can be a very computationally demanding portion of MRF reconstruction. We introduce a fast group matching algorithm (GRM) that exploits inherent correlation within MRF dictionaries to create highly clustered groupings of the elements. During matching, a group-specific signature is first used to remove poor matching possibilities. Group principal component analysis (PCA) is used to evaluate all remaining tissue types. In vivo 3 Tesla brain data were used to validate the accuracy of our approach. For a trueFISP sequence with over 196,000 dictionary elements, 1,000 MRF samples, and an image matrix of 128 × 128, GRM was able to map MR parameters within 2 s using standard vendor computational resources. This is an order of magnitude faster than global PCA and nearly two orders of magnitude faster than direct matching, with comparable accuracy (1-2% relative error). The proposed GRM method is a highly efficient model reduction technique for MRF matching and should enable clinically relevant reconstruction accuracy and time on standard vendor computational resources. © 2014 Wiley Periodicals, Inc.
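The two-stage idea described above (score a per-group signature first, then search exhaustively only within the best-scoring groups) can be sketched as follows. The grouping, signals and parameter labels are toy stand-ins under assumed conventions, not the paper's implementation.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def group_match(signal, groups, keep=1):
    """Two-stage dictionary match. `groups` maps a group label to a list of
    (params, fingerprint) entries; the group mean fingerprint acts as the
    group signature that prunes the search."""
    sig = normalize(signal)
    # Stage 1: rank groups by correlation with their mean fingerprint
    scored = sorted(
        groups,
        key=lambda g: -dot(sig, normalize(
            [sum(col) for col in zip(*(fp for _, fp in groups[g]))])),
    )
    # Stage 2: exhaustive inner-product match within the retained groups only
    best = None
    for g in scored[:keep]:
        for params, fp in groups[g]:
            score = dot(sig, normalize(fp))
            if best is None or score > best[1]:
                best = (params, score)
    return best[0]
```

With `keep` small relative to the number of groups, stage 2 touches only a fraction of the dictionary, which is the source of the speed-up.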
solGS: a web-based tool for genomic selection
USDA-ARS's Scientific Manuscript database
Genomic selection (GS) promises to improve accuracy in estimating breeding values and genetic gain for quantitative traits compared to traditional breeding methods. Its reliance on high-throughput genome-wide markers and statistical complexity, however, is a serious challenge in data management, ana...
Neural substrates of empathic accuracy in people with schizophrenia.
Harvey, Philippe-Olivier; Zaki, Jamil; Lee, Junghee; Ochsner, Kevin; Green, Michael F
2013-05-01
Empathic deficits in schizophrenia may lead to social dysfunction, but previous studies of schizophrenia have not modeled empathy through paradigms that (1) present participants with naturalistic social stimuli and (2) link brain activity to "accuracy" in inferring others' emotional states. This study addressed this gap by investigating the neural correlates of empathic accuracy (EA) in schizophrenia. Fifteen schizophrenia patients and 15 controls were scanned while continuously rating the affective state of another person shown in a series of videos (i.e., targets). These ratings were compared with targets' own self-rated affect, and EA was defined as the correlation between participants' ratings and targets' self-ratings. Targets' self-reported emotional expressivity also was measured. We searched for brain regions whose activity tracked parametrically with (1) perceivers' EA and (2) targets' expressivity. Patients showed reduced EA compared with controls. The left precuneus, left middle frontal gyrus, and bilateral thalamus were significantly more correlated with EA in controls compared with patients. High expressivity in targets was associated with better EA in controls but not in patients. High expressivity was associated with increased brain activity in a large set of regions in controls (e.g., fusiform gyrus, medial prefrontal cortex) but not in patients. These results use a naturalistic performance measure to confirm that schizophrenia patients demonstrate an impaired ability to understand others' internal states. They provide novel evidence about a potential mechanism for this impairment: patients failed to capitalize on targets' emotional expressivity and also demonstrated reduced neural sensitivity to targets' affective cues.
Kam, K Y Ronald; Ong, Hon Shing; Bunce, Catey; Ogunbowale, Lola; Verma, Seema
2015-09-01
To estimate the diagnostic accuracy (sensitivity and specificity) of the AdenoPlus point-of-care adenoviral test compared to PCR in an ophthalmic accident and emergency service. These findings were compared with those of a previous study. This was a prospective diagnostic accuracy study on 121 patients presenting to an emergency eye unit with a clinical picture of acute adenoviral conjunctivitis. AdenoPlus testing was carried out on one eye of each patient and a PCR analysis was also performed on a swab taken from the same eye. AdenoPlus and PCR results were interpreted by masked personnel. Sensitivity and specificity for the AdenoPlus test were calculated using PCR results as the reference standard. 121 patients were enrolled and 109 met the inclusion criteria. 43 patients (39.4%) tested positive for adenovirus by PCR analysis. The sensitivity of the AdenoPlus swab in detecting adenovirus was 39.5% (17/43, 95% CI 26% to 54%) and specificity was 95.5% (63/66, 95% CI 87% to 98%) compared to PCR. The AdenoPlus test has a high specificity for diagnosing adenoviral conjunctivitis, but in this clinical setting, we could not reproduce the high sensitivity that has been previously published. Published by the BMJ Publishing Group Limited.
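The point estimates above follow directly from the reported counts (17/43 and 63/66), and a Wilson score interval reproduces confidence limits close to the reported ones. This is a sketch; the abstract does not state which CI method the authors actually used.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Counts reported above: 17/43 PCR-positives detected, 63/66 negatives correct
sensitivity = 17 / 43              # the reported 39.5%
specificity = 63 / 66              # the reported 95.5%
sens_lo, sens_hi = wilson_ci(17, 43)  # close to the reported 26% to 54%
```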
Preciat Gonzalez, German A.; El Assal, Lemmer R. P.; Noronha, Alberto; ...
2017-06-14
The mechanism of each chemical reaction in a metabolic network can be represented as a set of atom mappings, each of which relates an atom in a substrate metabolite to an atom of the same element in a product metabolite. Genome-scale metabolic network reconstructions typically represent biochemistry at the level of reaction stoichiometry. However, a more detailed representation at the underlying level of atom mappings opens the possibility for a broader range of biological, biomedical and biotechnological applications than with stoichiometry alone. Complete manual acquisition of atom mapping data for a genome-scale metabolic network is a laborious process. However, many algorithms exist to predict atom mappings. How do their predictions compare to each other and to manually curated atom mappings? For more than four thousand metabolic reactions in the latest human metabolic reconstruction, Recon 3D, we compared the atom mappings predicted by six atom mapping algorithms. We also compared these predictions to those obtained by manual curation of atom mappings for over five hundred reactions distributed among all top level Enzyme Commission number classes. Five of the evaluated algorithms had similarly high prediction accuracy of over 91% when compared to manually curated atom mapped reactions. On average, the accuracy of the prediction was highest for reactions catalysed by oxidoreductases and lowest for reactions catalysed by ligases. In addition to prediction accuracy, the algorithms were evaluated on their accessibility, their advanced features, such as the ability to identify equivalent atoms, and their ability to map hydrogen atoms. In addition to prediction accuracy, we found that software accessibility and advanced features were fundamental to the selection of an atom mapping algorithm in practice.
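Scoring a predicted set of atom mappings against a curated reference reduces to a matching fraction. A minimal sketch, with toy mappings represented as dicts from substrate atom index to product atom index (a simplification of how real tools encode mappings):

```python
def mapping_accuracy(predicted, curated):
    """Fraction of curated reactions whose predicted atom mapping agrees
    exactly with the curated mapping."""
    matches = sum(1 for rxn, mapping in curated.items()
                  if predicted.get(rxn) == mapping)
    return matches / len(curated)

# Toy reactions: r1 agrees with curation, r2 does not
predicted = {"r1": {1: 1, 2: 2}, "r2": {1: 2, 2: 1}}
curated = {"r1": {1: 1, 2: 2}, "r2": {1: 1, 2: 2}}
acc = mapping_accuracy(predicted, curated)
```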
Goulart, Alessandra Carvalho; Oliveira, Ilka Regina Souza de; Alencar, Airlane Pereira; Santos, Maira Solange Camara dos; Santos, Itamar Souza; Martines, Brenda Margatho Ramos; Meireles, Danilo Peron; Martines, João Augusto dos Santos; Misciagna, Giovanni; Benseñor, Isabela Martins; Lotufo, Paulo Andrade
2015-01-01
Noninvasive strategies for evaluating non-alcoholic fatty liver disease (NAFLD) have been investigated over the last few decades. Our aim was to evaluate the diagnostic accuracy of a new hepatic ultrasound score for NAFLD in the ELSA-Brasil study. Diagnostic accuracy study conducted in the ELSA center, in the hospital of a public university. Among the 15,105 participants of the ELSA study who were evaluated for NAFLD, 195 individuals were included in this sub-study. Hepatic ultrasound was performed (deep beam attenuation, hepatorenal index and anteroposterior diameter of the right hepatic lobe) and compared with the hepatic steatosis findings from 64-channel high-resolution computed tomography (CT). We also evaluated two clinical indices relating to NAFLD: the fatty liver index (FLI) and the hepatic steatosis index (HSI). Among the 195 participants, the NAFLD frequency was 34.4%. High body mass index, high waist circumference, diabetes and hypertriglyceridemia were associated with high hepatic attenuation and a large anteroposterior diameter of the right hepatic lobe, but not with the hepatorenal index. The hepatic ultrasound score, based on hepatic attenuation and the anteroposterior diameter of the right hepatic lobe, presented the best performance for NAFLD screening at the cutoff point ≥ 1 point: sensitivity, 85.1%; specificity, 73.4%; accuracy, 79.3%; and area under the curve (AUC 0.85; 95% confidence interval, CI: 0.78-0.91). FLI and HSI presented lower performance (AUC 0.76; 95% CI: 0.69-0.83) than CT. The hepatic ultrasound score based on hepatic attenuation and the anteroposterior diameter of the right hepatic lobe has good reproducibility and accuracy for NAFLD screening.
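The AUC values reported here can be read as the probability that a randomly chosen case scores higher on the index than a randomly chosen non-case, which gives a simple rank-based way to compute them. A sketch (scores invented, not study data):

```python
def auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney statistic: the probability that a random
    positive case scores above a random negative case (ties count half)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical ultrasound scores for NAFLD cases vs. non-cases
a = auc([2, 3, 3, 1], [0, 1, 2, 0])
```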
Stationary intraoral tomosynthesis for dental imaging
NASA Astrophysics Data System (ADS)
Inscoe, Christina R.; Wu, Gongting; Soulioti, Danai E.; Platin, Enrique; Mol, Andre; Gaalaas, Laurence R.; Anderson, Michael R.; Tucker, Andrew W.; Boyce, Sarah; Shan, Jing; Gonzales, Brian; Lu, Jianping; Zhou, Otto
2017-03-01
Despite recent advances in dental radiography, the diagnostic accuracies for some of the most common dental diseases have not improved significantly, and in some cases remain low. Intraoral x-ray is the most commonly used x-ray diagnostic tool in dental clinics. It however suffers from the typical limitations of a 2D imaging modality including structure overlap. Cone-beam computed tomography (CBCT) uses high radiation dose and suffers from image artifacts and relatively low resolution. The purpose of this study is to investigate the feasibility of developing a stationary intraoral tomosynthesis (s-IOT) using spatially distributed carbon nanotube (CNT) x-ray array technology, and to evaluate its diagnostic accuracy compared to conventional 2D intraoral x-ray. A bench-top s-IOT device was constructed using a linear CNT based X-ray source array and a digital intraoral detector. Image reconstruction was performed using an iterative reconstruction algorithm. Studies were performed to optimize the imaging configuration. For evaluation of s-IOT's diagnostic accuracy, images of a dental quality assurance phantom, and extracted human tooth specimens were acquired. Results show s-IOT increases the diagnostic sensitivity for caries compared to intraoral x-ray at a comparable dose level.
Meijer, Willemien A; Van Gerven, Pascal W; de Groot, Renate H; Van Boxtel, Martin P; Jolles, Jelle
2007-10-01
The aim of the present study was to examine whether deeper processing of words during encoding in middle-aged adults leads to a smaller increase in word-learning performance and a smaller decrease in retrieval effort than in young adults. It was also assessed whether high education attenuates age-related differences in performance. Accuracy of recall and recognition, and reaction times of recognition, after performing incidental and intentional learning tasks were compared between 40 young (25-35) and 40 middle-aged (50-60) adults with low and high educational levels. Age differences in recall increased with depth of processing, whereas age differences in accuracy and reaction times of recognition did not differ across levels. High education does not moderate age-related differences in performance. These findings suggest a smaller benefit of deep processing in middle age, when no retrieval cues are available.
NASA Technical Reports Server (NTRS)
Rapp, Richard H.
1993-01-01
The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences implying information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed to a high accuracy and a high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360, based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.
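A cumulative undulation error to some degree is conventionally the square root of the summed per-degree error variances. As a sketch, under that assumption and with made-up variances (cm²), not values from any of the models above:

```python
import math

def cumulative_error(degree_error_variances, nmax):
    """Cumulative geoid undulation error to degree nmax: the root of the
    summed per-degree error variances (spherical harmonics start at degree 2)."""
    return math.sqrt(sum(degree_error_variances[n] for n in range(2, nmax + 1)))

# Hypothetical per-degree error variances in cm^2
errors = {2: 9.0, 3: 16.0, 4: 25.0}
total = cumulative_error(errors, 4)  # sqrt(50) cm, about 7.1 cm
```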
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-02-18
Coordinate transformation plays an indispensable role in industrial measurement, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving equations over point clouds. Despite their high accuracy, they may yield no solution owing to ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to their spatial geometric relation, the characteristic lines are made to coincide through a series of rotations and translations, and the transformation matrix is obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method achieves the same high accuracy, but its operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.
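The conventional point-cloud approach that the authors contrast with can be sketched as a least-squares rigid registration via SVD (the Kabsch method). This is an illustrative baseline under assumed synthetic data, not the paper's characteristic-line method:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that R @ P + t ~= Q for corresponding
    3xN point sets (Kabsch/SVD; the conventional point-cloud approach)."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

With exact correspondences this recovers the transform to machine precision; the ill-conditioning the abstract mentions arises when the point configuration is degenerate (e.g., nearly collinear points).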
Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun
2016-01-01
Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem that the on-board attitude data processing of the Yaogan-24 remote sensing satellite is not accurate enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification against the digital orthophoto map (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach comprises three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, ensuring the quality of the observation data and the convergence and stability of the parameter estimation model. In addition, both Euler angles and quaternions can be used to build a mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared with image geometric processing based on the on-board attitude data, the accuracy of uncontrolled and relative geometric positioning of the imagery can be increased by about 50%. PMID:27483287
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-01-01
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill conditioned matrices. In this paper, a novel coordinate transformation method is proposed, not based on the equation solution but based on the geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the space geometry relation, the characteristic line scan is made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
Wolc, Anna; Stricker, Chris; Arango, Jesus; Settar, Petek; Fulton, Janet E; O'Sullivan, Neil P; Preisinger, Rudolf; Habier, David; Fernando, Rohan; Garrick, Dorian J; Lamont, Susan J; Dekkers, Jack C M
2011-01-21
Genomic selection involves breeding value estimation of selection candidates based on high-density SNP genotypes. To quantify the potential benefit of genomic selection, accuracies of estimated breeding values (EBV) obtained with different methods using pedigree or high-density SNP genotypes were evaluated and compared in a commercial layer chicken breeding line. The following traits were analyzed: egg production, egg weight, egg color, shell strength, age at sexual maturity, body weight, albumen height, and yolk weight. Predictions appropriate for early or late selection were compared. A total of 2,708 birds were genotyped for 23,356 segregating SNP, including 1,563 females with records. Phenotypes on relatives without genotypes were incorporated in the analysis (in total 13,049 production records). The data were analyzed with a Reduced Animal Model using a relationship matrix based on pedigree data or on marker genotypes and with a Bayesian method using model averaging. Using a validation set that consisted of individuals from the generation following training, these methods were compared by correlating EBV with phenotypes corrected for fixed effects, selecting the top 30 individuals based on EBV and evaluating their mean phenotype, and by regressing phenotypes on EBV. Using high-density SNP genotypes increased accuracies of EBV up to two-fold for selection at an early age and by up to 88% for selection at a later age. Accuracy increases at an early age can be mostly attributed to improved estimates of parental EBV for shell quality and egg production, while for other egg quality traits it is mostly due to improved estimates of Mendelian sampling effects. A relatively small number of markers was sufficient to explain most of the genetic variation for egg weight and body weight.
Huynh, Dep K; Toscano, Leanne; Phan, Vinh-An; Ow, Tsai-Wing; Schoeman, Mark; Nguyen, Nam Q
2017-06-01
This study aims to evaluate the role of unsedated, ultrathin disposable gastroscopy (TDG) against conventional gastroscopy (CG) in the screening and surveillance of gastroesophageal varices (GEVs) in patients with liver cirrhosis. Forty-eight patients (56.4 ± 1.3 years; 38 male, 10 female) with liver cirrhosis referred for screening (n = 12) or surveillance (n = 36) of GEVs were prospectively enrolled. Unsedated gastroscopy was initially performed with TDG, followed by CG with conscious sedation. The 2 gastroscopies were performed by different endoscopists blinded to the results of the previous examination. Video recordings of both gastroscopies were validated by an independent investigator in a random, blinded fashion. Endpoints were accuracy and interobserver agreement of detecting GEVs, safety, and potential cost saving. CG identified GEVs in 26 (54%) patients, 10 of whom (21%) had high-risk esophageal varices (HREV). Compared with CG, TDG had an accuracy of 92% for the detection of all GEVs, which increased to 100% for high-risk GEVs. The interobserver agreement for detecting all GEVs on TDG was 88% (κ = 0.74). This increased to 94% (κ = 0.82) for high-risk GEVs. There were no serious adverse events. Unsedated TDG is safe and has high diagnostic accuracy and interobserver reliability for the detection of GEVs. The use of clinic-based TDG would allow immediate determination of a follow-up plan, making it attractive for variceal screening and surveillance programs. (Clinical trial (ANZCTR) registration number: ACTRN12616001103459.). Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
Efficacy of High Frequency Ultrasound in Localization and Characterization of Orbital Lesions
Gurushankar, G; Bhimarao; Kadakola, Bindushree
2015-01-01
Background The complicated anatomy of the orbit and the wide spectrum of pathological conditions present a formidable challenge for early diagnosis, which is critical for management. Ultrasonography provides detailed cross-sectional anatomy of the entire globe with excellent topographic visualization and real-time display of the moving organ. Objectives of the study To evaluate the efficacy of high frequency ultrasound in localization of orbital diseases and to characterize various orbital pathologies sonologically. Materials and Methods One hundred eyes of 85 patients were examined with ultrasound using a linear high frequency probe (5 to 17 MHz) of the PHILIPS iU22 ultrasound system. Sonological diagnosis was made based on location, acoustic characteristics, kinetic properties and Doppler flow dynamics. Final diagnosis was made based on clinical & laboratory findings/higher cross-sectional imaging/surgery & histopathology (as applicable). Diagnostic accuracy of ultrasonography was evaluated and compared with the final diagnosis. Results The distinction between ocular and extraocular pathologies was made in 100% of cases. The overall sensitivity, specificity, NPV and accuracy of ultrasonography were 94.2%, 98.8%, 92.2% & 94.9% respectively for diagnosis of ocular pathologies and 94.2%, 99.2%, 95.9% & 95.2% respectively for extraocular pathologies. Conclusion Ultrasonography is a readily available, simple, cost-effective, non-ionizing and non-invasive modality with overall high diagnostic accuracy in localising and characterising orbital pathologies. It has higher spatial and temporal resolution compared with CT/MRI. However, CT/MRI may be indicated in certain cases for the evaluation of calcifications, bony involvement, extension to adjacent structures and intracranial extension. PMID:26500977
A Simple Czech and English Probabilistic Tagger: A Comparison.
ERIC Educational Resources Information Center
Hladka, Barbora; Hajic, Jan
An experiment compared the tagging of two languages: Czech, a highly inflected language with a high degree of ambiguity, and English. For Czech, the corpus was one gathered in the 1970s at the Czechoslovak Academy of Sciences; for English, it was the Wall Street Journal corpus. Results indicate 81.53 percent accuracy for Czech and 96.83 percent…
High altitude atmospheric modeling
NASA Technical Reports Server (NTRS)
Hedin, Alan E.
1988-01-01
Five empirical models were compared with 13 data sets, including both atmospheric drag-based data and mass spectrometer data. The most recently published model, MSIS-86, was found to be the best model overall with an accuracy around 15 percent. The excellent overall agreement of the mass spectrometer-based MSIS models with the drag data, including both the older data from orbital decay and the newer accelerometer data, suggests that the absolute calibration of the (ensemble of) mass spectrometers and the assumed drag coefficient in the atomic oxygen regime are consistent to 5 percent. This study illustrates a number of reasons for the current accuracy limit such as calibration accuracy and unmodeled trends. Nevertheless, the largest variations in total density in the thermosphere are accounted for, to a very high degree, by existing models. The greatest potential for improvements is in areas where we still have insufficient data (like the lower thermosphere or exosphere), where there are disagreements in technique (such as the exosphere) which can be resolved, or wherever generally more accurate measurements become available.
Research on Modeling of Propeller in a Turboprop Engine
NASA Astrophysics Data System (ADS)
Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong
2015-05-01
In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a real-time propeller model with high accuracy is required. A study is conducted to compare the real-time performance and precision of propeller models based on strip theory and on lifting surface theory. The modeling by strip theory focuses on three points: first, FLUENT is adopted to calculate the lift and drag coefficients of the propeller; next, a method is presented to calculate the induced velocity that occurs in the ground rig test; finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory; this approximation reduces computing time while retaining good accuracy. Comparison between the two modeling techniques shows that the strip-theory model, which has the advantage in both real-time performance and accuracy, can meet the requirement.
Motion direction estimation based on active RFID with changing environment
NASA Astrophysics Data System (ADS)
Jie, Wu; Minghua, Zhu; Wei, He
2018-05-01
The gate system estimates the direction of motion of RFID tag carriers as they pass through the gate. Normally, it is difficult to achieve and maintain high accuracy in estimating the motion direction of RFID tags because the received signal strength of a tag changes sharply with the changing electromagnetic environment. In this paper, a method of motion direction estimation for RFID tags is presented. To improve estimation accuracy, a machine learning algorithm is used to fit a function to the data received by readers deployed inside and outside the gate, respectively. The fitted data are then sampled to obtain a standard vector, which is compared with template vectors to produce the motion direction estimate. The corresponding template vector is then updated according to the surrounding environment. We conducted a simulation and an implementation of the proposed method, and the results show that it can achieve and maintain high accuracy under constantly changing environmental conditions.
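The template-comparison step can be sketched as a nearest-template match, here using cosine similarity over standardized RSSI vectors. This is a minimal illustration with made-up data and hypothetical direction labels, not the authors' implementation:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two RSSI feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def estimate_direction(standard_vec, templates):
    """Return the label of the template vector most similar to the
    standard vector sampled from the fitted RSSI curve."""
    return max(templates, key=lambda lbl: cosine(standard_vec, templates[lbl]))
```

In the abstract's scheme, the winning template would afterwards be updated (e.g., by a running average) to track the changing environment.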
Kendall, Bradley; Bellovary, Bryanne; Gothe, Neha P
2018-06-04
The purpose of this study was to assess the accuracy of energy expenditure (EE) estimation and step tracking of six activity monitors (AMs) against indirect calorimetry and hand-counted steps, and to compare AM accuracy between high- and low-fit individuals in order to assess the impact of exercise intensity. Fifty participants wore the Basis watch, Fitbit Flex, Polar FT7, Jawbone, Omron pedometer, and Actigraph during a maximal graded treadmill test. Correlations, intra-class correlations, and t-tests determined accuracy and agreement between AMs and criterion measures. The results indicate that the Omron, Fitbit, and Actigraph were accurate for measuring steps, while the Basis and Jawbone significantly underestimated steps. All AMs were significantly correlated with indirect calorimetry; however, no device showed agreement (p < .05). When comparing low- and high-fit groups, correlations between AMs and indirect calorimetry improved for the low-fit group, suggesting AMs may be better at measuring EE at lower exercise intensities.
Mapping the Daily Progression of Large Wildland Fires Using MODIS Active Fire Data
NASA Technical Reports Server (NTRS)
Veraverbeke, Sander; Sedano, Fernando; Hook, Simon J.; Randerson, James T.; Jin, Yufang; Rogers, Brendan
2013-01-01
High temporal resolution information on burned area is a prerequisite for incorporating bottom-up estimates of wildland fire emissions in regional air transport models and for improving models of fire behavior. We used the Moderate Resolution Imaging Spectroradiometer (MODIS) active fire product (MO(Y)D14) as input to a kriging interpolation to derive continuous maps of the evolution of nine large wildland fires. For each fire, local input parameters for the kriging model were defined using variogram analysis. The accuracy of the kriging model was assessed using high resolution daily fire perimeter data available from the U.S. Forest Service. We also assessed the temporal reporting accuracy of the MODIS burned area products (MCD45A1 and MCD64A1). Averaged over the nine fires, the kriging method correctly mapped 73% of the pixels within the accuracy of a single day, compared to 33% for MCD45A1 and 53% for MCD64A1.
Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision
Yang, Bingwei; Xie, Xinhao; Li, Duan
2018-01-01
Time-of-flight (TOF) light detection and ranging (LiDAR) calculates distance from the flight time between start and stop signals. In our lab-built LiDAR, two ranging systems measure this flight time: a time-to-digital converter (TDC), which counts the time between trigger signals, and an analog-to-digital converter (ADC), which processes the sampled start/stop pulse waveforms for time estimation. We study the influence of waveform characteristics on the range accuracy and precision of the two ranging systems. Comparing waveform-based ranging (WR) with analog discrete-return ranging (AR), a peak detection method (WR-PK) shows the best ranging performance because of its short execution time, high ranging accuracy, and stable precision. Analysis with the maximal information coefficient (MIC), a novel statistical method, shows that WR-PK precision is highly linearly related to the standard deviation of the received pulse width. Thus, keeping the received pulse width as stable as possible when measuring a constant distance can improve ranging precision. PMID:29642639
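A WR-PK-style range estimate from a sampled return pulse can be sketched as peak detection with parabolic sub-sample interpolation. This is an illustrative sketch; the sampling rate and pulse shape below are assumed, not taken from the paper:

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def peak_range(samples, fs):
    """Range from a sampled return pulse by peak detection.
    Sub-sample peak position comes from a parabola fitted to the three
    samples around the maximum; fs is the ADC sampling rate (Hz)."""
    i = int(np.argmax(samples))
    y0, y1, y2 = samples[i - 1], samples[i], samples[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # vertex offset in samples
    t = (i + delta) / fs                             # two-way flight time (s)
    return C * t / 2.0                               # one-way distance (m)
```

For a Gaussian-shaped return, the parabolic fit near the vertex recovers the peak time to a small fraction of a sample period.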
Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images
NASA Astrophysics Data System (ADS)
Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Moon, Kiyoung
2010-06-01
Iris recognition is used for information security with a high confidence level because it shows outstanding recognition accuracy by using human iris patterns with high degrees of freedom. However, iris recognition accuracy can be reduced by noisy iris images with optical and motion blurring. We propose a new iris recognition method based on the fuzzy difference-of-Gaussian (DOG) for noisy iris images. This study is novel in three ways compared to previous works: (1) The proposed method extracts iris feature values using the DOG method, which is robust to local variations of illumination and shows fine texture information, including various frequency components. (2) When determining iris binary codes, image noises that cause the quantization error of the feature values are reduced with the fuzzy membership function. (3) The optimal parameters of the DOG filter and the fuzzy membership function are determined in terms of iris recognition accuracy. Experimental results showed that the performance of the proposed method was better than that of previous methods for noisy iris images.
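The DOG step amounts to subtracting two Gaussian-smoothed versions of the input, leaving a band-pass response. A minimal 1-D sketch follows (the paper applies DOG to 2-D iris images, and the sigma values here are assumed):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 4 sigma."""
    r = max(1, int(4 * sigma))
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def dog(signal, sigma1, sigma2):
    """Difference of Gaussians: band-pass response emphasizing structure
    between the two smoothing scales."""
    g1 = np.convolve(signal, gaussian_kernel(sigma1), mode="same")
    g2 = np.convolve(signal, gaussian_kernel(sigma2), mode="same")
    return g1 - g2
```

A flat region yields a near-zero DOG response, which is why the filter is robust to local variations of illumination, as the abstract notes.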
ExpertEyes: open-source, high-definition eyetracking.
Parada, Francisco J; Wyatte, Dean; Yu, Chen; Akavipat, Ruj; Emerick, Brandi; Busey, Thomas
2015-03-01
ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.
Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach
NASA Astrophysics Data System (ADS)
Liu, Wenyang; Sawant, Amit; Ruan, Dan
2016-07-01
The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-squared-error. Our proposed method achieved consistent higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
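The PCA-based baseline the authors compare against can be sketched as: project the state history onto a low-dimensional subspace, extrapolate there, and map back. This is an illustrative sketch on synthetic data; the paper's method replaces the linear projection with kernel PCA plus pre-image estimation:

```python
import numpy as np

def pca_predict_next(X, k=2):
    """Predict the next high-dimensional state from a history X (T x D):
    project onto k principal components, linearly extrapolate the
    low-dimensional trajectory one step ahead, and reconstruct."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]                      # k principal directions (k x D)
    Z = Xc @ W.T                    # low-dimensional trajectory (T x k)
    z_next = 2.0 * Z[-1] - Z[-2]    # linear extrapolation in the subspace
    return mu + z_next @ W
```

Prediction in the reduced space sidesteps the curse of dimensionality the abstract describes; accuracy then hinges on how well the subspace (or manifold) captures the motion.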
Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2013-03-01
Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures, by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state of the art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine features [7] were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10] and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known; dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6mm vs. 8mm after the VC camera traveled 110mm. Our approach was computationally more efficient, averaging 7.2 sec. vs. 38 sec. per frame. SIFT and Harris affine features resulted in tracking errors of up to 70mm, while our sparse optical flow error was 6mm.
The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
Lidestam, Björn; Hällgren, Mathias; Rönnberg, Jerker
2014-01-01
This study compared elderly hearing aid (EHA) users and elderly normal-hearing (ENH) individuals on identification of auditory speech stimuli (consonants, words, and final word in sentences) that were different when considering their linguistic properties. We measured the accuracy with which the target speech stimuli were identified, as well as the isolation points (IPs: the shortest duration, from onset, required to correctly identify the speech target). The relationships between working memory capacity, the IPs, and speech accuracy were also measured. Twenty-four EHA users (with mild to moderate hearing impairment) and 24 ENH individuals participated in the present study. Despite the use of their regular hearing aids, the EHA users had delayed IPs and were less accurate in identifying consonants and words compared with the ENH individuals. The EHA users also had delayed IPs for final word identification in sentences with lower predictability; however, no significant between-group difference in accuracy was observed. Finally, there were no significant between-group differences in terms of IPs or accuracy for final word identification in highly predictable sentences. Our results also showed that, among EHA users, greater working memory capacity was associated with earlier IPs and improved accuracy in consonant and word identification. Together, our findings demonstrate that the gated speech perception ability of EHA users was not at the level of ENH individuals, in terms of IPs and accuracy. In addition, gated speech perception was more cognitively demanding for EHA users than for ENH individuals in the absence of semantic context. PMID:25085610
Accuracy of five intraoral scanners compared to indirect digitalization.
Güth, Jan-Frederik; Runkel, Cornelius; Beuer, Florian; Stimmelmayr, Michael; Edelhoff, Daniel; Keul, Christine
2017-06-01
Direct and indirect digitalization offer two options for computer-aided design (CAD)/computer-aided manufacturing (CAM)-generated restorations. The aim of this study was to evaluate the accuracy of different intraoral scanners and compare them with the process of indirect digitalization. A titanium testing model was directly digitized 12 times with each intraoral scanner: (1) CS 3500 (CS), (2) Zfx Intrascan (ZFX), (3) CEREC AC Bluecam (BLU), (4) CEREC AC Omnicam (OC) and (5) True Definition (TD). As control, 12 polyether impressions were taken and the corresponding plaster casts were digitized indirectly with the D-810 laboratory scanner (CON). The accuracy (trueness/precision) of the datasets was evaluated with analysis software (Geomagic Qualify 12.1) using a "best fit alignment" of the datasets to a highly accurate reference dataset of the testing model obtained from industrial computed tomography. Direct digitalization using the TD showed the significantly highest overall "trueness", followed by CS; both performed better than CON. BLU, ZFX and OC showed greater deviations from the reference dataset than CON. Regarding overall "precision", the CS 3500 intraoral scanner and the True Definition showed the best performance. CON, BLU and OC resulted in significantly higher precision than ZFX. Within the limitations of this in vitro study, the accuracy of the obtained datasets depended on the scanning system. Direct digitalization was not superior to indirect digitalization for all tested systems. Regarding accuracy, all tested intraoral scanning technologies seem able to reproduce a single quadrant within clinically acceptable accuracy. However, differences were detected between the tested systems.
Comparing ordinary kriging and inverse distance weighting for soil As pollution in Beijing.
Qiao, Pengwei; Lei, Mei; Yang, Sucai; Yang, Jun; Guo, Guanghui; Zhou, Xiaoyong
2018-06-01
Spatial interpolation is the basis of soil heavy metal pollution assessment and remediation, and the choice of interpolation method must reflect the specific research purpose and the characteristics of the study object; existing evaluation indices for interpolation accuracy alone do not capture the actual situation. In this paper, arsenic (As) pollution in soils of Beijing was taken as an example. The prediction accuracy of ordinary kriging (OK) and inverse distance weighting (IDW) was evaluated based on cross-validation results and the spatial distribution characteristics of influencing factors. The results showed that, under a given spatial correlation, the cross-validation results of OK and IDW at each soil point, and their prediction accuracy for the overall spatial trend, are similar. However, OK predicts the maximum and minimum less accurately than IDW and identifies fewer high-pollution areas; OK has difficulty identifying the high-pollution areas fully, which shows that its smoothing effect is pronounced. In addition, as the spatial correlation of the As concentration increases, the cross-validation errors of OK and IDW decrease, and the high-pollution area identified by OK approaches the IDW result, identifying high-pollution areas more comprehensively. Nevertheless, because semivariogram construction in OK is more subjective and requires a larger number of soil samples, IDW is more suitable for spatial prediction of heavy metal pollution in soils.
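The IDW predictor used in the comparison can be sketched in a few lines. This is a minimal illustration with made-up sample points; the distance power of 2 is a common default, not a value from the study:

```python
import numpy as np

def idw_predict(xy, z, xy_query, power=2.0):
    """Inverse-distance-weighted prediction of concentrations z (sampled at
    locations xy, N x 2) at query locations (M x 2). A query that coincides
    with a sample point effectively returns that sample's value."""
    d = np.linalg.norm(xy_query[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power   # clamp avoids division by zero
    return (w * z).sum(axis=1) / w.sum(axis=1)
```

Because each prediction is a weighted average of nearby samples, IDW (like OK) smooths, but it honors local extremes more closely, consistent with the abstract's finding on maxima and minima.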
Silva, Luís; Vaz, João Rocha; Castro, Maria António; Serranho, Pedro; Cabri, Jan; Pezarat-Correia, Pedro
2015-08-01
The quantification of non-linear characteristics of electromyography (EMG) should contain information allowing discrimination of neuromuscular strategies during dynamic skills. There is a lack of studies on muscle coordination under motor constraints during dynamic contractions. In golf, both handicap (Hc) and low back pain (LBP) are the main factors associated with the occurrence of injuries. The aim of this study was to analyze the accuracy of support vector machine (SVM) EMG-based classification in discriminating Hc (low vs. high handicap) and LBP (with vs. without LBP) in the main phases of the golf swing. For this purpose, recurrence quantification analysis (RQA) features of the trunk and lower limb muscles were used to feed an SVM classifier. The recurrence rate (RR) and the ratio between determinism (DET) and RR showed high discriminant power. The Hc accuracy for the swing, backswing, and downswing was 94.4±2.7%, 97.1±2.3%, and 95.3±2.6%, respectively. For LBP, the accuracy was 96.9±3.8% for the swing and 99.7±0.4% for the backswing. External oblique (EO), biceps femoris (BF), semitendinosus (ST) and rectus femoris (RF) showed high accuracy depending on laterality within the phase. RQA features and SVM showed high muscle discriminant capacity within swing phases by Hc and by LBP. Golfers with low back pain showed different neuromuscular coordination strategies compared with asymptomatic golfers. Copyright © 2015 Elsevier Ltd. All rights reserved.
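Recurrence rate, one of the RQA features fed to the classifier, can be sketched for a 1-D series as the density of the recurrence plot. This is an illustrative definition on toy data; the study computes RQA on EMG signals with its own embedding and threshold choices:

```python
import numpy as np

def recurrence_rate(x, eps):
    """Fraction of sample pairs (i, j) whose values fall within eps of each
    other -- i.e., the density of True entries in the recurrence matrix."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix
    return float((d < eps).mean())
```

Higher RR indicates a more repetitive (more deterministic-looking) signal, which is why RR and DET/RR carry discriminative information about coordination strategies.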
A new optical head tracing reflected light for nanoprofiler
NASA Astrophysics Data System (ADS)
Okuda, K.; Okita, K.; Tokuta, Y.; Kitayama, T.; Nakano, M.; Kudo, R.; Yamamura, K.; Endo, K.
2014-09-01
High-accuracy optical elements are applied in various fields. For example, ultraprecise aspherical mirrors are necessary for developing third-generation synchrotron radiation and XFEL (X-ray Free Electron Laser) sources. Fabricating such elements requires measuring aspherical mirrors with high accuracy, but no existing measurement method has met these demands simultaneously. We have therefore developed a nanoprofiler that can directly measure arbitrary surface figures with high accuracy. The nanoprofiler obtains the normal vector and the coordinates of each measurement point using a laser and a quadrant photodiode (QPD) as the detector; the three-dimensional figure is then calculated from the normal vectors and their coordinates. During a measurement, the nanoprofiler numerically controls its five motion axes, based on the sample's design formula, so that the reflected light enters the center of the QPD. We measured a concave spherical mirror with a radius of curvature of 400 mm by the deflection method, which calculates the figure error from the QPD output, and compared the results with those from a Fizeau interferometer. The profiles were consistent within the range of system error. However, the deflection method cannot neglect the error caused by the spatial non-uniformity of the QPD's sensitivity. To overcome this, we have devised the zero method, which moves the QPD with a piezoelectric motion stage and calculates the figure error from the displacement.
Properties of young massive clusters obtained with different massive-star evolutionary models
NASA Astrophysics Data System (ADS)
Wofford, Aida; Charlot, Stéphane
We undertake a comprehensive comparative test of seven widely used spectral synthesis models using multi-band HST photometry of a sample of eight young massive clusters (YMCs) in two galaxies. We provide a first quantitative estimate of the accuracies and uncertainties of the new models, show the good progress of models in fitting high-quality observations, and highlight the need for further comprehensive comparative tests.
Template-guided vs. non-guided drilling in site preparation of dental implants.
Scherer, Uta; Stoetzer, Marcus; Ruecker, Martin; Gellrich, Nils-Claudius; von See, Constantin
2015-07-01
Clinical success of oral implants is related to primary stability and osseointegration, parameters that depend on delicate surgical technique. We herein studied whether template-guided drilling has a significant influence on drillhole diameter and accuracy in an in vitro model. Fresh cadaveric porcine mandibles were used for the drilling experiments of four experimental groups. Each group consisted of three operators, comparing template-guided drilling with the free-handed procedure. Operators without surgical experience were grouped together, contrasting with highly experienced oral surgeons in the other groups. A total of 180 drilling actions were performed, and diameters were recorded at multiple depth levels with a precision measuring instrument. Template-guided drilling improved accuracy highly significantly compared with free-handed drilling (p ≤ 0.001), and the inaccuracy of free-handed drilling increased with measurement depth. The uniformity of template-guided drillholes was significantly greater than that of unguided drilling performed by highly experienced oral surgeons (p ≤ 0.001). Template-guided drilling thus leads to significantly enhanced accuracy compared with free-handed drilling, irrespective of the clinical experience of the operator, and produces a more predictable clinical diameter. This implies that any set of instruments must be carefully chosen to match the specific implant system. The current in vitro study indicates an improvement in implant bed preparation but needs to be confirmed in clinical studies.
Indoor Pedestrian Localization Using iBeacon and Improved Kalman Filter.
Sung, Kwangjae; Lee, Dong Kyu 'Roy'; Kim, Hwangnam
2018-05-26
Reliable and accurate indoor pedestrian positioning is one of the biggest challenges for location-based systems and applications. Most pedestrian positioning systems suffer drift error and large bias due to low-cost inertial sensors and the random motions of human beings, as well as unpredictable and time-varying radio-frequency (RF) signals used for position determination. To solve this problem, many indoor positioning approaches have recently been proposed that integrate the user's motion estimated by dead reckoning (DR) and the location data obtained by RSS fingerprinting through a Bayesian filter, such as the Kalman filter (KF), unscented Kalman filter (UKF), or particle filter (PF), to achieve higher positioning accuracy in indoor environments. Among Bayesian filtering methods, the PF is the most popular integrating approach and can provide the best localization performance; however, since the PF uses a large number of particles to reach that performance, it can incur considerable computational cost. This paper presents an indoor positioning system implemented on a smartphone, which uses simple dead reckoning (DR), RSS fingerprinting based on iBeacon and a machine learning scheme, and an improved KF. The core of the system is the enhanced KF called a sigma-point Kalman particle filter (SKPF), which localizes the user by leveraging both the unscented transform of the UKF and the weighting method of the PF. The SKPF algorithm proposed in this study fuses positional data obtained from both DR and fingerprinting, with their uncertainties, to provide enhanced positioning accuracy. The SKPF achieves better positioning accuracy than the KF and UKF, performance comparable to the PF, and higher computational efficiency than the PF. iBeacon in our positioning system is used for energy-efficient localization and RSS fingerprinting.
We aim to design a localization scheme that realizes high positioning accuracy, computational efficiency, and energy efficiency indoors through the SKPF and iBeacon. Empirical experiments in real environments show that the use of the SKPF algorithm and iBeacon in our indoor localization scheme achieves very satisfactory performance in terms of localization accuracy, computational cost, and energy efficiency.
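The DR/fingerprint fusion idea behind the SKPF can be illustrated with a plain linear Kalman filter (a deliberately simplified stand-in: the actual SKPF uses sigma points and particle-style weighting, and the function name and noise values below are invented for this sketch):

```python
import numpy as np

def kf_fuse(steps, fixes, q=0.05, r=4.0):
    """Minimal linear Kalman filter fusing dead-reckoning displacements
    (prediction) with RSS-fingerprint position fixes (correction)."""
    x = np.array(fixes[0], float)          # state: 2-D position
    P = np.eye(2) * r                      # state covariance
    Q = np.eye(2) * q                      # process (DR drift) noise
    R = np.eye(2) * r                      # fingerprint measurement noise
    track = [x.copy()]
    for step, z in zip(steps, fixes[1:]):
        # predict: advance the state by the DR-estimated displacement
        x = x + np.asarray(step, float)
        P = P + Q
        # update: correct the prediction with the fingerprint fix
        K = P @ np.linalg.inv(P + R)       # Kalman gain
        x = x + K @ (np.asarray(z, float) - x)
        P = (np.eye(2) - K) @ P
        track.append(x.copy())
    return np.array(track)
```

The gain K balances the two sources: small R (trusted fingerprints) pulls the track toward the fixes, small Q (trusted DR) keeps it on the predicted path.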
Marino, Miguel; Li, Yi; Rueschman, Michael N.; Winkelman, J. W.; Ellenbogen, J. M.; Solet, J. M.; Dulin, Hilary; Berkman, Lisa F.; Buxton, Orfeu M.
2013-01-01
Objectives: We validated actigraphy for detecting sleep and wakefulness versus polysomnography (PSG). Design: Actigraphy and polysomnography were simultaneously collected during sleep laboratory admissions. All studies involved 8.5 h time in bed, except for sleep restriction studies. Epochs (30-sec; n = 232,849) were characterized for sensitivity (actigraphy = sleep when PSG = sleep), specificity (actigraphy = wake when PSG = wake), and accuracy (total proportion correct); the amount of wakefulness after sleep onset (WASO) was also assessed. A generalized estimating equation (GEE) model included age, gender, insomnia diagnosis, and daytime/nighttime sleep timing factors. Setting: Controlled sleep laboratory conditions. Participants: Young and older adults, healthy or chronic primary insomniac (PI) patients, and daytime sleep of 23 night-workers (n = 77, age 35.0 ± 12.5, 30F, mean nights = 3.2). Interventions: N/A. Measurements and Results: Overall, sensitivity (0.965) and accuracy (0.863) were high, whereas specificity (0.329) was low; each was only slightly modified by gender, insomnia, day/night sleep timing (magnitude of change < 0.04). Increasing age slightly reduced specificity. Mean WASO/night was 49.1 min by PSG compared to 36.8 min/night by actigraphy (β = 0.81; CI = 0.42, 1.21), unbiased when WASO < 30 min/night, and overestimated when WASO > 30 min/night. Conclusions: This validation quantifies strengths and weaknesses of actigraphy as a tool measuring sleep in clinical and population studies. Overall, the participant-specific accuracy is relatively high, and for most participants, above 80%. We validate this finding across multiple nights and a variety of adults across much of the young to midlife years, in both men and women, in those with and without insomnia, and in 77 participants. 
We conclude that actigraphy is overall a useful and valid means for estimating total sleep time and wakefulness after sleep onset in field and workplace studies, with some limitations in specificity. Citation: Marino M; Li Y; Rueschman MN; Winkelman JW; Ellenbogen JM; Solet JM; Dulin H; Berkman LF; Buxton OM. Measuring sleep: accuracy, sensitivity, and specificity of wrist actigraphy compared to polysomnography. SLEEP 2013;36(11):1747-1755. PMID:24179309
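The epoch-wise agreement metrics reported above can be computed directly from paired scorings (a minimal sketch; the array names and the 1 = sleep / 0 = wake coding are assumptions of the sketch):

```python
import numpy as np

def epoch_agreement(actigraphy, psg):
    """Per-epoch sensitivity, specificity, and accuracy, scoring
    1 = sleep and 0 = wake, with PSG as the reference standard."""
    a = np.asarray(actigraphy, bool)
    p = np.asarray(psg, bool)
    sens = np.mean(a[p])       # actigraphy says sleep when PSG says sleep
    spec = np.mean(~a[~p])     # actigraphy says wake when PSG says wake
    acc = np.mean(a == p)      # total proportion of epochs correct
    return float(sens), float(spec), float(acc)
```

The pattern in the study, high sensitivity with low specificity, arises because actigraphy scores most epochs as sleep, so wake epochs within the sleep period are often missed.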
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Yick Wing, E-mail: mpr@hksh.com; Wong, Wing Kei Rebecca; Yu, Siu Ki
2012-01-01
To evaluate the accuracy in detection of small and low-contrast regions using a high-definition diagnostic computed tomography (CT) scanner compared with a radiotherapy CT simulation scanner. A custom-made phantom with cylindrical holes of diameters ranging from 2-9 mm was filled with 9 different concentrations of contrast solution. The phantom was scanned using a 16-slice multidetector CT simulation scanner (LightSpeed RT16, General Electric Healthcare, Milwaukee, WI) and a 64-slice high-definition diagnostic CT scanner (Discovery CT750 HD, General Electric Healthcare). The low-contrast regions of interest (ROIs) were delineated automatically upon the full width at half maximum of the CT number profile in Hounsfield units on a treatment planning workstation. Two conformal indexes, CI_in and CI_out, were calculated to represent the percentage errors of underestimation and overestimation in the automated contours compared with their actual sizes. Summarizing the conformal indexes over the different sizes and contrast concentrations, the means of CI_in and CI_out for the CT simulation scanner were 33.7% and 60.9%, respectively, and 10.5% and 41.5% were found for the diagnostic CT scanner. The mean differences between the 2 scanners' CI_in and CI_out were shown to be significant with p < 0.001. A descending trend of the index values was observed as the ROI size increases for both scanners, which indicates improved accuracy as ROI size increases, whereas no observable trend was found in the contouring accuracy with respect to the contrast levels in this study. Images acquired by the diagnostic CT scanner allow higher accuracy in size estimation compared with the CT simulation scanner in this study.
We recommend using a diagnostic CT scanner to scan patients with small lesions (<1 cm in diameter) for radiotherapy treatment planning, especially those pending stereotactic radiosurgery, in which accurate delineation of small-sized, low-contrast regions is important for dose calculation.
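The automated delineation step, taking the full width at half maximum (FWHM) of a CT-number profile, can be sketched as follows (illustrative only; the baseline handling and the linear interpolation of the crossings are assumptions of this sketch, not details from the study):

```python
import numpy as np

def fwhm_width(profile, spacing=1.0):
    """Width of a 1-D CT-number profile at half its maximum above baseline,
    with linearly interpolated half-maximum crossings on each flank."""
    y = np.asarray(profile, float)
    base = min(y[0], y[-1])                     # background HU level
    half = base + (y.max() - base) / 2.0        # half-maximum threshold
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    left = i - (y[i] - half) / (y[i] - y[i - 1]) if i > 0 else float(i)
    right = j + (y[j] - half) / (y[j] - y[j + 1]) if j < len(y) - 1 else float(j)
    return (right - left) * spacing
```

On a coarse grid the interpolated crossings are poorly constrained for narrow profiles, which is one way to see why the smallest ROIs carry the largest conformal-index errors.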
Evaluation of beef eating quality by Irish consumers.
McCarthy, S N; Henchion, M; White, A; Brandon, K; Allen, P
2017-10-01
A consumer's decision to purchase beef is strongly linked to its sensory properties and consistent eating quality is one of the most important attributes. Consumer taste panels were held according to the Meat Standards Australia guidelines and consumers scored beef according to its palatability attributes and completed a socio-demographic questionnaire. Consumers were able to distinguish between beef quality on a scale from unsatisfactory to premium with high accuracy. Premium cuts of beef scored significantly higher on all of the scales compared to poorer quality cuts. Men rated grilled beef higher on juiciness and flavour scales compared to women. Being the main purchaser of beef had no impact on rating scores. Overall the results show that consumers can judge eating quality with high accuracy. Further research is needed to determine how best to communicate inherent benefits that are not visible into extrinsic eating quality indicators, to provide the consumer with consistent indications of quality at the point of purchase. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pfammatter, Sibylle; Bonneil, Eric; Thibault, Pierre
2016-12-02
Quantitative proteomics using isobaric reagent tandem mass tags (TMT) or isobaric tags for relative and absolute quantitation (iTRAQ) provides a convenient approach to compare changes in protein abundance across multiple samples. However, the analysis of complex protein digests by isobaric labeling can be undermined by the relatively large proportion of co-selected peptide ions, which leads to distorted reporter ion ratios and affects the accuracy and precision of quantitative measurements. Here, we investigated the use of high-field asymmetric waveform ion mobility spectrometry (FAIMS) in proteomic experiments to reduce sample complexity and improve protein quantification using TMT isobaric labeling. LC-FAIMS-MS/MS analyses of human and yeast protein digests led to significant reductions in interfering ions, which increased the number of quantifiable peptides by up to 68% while significantly improving the accuracy of abundance measurements compared with conventional LC-MS/MS. The improvement in quantitative measurements using FAIMS is further demonstrated for the temporal profiling of protein abundance in HEK293 cells following heat shock treatment.
NASA Astrophysics Data System (ADS)
Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.
2017-12-01
Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To acquire fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (tesseroids), whose gravity fields are generally calculated numerically. One commonly used numerical method is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast, high-accuracy 3D GLQ integration based on the equivalence of kernel matrices, adaptive discretization, and parallelization using OpenMP. The kernel-matrix equivalence strategy increases efficiency and reduces memory consumption by calculating and storing identical matrix elements in each kernel matrix only once, while the adaptive discretization strategy improves accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method lacking these optimizations, and high accuracy is guaranteed no matter how close the computation points are to the source region. In addition, the algorithm dramatically reduces the memory requirement, by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
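The core 3D GLQ step can be sketched for a Cartesian box (in the paper the integration is over spherical prisms with gravitational kernels; here a generic integrand stands in, and the function name and fixed quadrature order are choices of this sketch):

```python
import numpy as np

def glq3d(f, bounds, order=4):
    """3-D Gauss-Legendre quadrature of f(x, y, z) over a box.
    bounds = ((x0, x1), (y0, y1), (z0, z1))."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    # map the [-1, 1] reference nodes to each integration interval
    axes, jac = [], 1.0
    for lo, hi in bounds:
        axes.append(0.5 * (hi - lo) * nodes + 0.5 * (hi + lo))
        jac *= 0.5 * (hi - lo)
    total = 0.0
    for wi, xi in zip(weights, axes[0]):
        for wj, yj in zip(weights, axes[1]):
            for wk, zk in zip(weights, axes[2]):
                total += wi * wj * wk * f(xi, yj, zk)
    return jac * total
```

The kernel evaluations f(xi, yj, zk) repeat across cells of a regular grid, which is exactly the redundancy the paper's kernel-matrix equivalence strategy exploits.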
Hwang, Sung Ho; Ham, Soo-Youn; Kang, Eun-Young; Lee, Ki Yeol
2015-01-01
Objective To evaluate the influence of high-pitch mode (HPM) in dual-source computed tomography (DSCT) on the accuracy of three-dimensional (3D) volumetry for solid pulmonary nodules. Materials and Methods A lung phantom implanted with 45 solid pulmonary nodules (n = 15 for each of 4-mm, 6-mm, and 8-mm diameters) was scanned twice, first in conventional pitch mode (CPM) and then in HPM using DSCT. The relative percentage volume errors (RPEs) of 3D volumetry were compared between the HPM and CPM. In addition, the intermode volume variability (IVV) of 3D volumetry was calculated. Results In the measurement of the 6-mm and 8-mm nodules, there was no significant difference in RPE (p > 0.05, respectively) between the CPM and HPM (IVVs of 1.2 ± 0.9% and 1.7 ± 1.5%, respectively). In the measurement of the 4-mm nodules, the mean RPE in the HPM (35.1 ± 7.4%) was significantly greater (p < 0.01) than that in the CPM (18.4 ± 5.3%), with an IVV of 13.1 ± 6.6%. However, the IVVs were in an acceptable range (< 25%), regardless of nodule size. Conclusion The accuracy of 3D volumetry with HPM for solid pulmonary nodules is comparable to that with CPM. However, the use of HPM may adversely affect the accuracy of 3D volumetry for smaller (< 5 mm in diameter) nodules. PMID:25995695
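The two error metrics, RPE and IVV, can be written down directly (a sketch; the exact normalization of IVV is not stated in the abstract, so the mean-of-the-two-modes denominator below is an assumption):

```python
def relative_percent_error(v_measured, v_true):
    """RPE: signed percentage error of a measured nodule volume."""
    return 100.0 * (v_measured - v_true) / v_true

def intermode_variability(v_cpm, v_hpm):
    """IVV: absolute CPM-vs-HPM volume difference as a percentage
    of the mean of the two measurements (normalization assumed)."""
    return 100.0 * abs(v_cpm - v_hpm) / ((v_cpm + v_hpm) / 2.0)
```

For a 4-mm sphere the true volume is 4/3·π·2³ ≈ 33.5 mm³, so even sub-millimeter segmentation errors translate into large RPE values, consistent with the poorer results for the smallest nodules.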
NASA Astrophysics Data System (ADS)
Wang, Tao; Wang, Guilin; Zhu, Dengchao; Li, Shengyi
2015-02-01
To meet aerodynamic requirements, infrared domes and windows with conformal, thin-walled structures are the development trend for future high-speed aircraft. However, these parts usually have low stiffness, the cutting force changes with axial position, and it is very difficult to meet the shape-accuracy requirement in a single machining pass. Therefore, on-machine measurement and compensating turning are used to control the shape errors caused by the fluctuation of the cutting force and the change of stiffness. In this paper, a contact measuring system with five degrees of freedom, built on an ultraprecision diamond lathe, is developed to achieve high-accuracy on-machine measurement of conformal thin-walled parts. For high-gradient surfaces, an optimizing algorithm for the distribution of measuring points is designed using a data-screening method. The influence of sampling frequency on measuring errors is analyzed and the best sampling frequency is found with a planning algorithm; the effects of environmental factors and the fitting errors are kept within a low range, and the measuring accuracy of the conformal dome is greatly improved during on-machine measurement. For an MgF2 conformal dome with a high gradient, compensating turning is implemented using the designed on-machine measuring algorithm. The shape error is less than PV 0.8 μm, greatly improved compared with PV 3 μm before compensating turning, which verifies the correctness of the measuring algorithm.
NASA Astrophysics Data System (ADS)
Kankare, Ville; Vauhkonen, Jari; Tanhuanpää, Topi; Holopainen, Markus; Vastaranta, Mikko; Joensuu, Marianna; Krooks, Anssi; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto
2014-11-01
Detailed information about timber assortments and diameter distributions is required in forest management. Forest owners can make better decisions concerning the timing of timber sales and forest companies can utilize more detailed information to optimize their wood supply chain from forest to factory. The objective here was to compare the accuracies of high-density laser scanning techniques for the estimation of tree-level diameter distribution and timber assortments. We also introduce a method that utilizes a combination of airborne and terrestrial laser scanning in timber assortment estimation. The study was conducted in Evo, Finland. Harvester measurements were used as a reference for 144 trees within a single clear-cut stand. The results showed that accurate tree-level timber assortments and diameter distributions can be obtained, using terrestrial laser scanning (TLS) or a combination of TLS and airborne laser scanning (ALS). Saw log volumes were estimated with higher accuracy than pulpwood volumes. The saw log volumes were estimated with relative root-mean-squared errors of 17.5% and 16.8% with TLS and a combination of TLS and ALS, respectively. The respective accuracies for pulpwood were 60.1% and 59.3%. The differences in the bucking method used also caused some large errors. In addition, tree quality factors highly affected the bucking accuracy, especially with pulpwood volume.
The Evaluation of GPS techniques for UAV-based Photogrammetry in Urban Area
NASA Astrophysics Data System (ADS)
Yeh, M. L.; Chou, Y. T.; Yang, L. S.
2016-06-01
The efficiency and high mobility of Unmanned Aerial Vehicles (UAVs) have made them essential to aerial-photography-assisted survey and mapping, especially for urban land use and land cover, which change often and require UAVs to obtain new terrain data and capture the changes in land use. This study collected image data and three-dimensional ground control points in the Taichung city area with a UAV, a general-purpose camera, and Real-Time Kinematic (RTK) positioning with accuracy down to the centimeter. The study area is an ecological park with low topography that serves the city as a detention basin. A digital surface model and high-resolution orthophotos were built with Agisoft PhotoScan. Two conditions, with and without ground control points, were compared for the accuracy of the resulting digital surface models. According to the check-point deviation estimates, the model without ground control points has an average two-dimensional error of up to 40 cm and an altitude error within one meter; this GCP-free RTK-airborne approach thus produces centimeter-to-decimeter-level accuracy with low risk to the UAS operators. With ground control points, the accuracy of the x, y, and z coordinates improved by 54.62%, 49.07%, and 87.74%, respectively, with the altitude accuracy improving the most.
Thomas, Cibu; Ye, Frank Q; Irfanoglu, M Okan; Modi, Pooja; Saleem, Kadharbatcha S; Leopold, David A; Pierpaoli, Carlo
2014-11-18
Tractography based on diffusion-weighted MRI (DWI) is widely used for mapping the structural connections of the human brain. Its accuracy is known to be limited by technical factors affecting in vivo data acquisition, such as noise, artifacts, and data undersampling resulting from scan time constraints. It generally is assumed that improvements in data quality and implementation of sophisticated tractography methods will lead to increasingly accurate maps of human anatomical connections. However, assessing the anatomical accuracy of DWI tractography is difficult because of the lack of independent knowledge of the true anatomical connections in humans. Here we investigate the future prospects of DWI-based connectional imaging by applying advanced tractography methods to an ex vivo DWI dataset of the macaque brain. The results of different tractography methods were compared with maps of known axonal projections from previous tracer studies in the macaque. Despite the exceptional quality of the DWI data, none of the methods demonstrated high anatomical accuracy. The methods that showed the highest sensitivity showed the lowest specificity, and vice versa. Additionally, anatomical accuracy was highly dependent upon parameters of the tractography algorithm, with different optimal values for mapping different pathways. These results suggest that there is an inherent limitation in determining long-range anatomical projections based on voxel-averaged estimates of local fiber orientation obtained from DWI data that is unlikely to be overcome by improvements in data acquisition and analysis alone.
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, a high-resolution commercial earth observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy is better than 3.5 m CE90 without ground control, which is suitable for large-scale topographic mapping. This paper presents block adjustment for WorldView-3 based on the RPC model, achieving the accuracy required for 1:2000-scale topographic mapping with few control points. On the basis of the stereo orientation result, two image-matching algorithms were applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image-matching methods was compared with reference data acquired by an airborne laser scanner. The results show that the RPC adjustment model of WorldView-3 imagery with a small number of GCPs satisfies the requirements of Chinese surveying and mapping regulations for 1:2000-scale topographic maps, and that the point cloud obtained through WorldView-3 stereo image matching has high elevation accuracy: the RMS elevation error for bare ground is 0.45 m, while for buildings the accuracy almost reaches 1 m.
Accuracy metrics for judging time scale algorithms
NASA Technical Reports Server (NTRS)
Douglas, R. J.; Boulanger, J.-S.; Jacques, C.
1994-01-01
Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10(exp -15) for periods of 30-100 days.
The Impact of Feedback Frequency on Performance in a Novel Speech Motor Learning Task.
Lowe, Mara Steinberg; Buchwald, Adam
2017-06-22
This study investigated whether whole nonword accuracy, phoneme accuracy, and acoustic duration measures were influenced by the amount of feedback speakers without impairment received during a novel speech motor learning task. Thirty-two native English speakers completed a nonword production task across 3 time points: practice, short-term retention, and long-term retention. During practice, participants received knowledge of results feedback according to a randomly assigned schedule (100%, 50%, 20%, or 0%). Changes in nonword accuracy, phoneme accuracy, nonword duration, and initial-cluster duration were compared among feedback groups, sessions, and stimulus properties. All participants improved phoneme and whole nonword accuracy at short-term and long-term retention time points. Participants also refined productions of nonwords, as indicated by a decrease in nonword duration across sessions. The 50% group exhibited the largest reduction in duration between practice and long-term retention for nonwords with native and nonnative clusters. All speakers, regardless of feedback schedule, learned new speech motor behaviors quickly with a high degree of accuracy and refined their speech motor skills for perceptually accurate productions. Acoustic measurements may capture more subtle, subperceptual changes that may occur during speech motor learning. https://doi.org/10.23641/asha.5116324.
NASA Astrophysics Data System (ADS)
Mohebbi, Akbar
2018-02-01
In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space-fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For the homogeneous FGLE, we propose a method with fourth-order accuracy in time and spectral accuracy in space; for the nonhomogeneous one, we introduce another scheme based on the Crank-Nicolson approach with second-order accuracy in time. Because the Fourier spectral method is used for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order, and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with analytical solutions. The results show that the present methods are accurate and require low CPU time, and the numerical results are in good agreement with the theoretical ones.
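The ingredient that makes such schemes fully diagonal is the Fourier treatment of the fractional Laplacian: in frequency space the operator is just multiplication by |k|^α. A one-dimensional periodic sketch (the grid and function name are illustrative, not from the paper):

```python
import numpy as np

def fractional_laplacian_1d(u, alpha, length=2 * np.pi):
    """Spectral (-Laplacian)^(alpha/2) of a periodic 1-D array, the
    diagonal linear operator used by Fourier split-step schemes."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)   # wavenumbers
    return np.real(np.fft.ifft(np.abs(k) ** alpha * np.fft.fft(u)))
```

For α = 2 this reduces to the ordinary negative second derivative, e.g. it maps sin(2x) to 4 sin(2x), which gives a quick correctness check for an implementation.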
Willoughby, Karen A; McAndrews, Mary Pat; Rovet, Joanne F
2014-07-01
Autobiographical memory (AM) is a highly constructive cognitive process that often contains memory errors. No study has specifically examined AM accuracy in children with abnormal development of the hippocampus, a crucial brain region for AM retrieval. Thus, the present study investigated AM accuracy in 68 typically and atypically developing children using a staged autobiographical event, the Children's Autobiographical Interview, and structural magnetic resonance imaging. The atypically developing group consisted of 17 children (HYPO) exposed during gestation to insufficient maternal thyroid hormone (TH), a critical substrate for hippocampal development, and 25 children with congenital hypothyroidism (CH), who were compared to 26 controls. Groups differed significantly in the number of accurate episodic details recalled and proportion accuracy scores, with controls having more accurate recollections of the staged event than both TH-deficient groups. Total hippocampal volumes and anterior hippocampal volumes were positively correlated with proportion accuracy scores, but not total accurate episodic details, in HYPO and CH. In addition, greater severity of TH deficiency predicted lower proportion accuracy scores in both HYPO and CH. Overall, these results indicate that children with early TH deficiency have deficits in AM accuracy and that the anterior hippocampus may play a particularly important role in accurate AM retrieval. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
Classification of EEG Signals Based on Pattern Recognition Approach.
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features were extracted: multi-resolution decompositions into detail and approximation coefficients as well as relative wavelet energy. The extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). A high-density, 128-channel EEG dataset was used to validate the proposed method on two classes: (1) EEG signals recorded during complex cognitive tasks using the Raven's Advanced Progressive Matrices (RAPM) test; and (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. The SVM classifier yielded 99.11% accuracy for the approximation coefficients (A5) of the low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for the detail coefficients (D5), from the 3.90-7.81 Hz sub-band, were 98.57% and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable at 97.11-89.63% and 91.60-81.07% for the A5 and D5 coefficients, respectively. In addition, the proposed approach was applied to a public dataset for classification of two cognitive tasks and achieved comparable results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance with machine-learning classifiers than extant quantitative feature extraction. These results suggest that the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a high degree of accuracy.
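The relative wavelet energy features at the heart of the pipeline can be illustrated with a hand-rolled Haar DWT (the study used a five-level multi-resolution decomposition; the Haar filter and the padding rule here are simplifications chosen for this sketch):

```python
import numpy as np

def haar_dwt_energies(x, levels=5):
    """Relative wavelet energy per sub-band via a plain Haar DWT:
    one energy per detail level D1..D5 plus the final approximation A5."""
    a = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        if len(a) % 2:                        # pad to an even length
            a = np.append(a, a[-1])
        d = (a[0::2] - a[1::2]) / np.sqrt(2)  # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)  # approximation coefficients
        energies.append(np.sum(d**2))
    energies.append(np.sum(a**2))             # final approximation band
    e = np.array(energies)
    return e / e.sum()                        # relative wavelet energy
```

Because the Haar transform conserves energy (Parseval's relation), the per-band energies form a proper distribution that can be normalized, reduced with FDR/PCA, and fed to a classifier.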
Classification of EEG Signals Based on Pattern Recognition Approach
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a “pattern recognition” approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet based feature extraction such as, multi-resolution decompositions into detailed and approximate coefficients as well as relative wavelet energy were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). A high density EEG dataset validated the proposed method (128-channels) by identifying two classifications: (1) EEG signals recorded during complex cognitive tasks using Raven's Advance Progressive Metric (RAPM) test; (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as, K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. Outcomes yielded 99.11% accuracy via SVM classifier for coefficient approximations (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for detailed coefficients were 98.57 and 98.39% for SVM and KNN, respectively; and for detailed coefficients (D5) deriving from the sub-band range (3.90–7.81 Hz). Accuracy rates for MLP and NB classifiers were comparable at 97.11–89.63% and 91.60–81.07% for A5 and D5 coefficients, respectively. In addition, the proposed approach was also applied on public dataset for classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performances using machine learning classifiers compared to extant quantitative feature extraction. These results suggest the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a higher degree of accuracy. PMID:29209190
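The relative wavelet energy features described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a Haar mother wavelet (the abstract does not name one) and the five-level depth implied by the A5/D5 coefficients.

```python
import numpy as np

def haar_dwt_levels(signal, levels):
    """Decompose a 1-D signal with the Haar wavelet, returning
    detail coefficients D1..Dn and the final approximation An."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        if len(a) % 2:                  # pad to even length if needed
            a = np.append(a, a[-1])
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(detail)
        a = approx
    return details, a

def relative_wavelet_energy(signal, levels=5):
    """Energy of each sub-band divided by the total energy.
    The resulting feature vector sums to 1 by construction."""
    details, approx = haar_dwt_levels(signal, levels)
    energies = [np.sum(d ** 2) for d in details] + [np.sum(approx ** 2)]
    total = sum(energies)
    return [e / total for e in energies]
```

In the paper these per-band energies would then be z-normalized across trials before FDR/PCA optimization.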
Classification of urban features using airborne hyperspectral data
NASA Astrophysics Data System (ADS)
Ganesh Babu, Bharath
Accurate mapping and modeling of urban environments are critical for their efficient and successful management. Superior understanding of complex urban environments is made possible by using modern geospatial technologies. This research focuses on thematic classification of urban land use and land cover (LULC) using 248 bands of 2.0 meter resolution hyperspectral data acquired from an airborne imaging spectrometer (AISA+) on 24th July 2006 in and near Terre Haute, Indiana. Three distinct study areas including two commercial classes, two residential classes, and two urban parks/recreational classes were selected for classification and analysis. Four commonly used classification methods -- maximum likelihood (ML), extraction and classification of homogeneous objects (ECHO), spectral angle mapper (SAM), and iterative self-organizing data analysis (ISODATA) -- were applied to each data set. Accuracy assessment was conducted and overall accuracies were compared between the twenty-four resulting thematic maps. With the exception of SAM and ISODATA in a complex commercial area, all methods employed classified the designated urban features with more than 80% accuracy. The thematic classification from ECHO showed the best agreement with ground reference samples. The residential area with relatively homogeneous composition was classified consistently with highest accuracy by all four of the classification methods used. The average accuracy amongst the classifiers was 93.60% for this area. When individually observed, the complex recreational area (Deming Park) was classified with the highest accuracy by ECHO, with an accuracy of 96.80% and 96.10% Kappa. The average accuracy amongst all the classifiers was 92.07%. The commercial area with relatively high complexity was classified with the least accuracy by all classifiers. The lowest accuracy was achieved by SAM at 63.90% with 59.20% Kappa. This was also the lowest accuracy in the entire analysis. 
This study demonstrates the potential of the visible and near-infrared (VNIR) bands from AISA+ hyperspectral data for urban LULC classification. Based on their performance, the need for further research using ECHO and SAM is underscored, and the importance of incorporating imaging spectrometer data in high-resolution urban feature mapping is emphasized.
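The overall accuracy and Kappa statistics reported above come from an error (confusion) matrix of classified pixels against ground reference samples. A minimal sketch, using an invented 2-class matrix for illustration:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy in the thematic-map comparisons above.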
Accurate label-free 3-part leukocyte recognition with single cell lens-free imaging flow cytometry.
Li, Yuqian; Cornelis, Bruno; Dusa, Alexandra; Vanmeerbeeck, Geert; Vercruysse, Dries; Sohn, Erik; Blaszkiewicz, Kamil; Prodanov, Dimiter; Schelkens, Peter; Lagae, Liesbet
2018-05-01
Three-part white blood cell differentials, which are key to routine blood workups, are typically performed in centralized laboratories on conventional hematology analyzers operated by highly trained staff. With the trend toward miniaturized point-of-need blood analysis tools, intended to accelerate turnaround times and move routine blood testing away from centralized facilities, our group has developed a highly miniaturized holographic imaging system for generating lens-free images of white blood cells in suspension. Analysis and classification of its output data constitute the final crucial step in ensuring appropriate accuracy of the system. In this work, we use reference holographic images of single white blood cells in suspension to establish an accurate ground truth and increase classification accuracy. We also automate the entire workflow for analyzing the output and demonstrate a clear improvement in the accuracy of the 3-part classification. High-dimensional optical and morphological features are extracted from reconstructed digital holograms of single cells using the ground-truth images, and advanced machine-learning algorithms are investigated and implemented to obtain 99% classification accuracy. Representative features of the three white blood cell subtypes are selected and give comparable results, with a focus on rapid cell recognition and decreased computational cost. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Improved Short-Term Clock Prediction Method for Real-Time Positioning.
Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan
2017-06-06
The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for the communication delay or data gap. Unlike orbit correction, clock correction is difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in the sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) by using different length observations. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the accuracy of the traditional linear model, the accuracy of the static PPP using the new model of the 2-h prediction clock in N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy of 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time model, the accuracy of the kinematic PPP solution using 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. 
This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
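A minimal sketch of the prediction idea described above, assuming evenly sampled clock offsets: fit a linear trend, locate the dominant periodic terms in the residual spectrum via FFT, and extrapolate the trend plus periodics. Function and parameter names are illustrative, not the authors' implementation:

```python
import numpy as np

def predict_clock(t, clk, t_future, n_terms=2):
    """Extrapolate clock offsets: linear trend plus the n_terms
    strongest periodic components found in the trend residuals."""
    coef = np.polyfit(t, clk, 1)                 # linear model
    resid = clk - np.polyval(coef, t)
    spec = np.fft.rfft(resid)                    # residual spectrum
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    # strongest non-DC frequency bins
    idx = np.argsort(np.abs(spec[1:]))[::-1][:n_terms] + 1
    pred = np.polyval(coef, t_future)
    for i in idx:
        amp = 2 * np.abs(spec[i]) / len(t)
        phase = np.angle(spec[i])
        pred += amp * np.cos(2 * np.pi * freqs[i] * t_future + phase)
    return pred
```

In a sliding-window setup, the fit would be refreshed as each new real-time clock epoch arrives, so the extrapolation only ever has to bridge the communication delay or data gap.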
NASA Astrophysics Data System (ADS)
Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin
2014-12-01
Geometric error is the dominant error source in industrial robots, playing a more significant role than other error factors. This article proposes a compensation model for kinematic error. Because many methods can be used to test robot accuracy, a way of comparing them is needed. In this article, two methods for robot accuracy testing are compared: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) were used to test robot accuracy according to the relevant standard. Based on the compensation results, the better method, which improves robot accuracy noticeably, is identified.
Digital versus conventional implant impressions for edentulous patients: accuracy outcomes.
Papaspyridakos, Panos; Gallucci, German O; Chen, Chun-Jung; Hanssen, Stijn; Naert, Ignace; Vandenberghe, Bart
2016-04-01
To compare the accuracy of digital and conventional impression techniques for completely edentulous patients and to determine the effect of different variables on the accuracy outcomes. A stone cast of an edentulous mandible with five implants was fabricated to serve as master cast (control) for both implant- and abutment-level impressions. Digital impressions (n = 10) were taken with an intraoral optical scanner (TRIOS, 3shape, Denmark) after connecting polymer scan bodies. For the conventional polyether impressions of the master cast, a splinted and a non-splinted technique were used for implant-level and abutment-level impressions (4 cast groups, n = 10 each). Master casts and conventional impression casts were digitized with an extraoral high-resolution scanner (IScan D103i, Imetric, Courgenay, Switzerland) to obtain digital volumes. Standard tessellation language (STL) datasets from the five groups of digital and conventional impressions were superimposed with the STL dataset from the master cast to assess the 3D (global) deviations. To compare the master cast with digital and conventional impressions at the implant level, analysis of variance (ANOVA) and Scheffe's post hoc test was used, while Wilcoxon's rank-sum test was used for testing the difference between abutment-level conventional impressions. Significant 3D deviations (P < 0.001) were found between Group II (non-splinted, implant level) and control. No significant differences were found between Groups I (splinted, implant level), III (digital, implant level), IV (splinted, abutment level), and V (non-splinted, abutment level) compared with the control. Implant angulation up to 15° did not affect the 3D accuracy of implant impressions (P > 0.001). Digital implant impressions are as accurate as conventional implant impressions. 
The splinted, implant-level impression technique is more accurate than the non-splinted one for completely edentulous patients, whereas there was no difference in the accuracy at the abutment level. The implant angulation up to 15° did not affect the accuracy of implant impressions. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Thandassery, Ragesh B; Al Kaabi, Saad; Soofi, Madiha E; Mohiuddin, Syed A; John, Anil K; Al Mohannadi, Muneera; Al Ejji, Khalid; Yakoob, Rafie; Derbala, Moutaz F; Wani, Hamidullah; Sharma, Manik; Al Dweik, Nazeeh; Butt, Mohammed T; Kamel, Yasser M; Sultan, Khaleel; Pasic, Fuad; Singh, Rajvir
2016-07-01
Many indirect noninvasive scores to predict liver fibrosis are calculated from routine blood investigations, but only limited studies have compared their efficacy head to head. We aimed to compare these scores with liver biopsy fibrosis stages in patients with chronic hepatitis C. From the blood investigations of 1602 patients with chronic hepatitis C who underwent a liver biopsy before initiation of antiviral treatment, 19 simple noninvasive scores were calculated. The area under the receiver operating characteristic curve and the diagnostic accuracy of each score were calculated (with reference to the Scheuer staging) and compared. The mean age of the patients was 41.8±9.6 years (1365 men). The most common genotype was genotype 4 (65.6%). Significant fibrosis, advanced fibrosis, and cirrhosis were seen in 65.1%, 25.6%, and 6.6% of patients, respectively. All the scores except the aspartate transaminase (AST) to alanine transaminase ratio, Pohl score, mean platelet volume, fibro-alpha, and red cell distribution width to platelet count ratio index showed high predictive accuracy for the stages of fibrosis. King's score (cutoff, 17.5) showed the highest predictive accuracy for significant and advanced fibrosis. King's score, the Göteborg University cirrhosis index, the APRI (AST/platelet count ratio index), and Fibrosis-4 (FIB-4) had the highest predictive accuracy for cirrhosis, with the APRI (cutoff, 2) and FIB-4 (cutoff, 3.25) showing the highest diagnostic accuracy. We derived the study score 8.5 - 0.2(albumin, g/dL) + 0.01(AST, IU/L) - 0.02(platelet count, 10⁹/L), which at a cutoff of >4.7 had a predictive accuracy of 0.868 (95% confidence interval, 0.833-0.904) for cirrhosis. King's score for significant and advanced fibrosis and the APRI or FIB-4 score for cirrhosis could be the best simple indirect noninvasive scores.
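The derived study score can be encoded directly from the published formula. A minimal sketch; the variable names are illustrative, the platelet-count units are taken as reported, and the example values in the usage below are invented:

```python
def study_score(albumin_g_dl, ast_iu_l, platelet_count):
    """Indirect fibrosis score derived in the study:
    8.5 - 0.2*albumin + 0.01*AST - 0.02*platelet count."""
    return 8.5 - 0.2 * albumin_g_dl + 0.01 * ast_iu_l - 0.02 * platelet_count

def predicts_cirrhosis(score, cutoff=4.7):
    """Apply the study's cirrhosis cutoff of >4.7."""
    return score > cutoff
```

For example, a patient with low albumin, elevated AST, and a low platelet count scores well above the cutoff, while typical healthy-range values fall below it.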
CSE database: extended annotations and new recommendations for ECG software testing.
Smíšek, Radovan; Maršánová, Lucie; Němcová, Andrea; Vítek, Martin; Kozumplík, Jiří; Nováková, Marie
2017-08-01
Nowadays, cardiovascular diseases represent the most common cause of death in Western countries. Among various examination techniques, electrocardiography (ECG) is still a highly valuable tool used for the diagnosis of many cardiovascular disorders. In order to diagnose a person based on ECG, cardiologists can use automatic diagnostic algorithms. Research in this area is still necessary. In order to compare various algorithms correctly, it is necessary to test them on standard annotated databases, such as the Common Standards for Quantitative Electrocardiography (CSE) database. According to Scopus, the CSE database is the second most cited standard database. There were two main objectives in this work. First, new diagnoses were added to the CSE database, which extended its original annotations. Second, new recommendations for diagnostic software quality estimation were established. The ECG recordings were diagnosed by five new cardiologists independently, and in total, 59 different diagnoses were found. Such a large number of diagnoses is unique, even in terms of standard databases. Based on the cardiologists' diagnoses, a four-round consensus (4R consensus) was established. The 4R consensus represents the correct final diagnosis, which should ideally be the output of any tested classification software. The accuracy of the cardiologists' diagnoses compared with the 4R consensus was the basis for the establishment of accuracy recommendations. The accuracy was determined in terms of sensitivity = 79.20-86.81%, positive predictive value = 79.10-87.11%, and the Jaccard coefficient = 72.21-81.14%. Within these ranges, the accuracy of the software is comparable with the accuracy of cardiologists. This quantification of classification accuracy is unique. Diagnostic software developers can objectively evaluate the success of their algorithm and promote its further development. 
The annotations and recommendations proposed in this work will allow for faster development and testing of classification software. As a result, this might facilitate cardiologists' work and lead to faster diagnoses and earlier treatment.
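The Jaccard coefficient used above to compare a software's diagnosis set with the 4R consensus has a standard set form. A minimal sketch; the diagnosis labels in the usage are invented, and the paper's exact per-record averaging may differ:

```python
def jaccard(pred, truth):
    """Jaccard coefficient between a predicted diagnosis set and
    the consensus diagnosis set: |A ∩ B| / |A ∪ B|."""
    a, b = set(pred), set(truth)
    if not a and not b:        # two empty sets agree perfectly
        return 1.0
    return len(a & b) / len(a | b)
```

Unlike plain sensitivity, this penalizes both missed diagnoses and spurious extra diagnoses in a single number.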
Assessment of the cPAS-based BGISEQ-500 platform for metagenomic sequencing.
Fang, Chao; Zhong, Huanzi; Lin, Yuxiang; Chen, Bing; Han, Mo; Ren, Huahui; Lu, Haorong; Luber, Jacob M; Xia, Min; Li, Wangsheng; Stein, Shayna; Xu, Xun; Zhang, Wenwei; Drmanac, Radoje; Wang, Jian; Yang, Huanming; Hammarström, Lennart; Kostic, Aleksandar D; Kristiansen, Karsten; Li, Junhua
2018-03-01
More extensive use of metagenomic shotgun sequencing in microbiome research relies on the development of high-throughput, cost-effective sequencing. Here we present a comprehensive evaluation of the performance of the new high-throughput sequencing platform BGISEQ-500 for metagenomic shotgun sequencing and compare its performance with that of 2 Illumina platforms. Using fecal samples from 20 healthy individuals, we evaluated the intra-platform reproducibility for metagenomic sequencing on the BGISEQ-500 platform in a setup comprising 8 library replicates and 8 sequencing replicates. Cross-platform consistency was evaluated by comparing 20 pairwise replicates on the BGISEQ-500 platform vs the Illumina HiSeq 2000 platform and the Illumina HiSeq 4000 platform. In addition, we compared the performance of the 2 Illumina platforms against each other. By a newly developed overall accuracy quality control method, an average of 82.45 million high-quality reads (96.06% of raw reads) per sample, with 90.56% of bases scoring Q30 and above, was obtained using the BGISEQ-500 platform. Quantitative analyses revealed extremely high reproducibility between BGISEQ-500 intra-platform replicates. Cross-platform replicates differed slightly more than intra-platform replicates, yet a high consistency was observed. Only a low percentage (2.02%-3.25%) of genes exhibited significant differences in relative abundance comparing the BGISEQ-500 and HiSeq platforms, with a bias toward genes with higher GC content being enriched on the HiSeq platforms. Our study provides the first set of performance metrics for human gut metagenomic sequencing data using BGISEQ-500. The high accuracy and technical reproducibility confirm the applicability of the new platform for metagenomic studies, though caution is still warranted when combining metagenomic data from different platforms.
Saini, V.; Riekerink, R. G. M. Olde; McClure, J. T.; Barkema, H. W.
2011-01-01
Determining the accuracy and precision of a measuring instrument is pertinent in antimicrobial susceptibility testing. This study was conducted to predict the diagnostic accuracy of the Sensititre MIC mastitis panel (Sensititre) and agar disk diffusion (ADD) method with reference to the manual broth microdilution test method for antimicrobial resistance profiling of Escherichia coli (n = 156), Staphylococcus aureus (n = 154), streptococcal (n = 116), and enterococcal (n = 31) bovine clinical mastitis isolates. The activities of ampicillin, ceftiofur, cephalothin, erythromycin, oxacillin, penicillin, the penicillin-novobiocin combination, pirlimycin, and tetracycline were tested against the isolates. Diagnostic accuracy was determined by estimating the area under the receiver operating characteristic curve; intertest essential and categorical agreements were determined as well. Sensititre and the ADD method demonstrated moderate to highly accurate (71 to 99%) and moderate to perfect (71 to 100%) predictive accuracies for 74 and 76% of the isolate-antimicrobial MIC combinations, respectively. However, the diagnostic accuracy was low for S. aureus-ceftiofur/oxacillin combinations and other streptococcus-ampicillin combinations by either testing method. Essential agreement between Sensititre automatic MIC readings and MIC readings obtained by the broth microdilution test method was 87%. Essential agreement between Sensititre automatic and manual MIC reading methods was 97%. Furthermore, the ADD test method and Sensititre MIC method exhibited 92 and 91% categorical agreement (sensitive, intermediate, resistant) of results, respectively, compared with the reference method. However, both methods demonstrated lower agreement for E. coli-ampicillin/cephalothin combinations than for Gram-positive isolates. 
In conclusion, the Sensititre and ADD methods had moderate to high diagnostic accuracy and very good essential and categorical agreement for most udder pathogen-antimicrobial combinations and can be readily employed in veterinary diagnostic laboratories. PMID:21270215
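Categorical agreement as reported above is a simple proportion of isolates assigned the same sensitive/intermediate/resistant category by the test and reference methods. A minimal sketch, with the S/I/R calls in the usage invented for illustration:

```python
def categorical_agreement(test_calls, reference_calls):
    """Percent of isolates given the same S/I/R category by the
    test method and the reference broth microdilution method."""
    if len(test_calls) != len(reference_calls):
        raise ValueError("call lists must be the same length")
    matches = sum(t == r for t, r in zip(test_calls, reference_calls))
    return 100.0 * matches / len(test_calls)
```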
Gebker, Rolf; Mirelis, Jesus G; Jahnke, Cosima; Hucko, Thomas; Manka, Robert; Hamdan, Ashraf; Schnackenburg, Bernhard; Fleck, Eckart; Paetsch, Ingo
2010-09-01
The purpose of this study was to determine the influence of left ventricular (LV) hypertrophy and geometry on the diagnostic accuracy of wall motion and additional perfusion imaging during high-dose dobutamine/atropine stress magnetic resonance for the detection of coronary artery disease. Combined dobutamine stress magnetic resonance (DSMR)-wall motion and DSMR-perfusion imaging was performed in a single session in 187 patients scheduled for invasive coronary angiography. Patients were classified into 4 categories on the basis of LV mass (normal, ≤81 g/m² in men and ≤62 g/m² in women) and relative wall thickness (RWT) (normal, <0.45) as follows: normal geometry (normal mass, normal RWT), concentric remodeling (normal mass, increased RWT), concentric hypertrophy (increased mass, increased RWT), and eccentric hypertrophy (increased mass, normal RWT). Wall motion and perfusion images were interpreted sequentially, with observers blinded to other data. Significant coronary artery disease was defined as ≥70% stenosis. In patients with increased LV concentricity (defined by an RWT ≥0.45), sensitivity and accuracy of DSMR-wall motion were significantly reduced (63% and 73%, respectively; P < 0.05) compared with patients without increased LV concentricity (90% and 88%, respectively; P < 0.05). Although accuracy of DSMR-perfusion was higher than that of DSMR-wall motion in patients with concentric hypertrophy (82% versus 71%; P < 0.05), accuracy of DSMR-wall motion was superior to DSMR-perfusion (90% versus 85%; P < 0.05) in patients with eccentric hypertrophy. The accuracy of DSMR-wall motion is influenced by LV geometry. In patients with concentric remodeling and concentric hypertrophy, additional first-pass perfusion imaging during high-dose dobutamine stress improves the diagnostic accuracy for the detection of coronary artery disease.
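The four geometry categories follow directly from the two cutoffs quoted in the abstract. A minimal sketch of that classification rule; the function name and the 'm'/'f' sex encoding are illustrative:

```python
def lv_geometry(mass_index, rwt, sex):
    """Classify LV geometry from indexed LV mass (g/m^2) and
    relative wall thickness, using the study's cutoffs
    (increased mass: >81 g/m^2 men, >62 g/m^2 women; increased RWT: >=0.45)."""
    hypertrophy = mass_index > (81 if sex == 'm' else 62)
    concentric = rwt >= 0.45
    if hypertrophy:
        return 'concentric hypertrophy' if concentric else 'eccentric hypertrophy'
    return 'concentric remodeling' if concentric else 'normal geometry'
```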
Windschitl, Paul D; Rose, Jason P; Stalkfleet, Michael T; Smith, Andrew R
2008-08-01
People are often egocentric when judging their likelihood of success in competitions, leading to overoptimism about winning when circumstances are generally easy and to overpessimism when the circumstances are difficult. Yet, egocentrism might be grounded in a rational tendency to favor highly reliable information (about the self) more so than less reliable information (about others). A general theory of probability called extended support theory was used to conceptualize and assess the role of egocentrism and its consequences for the accuracy of people's optimism in 3 competitions (Studies 1-3, respectively). Also, instructions were manipulated to test whether people who were urged to avoid egocentrism would show improved or worsened accuracy in their likelihood judgments. Egocentrism was found to have a potentially helpful effect on one form of accuracy, but people generally showed too much egocentrism. Debias instructions improved one form of accuracy but had no impact on another. The advantages of using the EST framework for studying optimism and other types of judgments (e.g., comparative ability judgments) are discussed. (c) 2008 APA, all rights reserved
NASA Astrophysics Data System (ADS)
Blake, Samantha L.; Walker, S. Hunter; Muddiman, David C.; Hinks, David; Beck, Keith R.
2011-12-01
Color Index Disperse Yellow 42 (DY42), a high-volume disperse dye for polyester, was used to compare the capabilities of the LTQ-Orbitrap XL and the LTQ-FT-ICR with respect to mass measurement accuracy (MMA), spectral accuracy, and sulfur counting. The results of this research will be used in the construction of a dye database for forensic purposes; the additional spectral information will increase the confidence in the identification of unknown dyes found in fibers at crime scenes. Initial LTQ-Orbitrap XL data showed MMAs greater than 3 ppm and poor spectral accuracy. Modification of several Orbitrap installation parameters (e.g., deflector voltage) resulted in a significant improvement of the data. The LTQ-FT-ICR and LTQ-Orbitrap XL (after installation parameters were modified) exhibited MMA ≤ 3 ppm, good spectral accuracy (χ² values for the isotopic distribution ≤ 2), and were correctly able to ascertain the number of sulfur atoms in the compound at all resolving powers investigated for AGC targets of 5.00 × 10⁵ and 1.00 × 10⁶.
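Mass measurement accuracy as used above is the relative deviation of the measured m/z from the theoretical m/z, in parts-per-million. A one-line sketch; the m/z values in the usage are invented for illustration:

```python
def mma_ppm(measured_mz, theoretical_mz):
    """Mass measurement accuracy in parts-per-million (ppm)."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6
```

At a threshold of 3 ppm, a measured m/z of 500.0005 against a theoretical 500.0000 (a 1 ppm deviation) would pass comfortably.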
Apfelbaum, Henry; Pelah, Adar; Peli, Eli
2007-01-01
Virtual reality locomotion simulators are a promising tool for evaluating the effectiveness of vision aids to mobility for people with low vision. This study examined two factors to gain insight into the verisimilitude requirements of the test environment: the effects of treadmill walking and the suitability of using controls as surrogate patients. Ten “tunnel vision” patients with retinitis pigmentosa (RP) were tasked with identifying which side of a clearly visible obstacle their heading through the virtual environment would lead them, and were scored both on accuracy and on their distance from the obstacle when they responded. They were tested both while walking on a treadmill and while standing, as they viewed a scene representing progress through a shopping mall. Control subjects, each wearing a head-mounted field restriction to simulate the vision of a paired patient, were also tested. At wide angles of approach, controls and patients performed with a comparably high degree of accuracy, and made their choices at comparable distances from the obstacle. At narrow angles of approach, patients’ accuracy increased when walking, while controls’ accuracy decreased. When walking, both patients and controls delayed their decisions until closer to the obstacle. We conclude that a head-mounted field restriction is not sufficient for simulating tunnel vision, but that the improved performance observed for walking compared to standing suggests that a walking interface (such as a treadmill) may be essential for eliciting natural perceptually-guided behavior in virtual reality locomotion simulators. PMID:18167511
Cheng, Y; Cai, Y; Wang, Y
2014-01-01
The aim of this study was to assess the accuracy of ultrasonography in the diagnosis of chronic lateral ankle ligament injury. A total of 120 ankles in 120 patients with a clinical suspicion of chronic ankle ligament injury were examined by ultrasonography by using a 5- to 17-MHz linear array transducer before surgery. The results of ultrasonography were compared with the operative findings. There were 18 sprains and 24 partial and 52 complete tears of the anterior talofibular ligament (ATFL); 26 sprains, 27 partial and 12 complete tears of the calcaneofibular ligament (CFL); and 1 complete tear of the posterior talofibular ligament (PTFL) at arthroscopy and operation. Compared with operative findings, the sensitivity, specificity and accuracy of ultrasonography were 98.9%, 96.2% and 84.2%, respectively, for injury of the ATFL and 93.8%, 90.9% and 83.3%, respectively, for injury of the CFL. The PTFL tear was identified by ultrasonography. The accuracy of identification between acute-on-chronic and subacute-chronic patients did not differ. The accuracies of diagnosing three grades of ATFL injuries were almost the same as those of diagnosing CFL injuries. Ultrasonography provides useful information for the evaluation of patients presenting with chronic pain after ankle sprain. Intraoperative findings are the reference standard. We demonstrated that ultrasonography was highly sensitive and specific in detecting chronic lateral ligament injury of the ankle joint. PMID:24352708
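The sensitivity, specificity, and accuracy figures quoted above derive from confusion-matrix counts against the operative reference. A minimal sketch with invented counts:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and overall accuracy from
    confusion-matrix counts against a reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, acc
```

Note that overall accuracy can sit below both sensitivity and specificity reported per injury grade when the metrics are computed over different groupings, as in the multi-grade comparison above.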
ERIC Educational Resources Information Center
Cotreau Berube, Elyse A.
2011-01-01
The purpose of this quantitative research study was to investigate the use of rote learning in basic skills of mathematics and spelling of 12 high school students, from a career and technical high school, in an effort to learn if the pedagogy of rote fits in the frameworks of today's education. The study compared the accuracy of…
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.
2010-01-01
Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and complexity are studied for four nominally second-order accurate schemes: a node-centered scheme and three cell-centered schemes, namely a node-averaging scheme and two schemes with nearest-neighbor and adaptive compact stencils for least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds-number turbulent flow simulations. Tests from the first class indicate that the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The tests of the second class are more discriminating. The node-centered scheme is always second order, with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes may degenerate on mixed grids, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to that of the node-centered scheme.
For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping based on a distance function commonly available in practical schemes or modifying the scheme stencil to reflect the direction of strong coupling. The major conclusion is that accuracies of the node centered and the best cell-centered schemes are comparable at equivalent number of degrees of freedom.
Tiss, Ali; Timms, John F; Smith, Celia; Devetyarov, Dmitry; Gentry-Maharaj, Aleksandra; Camuzeaux, Stephane; Burford, Brian; Nouretdinov, Ilia; Ford, Jeremy; Luo, Zhiyuan; Jacobs, Ian; Menon, Usha; Gammerman, Alex; Cramer, Rainer
2010-12-01
Our objective was to test the performance of CA125 in classifying serum samples from a cohort of malignant and benign ovarian neoplasms and age-matched healthy controls and to assess whether combining information from matrix-assisted laser desorption/ionization (MALDI) time-of-flight profiling could improve diagnostic performance. Serum samples from women with ovarian neoplasms and healthy volunteers were subjected to CA125 assay and MALDI time-of-flight mass spectrometry (MS) profiling. Models were built from training data sets using discriminatory MALDI MS peaks in combination with CA125 values, and their ability to classify blinded test samples was tested. These were compared with models using CA125 threshold levels from 193 patients with ovarian cancer, 290 with benign neoplasm, and 2236 postmenopausal healthy controls. Using a CA125 cutoff of 30 U/mL, an overall sensitivity of 94.8% (96.6% specificity) was obtained when comparing malignancies versus healthy postmenopausal controls, whereas a cutoff of 65 U/mL provided a sensitivity of 83.9% (99.6% specificity). High classification accuracies were obtained for early-stage cancers (93.5% sensitivity). Reasons for the high accuracies include recruitment bias, restriction to postmenopausal women, and inclusion of only primary invasive epithelial ovarian cancer cases. The combination of MS profiling information with CA125 did not significantly improve the specificity/accuracy compared with classifications on the basis of CA125 alone. We report unexpectedly good performance of serum CA125 using threshold classification in discriminating healthy controls and women with benign masses from those with invasive ovarian cancer. This highlights the dependence of diagnostic tests on the characteristics of the study population and the crucial need for authors to provide sufficient relevant details to allow comparison. Our study also shows that MS profiling information adds little to diagnostic accuracy.
This finding is in contrast with other reports and shows the limitations of serum MS profiling for biomarker discovery and as a diagnostic tool.
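The cutoff-based classification described above reduces to a simple threshold rule on the marker value. The sketch below, using made-up CA125-style values rather than the study's data, shows how sensitivity and specificity trade off as the cutoff moves:

```python
# Illustrative sketch: sensitivity/specificity of a threshold rule on a
# serum marker. All values below are hypothetical, not the study's data.

def threshold_metrics(values, labels, cutoff):
    """Classify value >= cutoff as positive; labels: 1 = cancer, 0 = healthy."""
    tp = sum(1 for v, y in zip(values, labels) if v >= cutoff and y == 1)
    fn = sum(1 for v, y in zip(values, labels) if v < cutoff and y == 1)
    tn = sum(1 for v, y in zip(values, labels) if v < cutoff and y == 0)
    fp = sum(1 for v, y in zip(values, labels) if v >= cutoff and y == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

ca125 = [12, 25, 40, 70, 150, 9, 18, 33, 80, 500]   # hypothetical U/mL
label = [0,  0,  0,  1,  1,   0, 0,  1,  1,  1]
# A lower cutoff trades specificity for sensitivity, mirroring the
# 30 U/mL vs 65 U/mL comparison in the abstract.
print(threshold_metrics(ca125, label, 30))  # → (1.0, 0.8)
print(threshold_metrics(ca125, label, 65))  # → (0.8, 1.0)
```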
Katzka, David A; Geno, Debra M; Ravi, Anupama; Smyrk, Thomas C; Lao-Sirieix, Pierre; Miremadi, Ahmed; Debiram, Irene; O'Donovan, Maria; Kita, Hirohito; Kephart, Gail M; Kryzer, Lori A; Camilleri, Michael; Alexander, Jeffrey A; Fitzgerald, Rebecca C
2015-01-01
Management of eosinophilic esophagitis (EoE) requires repeated endoscopic collection of mucosal samples to assess disease activity and response to therapy. An easier and less expensive means of monitoring of EoE is required. We compared the accuracy, safety, and tolerability of sample collection via Cytosponge (an ingestible gelatin capsule comprising compressed mesh attached to a string) with those of endoscopy for assessment of EoE. Esophageal tissues were collected from 20 patients with EoE (all with dysphagia, 15 with stricture, 13 with active EoE) via Cytosponge and then by endoscopy. Number of eosinophils/high-power field and levels of eosinophil-derived neurotoxin were determined; hematoxylin-eosin staining was performed. We compared the adequacy, diagnostic accuracy, safety, and patient preference for sample collection via Cytosponge vs endoscopy procedures. All 20 samples collected by Cytosponge were adequate for analysis. By using a cutoff value of 15 eosinophils/high power field, analysis of samples collected by Cytosponge identified 11 of the 13 individuals with active EoE (83%); additional features such as abscesses were also identified. Numbers of eosinophils in samples collected by Cytosponge correlated with those in samples collected by endoscopy (r = 0.50, P = .025). Analysis of tissues collected by Cytosponge identified 4 of the 7 patients without active EoE (57% specificity), as well as 3 cases of active EoE not identified by analysis of endoscopy samples. Including information on level of eosinophil-derived neurotoxin did not increase the accuracy of diagnosis. No complications occurred during the Cytosponge procedure, which was preferred by all patients, compared with endoscopy. In a feasibility study, the Cytosponge is a safe and well-tolerated method for collecting near mucosal specimens. Analysis of numbers of eosinophils/high-power field identified patients with active EoE with 83% sensitivity. 
Larger studies are needed to establish the efficacy and safety of this method of esophageal tissue collection. ClinicalTrials.gov number: NCT01585103. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
Luo, Guangchun; Qin, Ke; Wang, Nan; Niu, Weina
2018-01-01
Sensor drift is a common issue in E-Nose systems, and various drift compensation methods have achieved fruitful results in recent years. Although the accuracy of recognizing diverse gases under drift conditions has been largely enhanced, few of these methods consider online processing scenarios. In this paper, we focus on building an online drift compensation model by transforming two domain-adaptation-based methods into their online learning versions, which allows the recognition models to adapt to changes in sensor responses in a time-efficient manner without losing high accuracy. Experimental results using three different settings confirm that the proposed methods save substantial processing time compared with their offline versions, and outperform other drift compensation methods in recognition accuracy. PMID:29494543
Delavarian, Mona; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Dibajnia, Parvin
2011-07-12
Automatic classification of different behavioral disorders with many similarities (e.g. in symptoms) will help psychiatrists to concentrate on the correct disorder and its treatment as soon as possible, to avoid wasting time on diagnosis, and to increase the accuracy of diagnosis. In this study, we tried to differentiate and classify (diagnose), with high accuracy, 306 children with many similar symptoms and different behavioral disorders such as ADHD, depression, anxiety, comorbid depression and anxiety, and conduct disorder. Classification was based on the symptoms and their severity. After examining 16 different classifiers available in "Prtools", we propose the nearest mean classifier as the most accurate, achieving 96.92% accuracy. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
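A nearest mean (nearest-centroid) classifier of the kind the abstract selects can be sketched in a few lines; the toy data below is synthetic, not the 306-child clinical data set:

```python
# Minimal nearest-mean classifier: assign each sample to the class whose
# training mean (centroid) is closest in Euclidean distance.
import numpy as np

class NearestMean:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Pairwise distances: samples x class means.
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Synthetic two-class data (hypothetical symptom-severity features).
X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
clf = NearestMean().fit(X, y)
print(clf.predict(np.array([[0.1, 0.0], [1.0, 1.0]])))  # → [0 1]
```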
AVHRR composite period selection for land cover classification
Maxwell, S.K.; Hoffer, R.M.; Chapman, P.L.
2002-01-01
Multitemporal satellite image datasets provide valuable information on the phenological characteristics of vegetation, thereby significantly increasing the accuracy of cover type classifications compared to single-date classifications. However, the processing of these datasets can become very complex when dealing with multitemporal data combined with multispectral data. Advanced Very High Resolution Radiometer (AVHRR) biweekly composite data are commonly used to classify land cover over large regions. Selecting a subset of these biweekly composite periods may be required to reduce the complexity and cost of land cover mapping. The objective of our research was to evaluate the effect of reducing the number of composite periods and altering the spacing of those composite periods on classification accuracy. Because inter-annual variability can have a major impact on classification results, 5 years of AVHRR data were evaluated. AVHRR biweekly composite images for spectral channels 1-4 (visible, near-infrared, and two thermal bands) covering the entire growing season were used to classify 14 cover types over the entire state of Colorado for each of five different years. A supervised classification method was applied to maintain consistent procedures for each case tested. Results indicate that the number of composite periods can be halved (reduced from 14 composite dates to seven) without significantly reducing overall classification accuracy (80.4% Kappa accuracy for the 14-composite dataset as compared to 80.0% for a seven-composite dataset). At least seven composite periods were required to ensure the classification accuracy was not affected by inter-annual variability due to climate fluctuations.
Concentrating more composites near the beginning and end of the growing season, as compared to using evenly spaced time periods, consistently produced slightly higher classification values over the 5 years tested (average Kappa of 80.3% for the heavy early/late case as compared to 79.0% for the alternate dataset case).
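The Kappa accuracies quoted above correct raw map-versus-reference agreement for chance. A minimal sketch of Cohen's kappa on a hypothetical two-class confusion matrix (not the Colorado results):

```python
# Cohen's kappa: agreement beyond what class frequencies predict by chance.
import numpy as np

def cohens_kappa(confusion):
    """Kappa for a square confusion matrix (rows: reference, cols: map)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    observed = np.trace(confusion) / n                      # raw agreement
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (observed - expected) / (1.0 - expected)

# Hypothetical matrix: 90% raw agreement, balanced classes.
cm = [[45, 5],
      [5, 45]]
print(round(cohens_kappa(cm), 2))  # → 0.8
```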
Jeong, Seok Hoo; Yoon, Hyun Hwa; Kim, Eui Joo; Kim, Yoon Jae; Kim, Yeon Suk; Cho, Jae Hee
2017-01-01
Endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) is an accurate diagnostic method for pancreatic masses, and its accuracy is affected by various FNA methods and EUS equipment. Therefore, we aimed to elucidate the instrumental and methodologic factors that determine the diagnostic yield of EUS-FNA for pancreatic solid masses without an on-site cytopathology evaluation. We retrospectively reviewed the medical records of 260 patients (265 pancreatic solid masses) who underwent EUS-FNA. We compared a historical group examined with conventional EUS equipment against a group examined with high-resolution imaging devices, and finally analyzed various factors affecting EUS-FNA accuracy. In total, 265 pancreatic solid masses from 260 patients were included in this study. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of EUS-FNA for pancreatic solid masses without on-site cytopathology evaluation were 83.4%, 81.8%, 100.0%, 100.0%, and 34.3%, respectively. Compared with the conventional imaging group, the high-resolution imaging group showed increased accuracy, sensitivity, and specificity of EUS-FNA (71.3% vs 92.7%, 68.9% vs 91.9%, and 100% vs 100%, respectively). On multivariate analysis with various instrumental and methodologic factors, high-resolution imaging (P = 0.040, odds ratio = 3.28) and 3 or more needle passes (P = 0.039, odds ratio = 2.41) were important factors affecting the diagnostic yield for pancreatic solid masses. High-resolution imaging and 3 or more passes were the most significant factors influencing the diagnostic yield of EUS-FNA in patients with pancreatic solid masses without an on-site cytopathologist. PMID:28079803
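The five figures reported here (accuracy, sensitivity, specificity, PPV, NPV) all derive from the four confusion-matrix counts. A minimal sketch with hypothetical counts, not values reconstructed from the study:

```python
# Standard diagnostic-test metrics from confusion-matrix counts.

def diagnostic_metrics(tp, fp, fn, tn):
    """tp/fp/fn/tn: true/false positives and negatives vs the reference."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Hypothetical counts: no false positives but several false negatives,
# giving perfect specificity/PPV with a low NPV, a pattern similar in
# shape to the one the abstract reports.
print(diagnostic_metrics(tp=80, fp=0, fn=10, tn=10))
```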
Analysis of Ultra High Resolution Sea Surface Temperature Level 4 Datasets
NASA Technical Reports Server (NTRS)
Wagner, Grant
2011-01-01
Sea surface temperature (SST) studies are often focused on improving accuracy, or understanding and quantifying uncertainties in the measurement, as SST is a leading indicator of climate change and represents the longest time series of any ocean variable observed from space. Over the past several decades SST has been studied with the use of satellite data. This allows a larger area to be studied with much more frequent measurements being taken than direct measurements collected aboard ships or buoys. The Group for High Resolution Sea Surface Temperature (GHRSST) is an international project that distributes satellite-derived sea surface temperature (SST) data from multiple platforms and sensors. The goal of the project is to distribute these SSTs for operational uses such as ocean model assimilation and decision support applications, as well as to support fundamental SST research and climate studies. Examples of near-real-time applications include hurricane and fisheries studies and numerical weather forecasting. The JPL group has produced a new 1 km daily global Level 4 SST product, the Multiscale Ultrahigh Resolution (MUR), that blends SST data from 3 distinct NASA radiometers: the Moderate Resolution Imaging Spectroradiometer (MODIS), the Advanced Very High Resolution Radiometer (AVHRR), and the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E). This new product requires further validation and accuracy assessment, especially in coastal regions. We examined the accuracy of the new MUR SST product by comparing the high resolution version and a lower resolution version that has been smoothed to 19 km (but still gridded to 1 km). Both versions were compared to the same data set of in situ buoy temperature measurements with a focus on study regions of the oceans surrounding North and Central America as well as two smaller regions around the Gulf Stream and California coast.
Ocean fronts exhibit high temperature gradients (Roden, 1976), and thus satellite data of SST can be used in the detection of these fronts. In this case, accuracy is less of a concern because the primary focus is on the spatial derivative of SST. We calculated the gradients for both versions of the MUR data set and did statistical comparisons focusing on the same regions.
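The front-detection comparison rests on the spatial derivative of the gridded SST field. A sketch of a gradient-magnitude computation on a synthetic front (not MUR data):

```python
# Gradient magnitude of a gridded SST field via finite differences.
import numpy as np

def gradient_magnitude(sst, spacing_km=1.0):
    """|∇SST| per km for a 2-D temperature grid with uniform spacing."""
    dy, dx = np.gradient(sst, spacing_km)   # axis 0 first, then axis 1
    return np.hypot(dx, dy)

# Synthetic front: temperature rises smoothly across the middle rows.
y = np.linspace(-5, 5, 101)                          # 0.1 units per row
sst = 15.0 + 5.0 * np.tanh(y)[:, None] * np.ones((101, 51))
grad = gradient_magnitude(sst, spacing_km=1.0)
print(grad.max().round(2))   # steepest gradient sits on the front
```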
Guiraldo, Ricardo D; Berger, Sandrine B; Siqueira, Ronaldo Mt; Grandi, Victor H; Lopes, Murilo B; Gonini-Júnior, Alcides; Caixeta, Rodrigo V; de Carvalho, Rodrigo V; Sinhoreti, Mário Ac
2017-04-01
This study compared the surface detail reproduction and dimensional accuracy of molds after disinfection using 2% sodium hypochlorite, 2% chlorhexidine digluconate or 0.2% peracetic acid to those of molds that were not disinfected, for four elastomeric impression materials: polysulfide (Light Bodied Permlastic), polyether (Impregum Soft), polydimethylsiloxane (Oranwash L) and polyvinylsiloxane (Aquasil Ultra LV). The molds were prepared on a matrix by applying pressure, using a perforated metal tray. The molds were removed following polymerization and either disinfected (by soaking in one of the solutions for 15 minutes) or not disinfected. The samples were thus divided into 16 groups (n=5). Surface detail reproduction and dimensional accuracy were evaluated using optical microscopy to assess the 20-μm line over its entire 25 mm length. The dimensional accuracy results (%) were subjected to analysis of variance (ANOVA) and the means were compared by Tukey's test (α=5%). The 20-μm line was completely reproduced by all elastomeric impression materials, regardless of disinfection procedure. There was no significant difference between the control group and molds disinfected with peracetic acid for the elastomeric materials Impregum Soft (polyether) and Aquasil Ultra LV (polyvinylsiloxane). The high-level disinfectant peracetic acid would therefore be the disinfectant of choice. Sociedad Argentina de Investigación Odontológica.
Feature Selection Methods for Zero-Shot Learning of Neural Activity.
Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
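One way to sketch the correlation-based stability criterion mentioned above: score each feature by how consistently its across-stimulus response profile repeats over presentations of the same stimuli. The data here is synthetic, not the fMRI/ECoG recordings:

```python
# Correlation-based feature stability: a reliable feature's stimulus
# tuning should correlate across repeated presentations.
import numpy as np

def stability_scores(responses):
    """responses: array (n_repetitions, n_stimuli, n_features).
    Score each feature by the mean pairwise correlation of its
    across-stimulus profile between repetitions."""
    n_rep, _, n_feat = responses.shape
    scores = np.zeros(n_feat)
    for f in range(n_feat):
        profiles = responses[:, :, f]        # (n_rep, n_stimuli)
        corr = np.corrcoef(profiles)         # repetition-by-repetition
        iu = np.triu_indices(n_rep, k=1)     # off-diagonal pairs only
        scores[f] = corr[iu].mean()
    return scores

rng = np.random.default_rng(0)
signal = rng.normal(size=10)                         # shared tuning
stable = signal + 0.1 * rng.normal(size=(4, 10))     # reliable feature
noisy = rng.normal(size=(4, 10))                     # unreliable feature
responses = np.stack([stable, noisy], axis=2)
s = stability_scores(responses)
print(s[0] > s[1])   # the reliable feature should score higher
```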
3D Higher Order Modeling in the BEM/FEM Hybrid Formulation
NASA Technical Reports Server (NTRS)
Fink, P. W.; Wilton, D. R.
2000-01-01
Higher order divergence- and curl-conforming bases have been shown to provide significant benefits, in both convergence rate and accuracy, in the 2D hybrid finite element/boundary element formulation (P. Fink and D. Wilton, National Radio Science Meeting, Boulder, CO, Jan. 2000). A critical issue in achieving the potential for accuracy of the approach is the accurate evaluation of all matrix elements. These involve products of high order polynomials and, in some instances, singular Green's functions. In the 2D formulation, the use of a generalized Gaussian quadrature method was found to greatly facilitate the computation and to improve the accuracy of the boundary integral equation self-terms. In this paper, a 3D, hybrid electric field formulation employing higher order bases and higher order elements is presented. The improvements in convergence rate and accuracy, compared to those resulting from lower order modeling, are established. Techniques developed to facilitate the computation of the boundary integral self-terms are also shown to improve the accuracy of these terms. Finally, simple preconditioning techniques are used in conjunction with iterative solution procedures to solve the resulting linear system efficiently. In order to handle the boundary integral singularities in the 3D formulation, the parent element (either a triangle or rectangle) is subdivided into a set of sub-triangles with a common vertex at the singularity. The contribution to the integral from each of the sub-triangles is computed using the Duffy transformation to remove the singularity. This method is shown to greatly facilitate the self-term computation when the bases are of higher order. In addition, the sub-triangles can be further divided to achieve near-arbitrary accuracy in the self-term computation. An efficient method for subdividing the parent element is presented.
The accuracy obtained using higher order bases is compared to that obtained using lower order bases when the number of unknowns is approximately equal. Also, convergence rates obtained using higher order bases are compared to those obtained with lower order bases for selected sample
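The Duffy transformation's effect can be illustrated on a model problem: integrating 1/r over a triangle with the singularity at a vertex. Substituting y = x·t maps the triangle 0 ≤ y ≤ x ≤ 1 to the unit square, and the Jacobian x cancels the singularity exactly; the exact value is ln(1+√2). This is a hedged sketch of the idea, not the paper's 3D implementation:

```python
# Duffy transformation demo: integrate 1/sqrt(x^2 + y^2) over the
# triangle 0 <= y <= x <= 1, singular at the origin vertex.  With
# y = x*t the integrand times the Jacobian x becomes 1/sqrt(1 + t^2),
# which is smooth, so ordinary Gauss-Legendre quadrature converges fast.
import math
import numpy as np

def duffy_integral(n=12):
    # Tensor-product Gauss-Legendre rule on the unit square in (x, t).
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (nodes + 1.0)      # map [-1, 1] -> [0, 1]
    w = 0.5 * weights
    total = 0.0
    for xi, wi in zip(x, w):
        for tj, wj in zip(x, w):
            total += wi * wj / math.sqrt(1.0 + tj * tj)
    return total

exact = math.log(1.0 + math.sqrt(2.0))   # asinh(1)
print(abs(duffy_integral() - exact) < 1e-10)  # → True
```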
Empirical evidence of the importance of comparative studies of diagnostic test accuracy.
Takwoingi, Yemisi; Leeflang, Mariska M G; Deeks, Jonathan J
2013-04-02
Systematic reviews that "compare" the accuracy of 2 or more tests often include different sets of studies for each test. To investigate the availability of direct comparative studies of test accuracy and to assess whether summary estimates of accuracy differ between meta-analyses of noncomparative and comparative studies. Systematic reviews in any language from the Database of Abstracts of Reviews of Effects and the Cochrane Database of Systematic Reviews from 1994 to October 2012. 1 of 2 assessors selected reviews that evaluated at least 2 tests and identified meta-analyses that included both noncomparative studies and comparative studies. 1 of 3 assessors extracted data about review and study characteristics and test performance. 248 reviews compared test accuracy; of the 6915 studies, 2113 (31%) were comparative. Thirty-six reviews (with 52 meta-analyses) had adequate studies to compare results of noncomparative and comparative studies by using a hierarchical summary receiver-operating characteristic meta-regression model for each test comparison. In 10 meta-analyses, noncomparative studies ranked tests in the opposite order of comparative studies. A total of 25 meta-analyses showed more than a 2-fold discrepancy in the relative diagnostic odds ratio between noncomparative and comparative studies. Differences in accuracy estimates between noncomparative and comparative studies were greater than expected by chance (P < 0.001). A paucity of comparative studies limited exploration of direction in bias. Evidence derived from noncomparative studies often differs from that derived from comparative studies. Robustly designed studies in which all patients receive all tests or are randomly assigned to receive one or other of the tests should be more routinely undertaken and are preferred for evidence to guide test selection. National Institute for Health Research (United Kingdom).
NASA Astrophysics Data System (ADS)
Volten, H.; Bergwerff, J. B.; Haaima, M.; Lolkema, D. E.; Berkhout, A. J. C.; van der Hoff, G. R.; Potma, C. J. M.; Wichink Kruit, R. J.; van Pul, W. A. J.; Swart, D. P. J.
2011-08-01
We present two Differential Optical Absorption Spectroscopy (DOAS) instruments built at RIVM, the RIVM DOAS and the miniDOAS. Both instruments provide virtually interference free measurements of NH3 concentrations in the atmosphere, since they measure over an open path, without suffering from inlet problems or interference problems by ammonium aerosols dissociating on tubes or filters. They measure concentrations up to at least 200 μg m-3, have a fast response, low maintenance demands, and a high up-time. The RIVM DOAS has a high accuracy of typically 0.15 μg m-3 for ammonia over 5-min averages and over a total light path of 100 m. The miniDOAS has been developed for application in measurement networks such as the Dutch National Air Quality Monitoring Network (LML). Compared to the RIVM DOAS it has a similar accuracy, but is significantly reduced in size, costs, and handling complexity. The RIVM DOAS and miniDOAS results showed excellent agreement (R2 = 0.996) during a field measurement campaign in Vredepeel, the Netherlands. This measurement site is located in an agricultural area and is characterized by highly variable, but on average high ammonia concentrations in the air. The RIVM-DOAS and miniDOAS results were compared to the results of the AMOR instrument, a continuous-flow wet denuder system, which is currently used in the LML. Averaged over longer time spans of typically a day the (mini)DOAS and AMOR results agree reasonably well, although an offset of the AMOR values compared to the (mini)DOAS results exists. On short time scales the (mini)DOAS shows a faster response and does not show the memory effects due to inlet tubing and transport of absorption fluids encountered by the AMOR. Due to its high accuracy, high uptime, low maintenance and its open path, the (mini)DOAS shows a good potential for flux measurements by using two (or more) systems in a gradient set-up and applying the aerodynamic gradient technique.
Optimisation of shape kernel and threshold in image-processing motion analysers.
Pedrocchi, A; Baroni, G; Sada, S; Marcon, E; Pedotti, A; Ferrigno, G
2001-09-01
The aim of the work is to optimise the image processing of a motion analyser. This is to improve accuracy, which is crucial for neurophysiological and rehabilitation applications. A new motion analyser, ELITE-S2, for installation on the International Space Station is described, with the focus on image processing. Important improvements are expected in the hardware of ELITE-S2 compared with ELITE and previous versions (ELITE-S and Kinelite). The core algorithm for marker recognition was based on the current ELITE version, using the cross-correlation technique. This technique was based on the matching of the expected marker shape, the so-called kernel, with image features. Optimisation of the kernel parameters was achieved using a genetic algorithm, taking into account noise rejection and accuracy. Optimisation was achieved by performing tests on six highly precise grids (with marker diameters ranging from 1.5 to 4 mm), representing all allowed marker image sizes, and on a noise image. The results of comparing the optimised kernels and the current ELITE version showed a great improvement in marker recognition accuracy, while noise rejection characteristics were preserved. An average increase in marker co-ordinate accuracy of +22% was achieved, corresponding to a mean accuracy of 0.11 pixel in comparison with 0.14 pixel, measured over all grids. An improvement of +37%, corresponding to an improvement from 0.22 pixel to 0.14 pixel, was observed over the grid with the biggest markers.
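The kernel-matching step the abstract builds on can be sketched as a sliding normalized cross-correlation between an expected marker shape and image patches. This is illustrative only; the actual ELITE kernel parameters and genetic-algorithm optimisation are not reproduced here:

```python
# Marker detection by sliding normalized cross-correlation.
import numpy as np

def best_match(image, kernel):
    """Slide kernel over image; return (row, col) of the patch with the
    highest normalized cross-correlation (cosine similarity) score."""
    kh, kw = kernel.shape
    kn = np.linalg.norm(kernel)
    best, pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - kh + 1):
        for c in range(image.shape[1] - kw + 1):
            patch = image[r:r+kh, c:c+kw]
            denom = np.linalg.norm(patch) * kn
            if denom == 0:
                continue                      # skip empty regions
            score = (patch * kernel).sum() / denom
            if score > best:
                best, pos = score, (r, c)
    return pos

image = np.zeros((20, 20))
image[7:10, 12:15] = 1.0        # a bright 3x3 "marker"
kernel = np.ones((3, 3))        # expected marker shape
print(best_match(image, kernel))  # → (7, 12)
```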
NASA Astrophysics Data System (ADS)
Tseng, Chien-Hsun
2018-06-01
This paper aims to develop a multidimensional wave digital filtering network for predicting the static and dynamic behaviors of composite laminates based on the FSDT. The resultant network is thus an integrated platform that can compute not only the free vibration but also the bending deflection of moderately thick symmetric laminated plates with low plate side-to-thickness ratios (≤20). Safeguarded by the Courant-Friedrichs-Lewy stability condition with the least restriction in terms of optimization technique, the present method offers numerically high accuracy, stability and efficiency across a wide range of modulus ratios for FSDT laminated plates. Instead of using a constant shear correction factor (SCF) with limited numerical accuracy for the bending deflection, an optimum SCF is sought by looking for a minimum ratio of change in the transverse shear energy. In this way, the method can predict comparably accurate results for certain cases of bending deflection. Extensive simulation results for the prediction of maximum bending deflection demonstrate that the present method outperforms those based on the higher-order shear deformation and layerwise plate theories. To the best of our knowledge, this is the first work to show that an optimal selection of the SCF can significantly increase the accuracy of FSDT-based laminate analysis, especially compared to the higher-order theory, which disclaims any correction. The highest-accuracy solutions are compared with the 3D elasticity equilibrium solution.
NASA Astrophysics Data System (ADS)
Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.
2017-12-01
We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate the availability and prices of key staples, which in turn can inform decisions about targeting humanitarian response such as food aid. Our objective is to identify, for a given region, grain, and time of year, what type of model and/or earth observation product can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatial and time-varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for determining predictive accuracy. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence-level farms.
Herrera, VM; Casas, JP; Miranda, JJ; Perel, P; Pichardo, R; González, A; Sanchez, JR; Ferreccio, C; Aguilera, X; Silva, E; Oróstegui, M; Gómez, LF; Chirinos, JA; Medina-Lezama, J; Pérez, CM; Suárez, E; Ortiz, AP; Rosero, L; Schapochnik, N; Ortiz, Z; Ferrante, D; Diaz, M; Bautista, LE
2009-01-01
Background Cut points for defining obesity have been derived from mortality data among Whites from Europe and the United States, and their accuracy in screening for high risk of coronary heart disease (CHD) in other ethnic groups has been questioned. Objective To compare the accuracy of, and to define ethnic- and gender-specific optimal cut points for, body mass index (BMI), waist circumference (WC) and waist-to-hip ratio (WHR) when used in screening for high risk of CHD in Latin-American and US populations. Methods We estimated the accuracy and optimal cut points for BMI, WC and WHR to screen for CHD risk in Latin Americans (n=18,976), non-Hispanic Whites (Whites; n=8956), non-Hispanic Blacks (Blacks; n=5205) and Hispanics (n=5803). High risk of CHD was defined as a 10-year risk ≥20% (Framingham equation). The area under the receiver operating characteristic curve (AUC) and the misclassification-cost term were used to assess accuracy and to identify optimal cut points. Results WHR had the highest AUC in all ethnic groups (from 0.75 to 0.82) and BMI had the lowest (from 0.50 to 0.59). The optimal cut point for BMI was similar across ethnic/gender groups (27 kg/m2). In women, cut points for WC (94 cm) and WHR (0.91) were consistent by ethnicity. In men, cut points for WC and WHR varied significantly with ethnicity: from 91 cm in Latin Americans to 102 cm in Whites, and from 0.94 in Latin Americans to 0.99 in Hispanics, respectively. Conclusion WHR is the most accurate anthropometric indicator for screening for high risk of CHD, whereas BMI is almost uninformative. The same BMI cut point should be used in all men and women. Unique cut points for WC and WHR should be used in all women, but ethnic-specific cut points seem warranted among men. PMID:19238159
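The accuracy analysis described in this abstract, a ROC AUC plus a cut point minimising a misclassification-cost term, can be sketched in a few lines. The data below are synthetic and invented purely for illustration, and the equal unit costs are an assumption, not the study's choices.

```python
import numpy as np

def auc(score, label):
    """ROC AUC via the Mann-Whitney identity: the probability that a
    randomly chosen positive outscores a randomly chosen negative."""
    pos, neg = score[label == 1], score[label == 0]
    greater = np.mean(pos[:, None] > neg[None, :])
    ties = np.mean(pos[:, None] == neg[None, :])
    return float(greater + 0.5 * ties)

def optimal_cut_point(score, label, cost_fn=1.0, cost_fp=1.0):
    """Cut point minimising the misclassification-cost term
    cost_fn * P(false negative) + cost_fp * P(false positive)."""
    best_cut, best_cost = None, np.inf
    for cut in np.unique(score):
        pred = score >= cut
        cost = (cost_fn * np.mean((label == 1) & ~pred)
                + cost_fp * np.mean((label == 0) & pred))
        if cost < best_cost:
            best_cut, best_cost = float(cut), cost
    return best_cut

# Synthetic waist-to-hip ratios; high-CHD-risk subjects shifted upwards
rng = np.random.default_rng(0)
whr = np.concatenate([rng.normal(0.88, 0.05, 500),    # low risk
                      rng.normal(0.97, 0.05, 100)])   # high risk
risk = np.concatenate([np.zeros(500), np.ones(100)])
print(auc(whr, risk), optimal_cut_point(whr, risk))
```

With unequal prevalence, the cost-minimising threshold shifts toward the larger class, which is why cost-sensitive cut points can differ from the naive midpoint between group means.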
Larmer, S G; Sargolzaei, M; Schenkel, F S
2014-05-01
Genomic selection requires a large reference population to accurately estimate single nucleotide polymorphism (SNP) effects. In some Canadian dairy breeds, the available reference populations are not large enough for accurate estimation of SNP effects for traits of interest. If marker phase is highly consistent across multiple breeds, it is theoretically possible to increase the accuracy of genomic prediction for one or all breeds by pooling several breeds into a common reference population. This study investigated the extent of linkage disequilibrium (LD) in 5 major dairy breeds using a 50,000 (50K) SNP panel and in 3 of the same breeds using the 777,000 (777K) SNP panel. Correlation of pair-wise SNP phase was also investigated on both panels. The level of LD was measured using the squared correlation of alleles at 2 loci (r(2)), and the consistency of SNP gametic phase across breeds was assessed by correlating the signed square roots of these values. Because of the high cost of the 777K panel, the accuracy of imputation from lower-density marker panels [6,000 (6K) or 50K] was examined both within breed and using a multi-breed reference population in Holstein, Ayrshire, and Guernsey. Imputation was carried out using FImpute V2.2 and Beagle 3.3.2 software. Imputation accuracies were then calculated as both the proportion of correctly filled-in SNP (concordance rate) and allelic R(2). Computation time was also explored to determine the efficiency of the different imputation algorithms. Analysis showed that LD values >0.2 were found in all breeds at distances at or below the average adjacent pair-wise distance between SNP on the 50K panel. Correlations of r values, however, did not reach high levels (<0.9) at these distances. High correlations of SNP phase between breeds were observed (>0.94) at the average pair-wise distances of the 777K SNP panel.
High concordance rates (0.968-0.995) and allelic R(2) (0.946-0.991) were found for all breeds when imputation was carried out with FImpute from 50K to 777K. Imputation accuracy for Guernsey and Ayrshire was slightly lower when using Beagle. Computing time was significantly greater with Beagle, with all comparable procedures taking 9 to 13 times longer than with FImpute. These findings suggest that use of a multi-breed reference population might increase prediction accuracy using the 777K SNP panel and that 777K genotypes can be efficiently and effectively imputed from the lower-density 50K SNP panel. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
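The two imputation accuracy metrics reported above, concordance rate and allelic R(2), can be computed as sketched below. This is a generic illustration on synthetic 0/1/2 genotype codes, not the internals of FImpute or Beagle; the 2% error rate and data dimensions are invented.

```python
import numpy as np

def concordance_rate(true_geno, imputed_geno):
    """Proportion of genotypes filled in with the correct 0/1/2 call."""
    return float(np.mean(true_geno == imputed_geno))

def allelic_r2(true_dosage, imputed_dosage):
    """Squared Pearson correlation of true vs imputed allele dosage,
    computed per SNP (column) and averaged."""
    vals = []
    for t, i in zip(true_dosage.T, imputed_dosage.T):
        if t.std() > 0 and i.std() > 0:      # skip monomorphic columns
            vals.append(np.corrcoef(t, i)[0, 1] ** 2)
    return float(np.mean(vals))

# Toy data: 200 animals x 50 SNPs coded 0/1/2, imputation with a 2% error rate
rng = np.random.default_rng(1)
true = rng.integers(0, 3, size=(200, 50))
imputed = true.copy()
errors = rng.random(true.shape) < 0.02
imputed[errors] = rng.integers(0, 3, size=int(errors.sum()))
print(concordance_rate(true, imputed), allelic_r2(true, imputed))
```

Allelic R(2) penalises errors at rare alleles more heavily than raw concordance does, which is why studies such as this one report both.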
A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System
Zhou, Guanwu; Zhao, Yulong; Guo, Fangfang; Xu, Wenju
2014-01-01
Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift and varies nonlinearly with temperature. Here, a smart temperature compensation system to reduce this effect on accuracy is proposed. First, an effective conditioning circuit for signal processing and data acquisition is designed, and the hardware to implement the system is fabricated. Then, a program is developed in LabVIEW which incorporates an extreme learning machine (ELM) as the calibration algorithm for the pressure drift. After calibration on the computer, the implementation of the algorithm was ported to a microcontroller unit (MCU). Practical pressure measurement experiments are carried out to verify the system's performance. Temperature compensation is achieved over the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM achieves higher accuracy and is more suitable for batch compensation because of its better generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10−5/°C and 29.5 × 10−5/°C before compensation, and are improved to 0.13% FS, 0.15% FS, 1.17 × 10−5/°C and 2.1 × 10−5/°C, respectively, after compensation. The experimental results demonstrate that the proposed system meets the temperature compensation and high-accuracy requirements of the sensor. PMID:25006998
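The ELM calibration idea described above can be sketched minimally: a random, untrained hidden layer plus a single least-squares solve for the output weights, which is what makes ELM training fast. The drift model, constants and network size below are invented for illustration and are not the paper's sensor data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration data: raw sensor output drifts nonlinearly with temperature
temp = rng.uniform(-40.0, 85.0, 500)
pressure = rng.uniform(0.0, 1.0, 500)                          # true applied pressure
raw = pressure + 0.2 * (temp / 85) + 0.3 * (temp / 85) ** 2    # drifted reading

X = np.column_stack([raw, temp])
X = (X - X.mean(axis=0)) / X.std(axis=0)              # normalise network inputs

# Extreme learning machine: random fixed hidden layer, least-squares output layer
n_hidden = 100
W = rng.normal(size=(2, n_hidden))                    # input weights, never trained
b = rng.normal(size=n_hidden)                         # biases, never trained
H = np.tanh(X @ W + b)                                # hidden activations
beta, *_ = np.linalg.lstsq(H, pressure, rcond=None)   # only this layer is "learned"

compensated = H @ beta
rmse = float(np.sqrt(np.mean((compensated - pressure) ** 2)))
print(rmse)
```

Because only the output weights are fitted, training reduces to one linear solve, which is consistent with the abstract's claim of fast learning suitable for batch compensation.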
Accuracy of electronic implant torque controllers following time in clinical service.
Mitrani, R; Nicholls, J I; Phillips, K M; Ma, T
2001-01-01
Tightening of the screws in implant-supported restorations has been reported to be problematic: if the applied torque is too low, screw loosening occurs, and if the torque is too high, screw fracture can take place. Thus, accuracy of the torque driver is of the utmost importance. This study evaluated 4 new electronic torque drivers (controls) and 10 test electronic torque drivers that had been in clinical service for a minimum of 5 years. Torque values of the test drivers were measured and compared with the control values using a 1-way analysis of variance. Torque delivery accuracy was measured using a technique that simulated the clinical situation. In vivo, the torque driver turns the screw until the selected tightening torque is reached. In this laboratory experiment, an implant, along with an attached abutment and abutment gold screw, was held firmly in a Tohnichi torque gauge. Calibration accuracy for the Tohnichi is ±3% of the scale value. During torque measurement, the gold screw turned a minimum of 180 degrees before contact was made between the screw and abutment. Three torque values (10, 20, and 32 N-cm) were evaluated at both high- and low-speed settings. The recorded torque measurements indicated that the 10 test electronic torque drivers maintained a torque delivery accuracy equivalent to that of the 4 new (unused) units. Judging from the torque output values obtained from the 10 test units, accuracy did not change significantly over the 5-year period of clinical service.
Wilk, Brian L
2015-01-01
Over the course of the past two to three decades, intraoral digital impression systems have gained acceptance due to high accuracy and ease of use as they have been incorporated into the fabrication of dental implant restorations. The use of intraoral digital impressions enables the clinician to produce accurate restorations without the unpleasant aspects of traditional impression materials and techniques. This article discusses the various types of digital impression systems and their accuracy compared to traditional impression techniques. The cost, time, and patient satisfaction components of both techniques will also be reviewed.
The effect of bandwidth on filter instrument total ozone accuracy
NASA Technical Reports Server (NTRS)
Basher, R. E.
1977-01-01
The effect of the width and shape of the New Zealand filter instrument's passbands on measured total-ozone accuracy is determined using a numerical model of the spectral measurement process. The model enables the calculation of corrections for the 'bandwidth-effect' error and shows that highly attenuating passband skirts and well-suppressed leakage bands are at least as important as narrow half-bandwidths. Over typical ranges of airmass and total ozone, the range in the bandwidth-effect correction is about 2% in total ozone for the filter instrument, compared with about 1% for the Dobson instrument.
A novel ultra-wideband 80 GHz FMCW radar system for contactless monitoring of vital signs.
Wang, Siying; Pohl, Antje; Jaeschke, Timo; Czaplik, Michael; Köny, Marcus; Leonhardt, Steffen; Pohl, Nils
2015-01-01
In this paper, an ultra-wideband 80 GHz FMCW radar system for contactless monitoring of respiration and heart rate is investigated and compared to a standard monitoring system with ECG and CO2 measurements as reference. The novel FMCW radar enables detection of the physiological displacement of the skin surface with submillimeter accuracy. This high accuracy is achieved with a large bandwidth of 10 GHz and the combination of intermediate-frequency and phase evaluation. The concept is validated with a radar system simulation, and experimental measurements are performed with different radar sensor positions and orientations.
Cascade Classification with Adaptive Feature Extraction for Arrhythmia Detection.
Park, Juyoung; Kang, Mingon; Gao, Jean; Kim, Younghoon; Kang, Kyungtae
2017-01-01
Detecting arrhythmia from ECG data is now feasible on mobile devices, but in this environment it is necessary to trade computational efficiency against accuracy. We propose an adaptive strategy for feature extraction that only considers normalized beat morphology features when running in a resource-constrained environment, but in a high-performance environment takes account of a wider range of ECG features. This process is augmented by a cascaded random forest classifier. Experiments on data from the MIT-BIH Arrhythmia Database showed classification accuracies from 96.59% to 98.51%, which are comparable to state-of-the-art methods.
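The cascade idea, running a cheap first-stage classifier and escalating to a richer feature set only when the first stage is unconfident, can be sketched as follows. The toy "models" and data below are invented stand-ins for the paper's random forests, purely to show the control flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def cheap_model(x):
    """First stage: uses only one cheap morphology feature."""
    p1 = 1.0 / (1.0 + np.exp(-4.0 * x[0]))
    return np.array([1.0 - p1, p1])

def rich_model(x):
    """Second stage: uses the full feature set (costlier to extract)."""
    p1 = 1.0 / (1.0 + np.exp(-4.0 * x.mean()))
    return np.array([1.0 - p1, p1])

def cascade_predict(x, threshold=0.9):
    """Escalate to the rich model only when the cheap stage is unconfident."""
    p = cheap_model(x)
    if p.max() >= threshold:
        return int(p.argmax())
    return int(rich_model(x).argmax())

# Toy "beats": class 1 features centred at +0.8, class 0 at -0.8
X = np.vstack([rng.normal(0.8, 0.5, (200, 8)), rng.normal(-0.8, 0.5, (200, 8))])
y = np.array([1] * 200 + [0] * 200)
pred = np.array([cascade_predict(x) for x in X])
acc = float(np.mean(pred == y))
print(acc)
```

The design point is that easy beats never pay for the expensive feature extraction, which is what makes the cascade attractive on resource-constrained mobile hardware.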
Two-screen single-shot electron spectrometer for laser wakefield accelerated electron beams.
Soloviev, A A; Starodubtsev, M V; Burdonov, K F; Kostyukov, I Yu; Nerush, E N; Shaykin, A A; Khazanov, E A
2011-04-01
Laser wakefield accelerated electron beams can deviate substantially from the axis of the system, which distinguishes them greatly from the beams of conventional accelerators. When energy is measured with a permanent-magnet electron spectrometer, the deviation angle can affect accuracy, especially at high energies. A two-screen single-shot electron spectrometer that correctly accounts for variations in the angle of entry is considered. The spectrometer design significantly enhances the accuracy of measuring narrow electron beams compared with a one-screen spectrometer of analogous magnetic field, size, and angular acceptance. © 2011 American Institute of Physics
Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa
2018-01-01
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2*-weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2-weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2*-weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2*-weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity. 
However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2-weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked. Copyright © 2017 Elsevier Inc. All rights reserved.
Freire, Analía Verónica; Escobar, María Eugenia; Gryngarten, Mirta Graciela; Arcari, Andrea Josefina; Ballerini, María Gabriela; Bergadá, Ignacio; Ropelato, María Gabriela
2013-03-01
The GnRH test is the gold standard to confirm the diagnosis of central precocious puberty (CPP); however, this compound is not always readily available. Diagnostic accuracy of subcutaneous GnRH analogue tests compared to the classical GnRH test has not been reported. To evaluate the diagnostic accuracy of the Triptorelin test (index test) compared to the GnRH test (reference test) in girls with suspicion of CPP, a prospective, case-control, randomized clinical trial was performed. CPP or precocious thelarche (PT) was diagnosed according to maximal LH response to the GnRH test and clinical characteristics during follow-up. Forty-six girls with premature breast development randomly underwent two tests: (i) intravenous GnRH 100 μg, (ii) subcutaneous Triptorelin acetate (0.1 mg/m(2), to a maximum of 0.1 mg) with blood sampling at 0, 3 and 24 h for LH, FSH and estradiol ascertainment. Gonadotrophin and estradiol responses to the Triptorelin test were measured by ultrasensitive assays. Clinical features were similar between the CPP (n = 33) and PT (n = 13) groups. Using receiver operating characteristic curves, a maximal LH response (LH-3 h) to the Triptorelin test ≥ 7 IU/l by immunofluorometric assay (IFMA) or ≥ 8 IU/l by electrochemiluminescence immunoassay (ECLIA) confirmed the diagnosis of CPP with specificity of 1.00 (95% CI: 0.75-1.00) and sensitivity 0.76 (95% CI: 0.58-0.89). Considering either LH-3 h or maximal estradiol response at 24 h (cut-off value, 295 pm), maintaining the specificity at 1.00, the test sensitivity increased to 0.94 (95% CI: 0.80-0.99) and the diagnostic efficiency to 96%. The Triptorelin test had high accuracy for the differential diagnosis of CPP vs PT in girls, providing a valid alternative to the classical GnRH test. This test also allowed a comprehensive evaluation of the pituitary-ovarian axis. © 2012 Blackwell Publishing Ltd.
El Shayeb, Mohamed; Topfer, Leigh-Ann; Stafinski, Tania; Pawluk, Lawrence; Menon, Devidas
2014-01-01
Background: Greater awareness of sleep-disordered breathing and rising obesity rates have fueled demand for sleep studies. Sleep testing using level 3 portable devices may expedite diagnosis and reduce the costs associated with level 1 in-laboratory polysomnography. We sought to assess the diagnostic accuracy of level 3 testing compared with level 1 testing and to identify the appropriate patient population for each test. Methods: We conducted a systematic review and meta-analysis of comparative studies of level 3 versus level 1 sleep tests in adults with suspected sleep-disordered breathing. We searched 3 research databases and grey literature sources for studies that reported on diagnostic accuracy parameters or disease management after diagnosis. Two reviewers screened the search results, selected potentially relevant studies and extracted data. We used a bivariate mixed-effects binary regression model to estimate summary diagnostic accuracy parameters. Results: We included 59 studies involving a total of 5026 evaluable patients (mostly patients suspected of having obstructive sleep apnea). Of these, 19 studies were included in the meta-analysis. The estimated area under the receiver operating characteristics curve was high, ranging between 0.85 and 0.99 across different levels of disease severity. Summary sensitivity ranged between 0.79 and 0.97, and summary specificity ranged between 0.60 and 0.93 across different apnea–hypopnea cut-offs. We saw no significant difference in the clinical management parameters between patients who underwent either test to receive their diagnosis. Interpretation: Level 3 portable devices showed good diagnostic performance compared with level 1 sleep tests in adult patients with a high pretest probability of moderate to severe obstructive sleep apnea and no unstable comorbidities. 
For patients suspected of having other types of sleep-disordered breathing or sleep disorders not related to breathing, level 1 testing remains the reference standard. PMID:24218531
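The diagnostic accuracy parameters summarised throughout this review all derive from a 2x2 table of index test versus reference standard. A minimal helper (with hypothetical counts, not figures from the meta-analysis) shows the definitions:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard 2x2-table parameters for an index test vs a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a level 3 device against level 1 polysomnography
result = diagnostic_accuracy(tp=45, fp=5, fn=8, tn=42)
print(result)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence, which is why the review stresses pretest probability when recommending level 3 testing.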
On the convergence and accuracy of the FDTD method for nanoplasmonics.
Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora
2015-04-20
Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers were published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna - for the first structure, comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size down to very small dimensions, we compare the simple Drude model with the Drude model augmented with a two-critical-points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with (at least) the two-critical-points correction must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼1%. 
We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near field does not necessarily produce large errors, but does reduce the computational resources required.
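For readers unfamiliar with the method under study, a minimal 1-D vacuum FDTD (Yee) update illustrates the leap-frog pattern and the role of the Courant number. The paper's solver is a full 3-D dispersive-material code; everything below (grid size, source, Courant number) is a toy choice for illustration only.

```python
import numpy as np

nx, nt = 200, 150
courant = 0.5                  # Courant number; must be <= 1 in 1-D for stability
ez = np.zeros(nx)              # E field sampled on integer grid points
hy = np.zeros(nx - 1)          # H field sampled on half-integer grid points

for t in range(nt):
    hy += courant * np.diff(ez)                 # leap-frog H update (half step)
    ez[1:-1] += courant * np.diff(hy)           # leap-frog E update
    ez[50] += np.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source

print(float(np.max(np.abs(ez))))
```

The staggered E/H sampling is exactly the Yee-cell structure whose staircase approximation of curved metal surfaces drives the fine-mesh requirements reported in the abstract.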
Dose Accuracy and Injection Force of Different Insulin Glargine Pens
Friedrichs, Arnd; Bohnet, Janine; Korger, Volker; Adler, Steffen; Schubert-Zsilavecz, Manfred; Abdel-Tawab, Mona
2013-01-01
Background Dose accuracy and injection force, representing key parameters of insulin pens, were determined for three pens delivering insulin glargine-based copies, Pen Royale (WR) and DispoPen (WD) for Glaritus® (Wockhardt) and GanLee Pen (GL) for Basalin® (Gan & Lee), compared with pens of the originator, ClikSTAR® (CS) and SoloSTAR® (SS) for Lantus® (Sanofi). Methods Using the weighing procedure recommended by DIN EN ISO 11608–1:2000, dose accuracy was evaluated based on nonrandomized delivery of low (5 U), mid (30 U), and high (60 U) dosage levels. Injection force was measured by dispensing the maximum dose of insulin (60 U for the GL, WR, and WD; 80 U for the SS and CS) at dose speeds of 6 and 10 U/s. Results All tested pens delivered comparable average doses within the DIN EN ISO 11608–1:2000 limits at all dosage levels. The GL revealed a higher coefficient of variation (CV) at 5 U, and the WR and WD had higher CVs at all dosage levels compared with the CS and SS. Injection force was higher for the WR, WD, and GL compared with the CS and SS at both dose speeds. In contrast to the CS and SS with an end-of-content feature, doses exceeding the remaining insulin could be dialed with the WR, GL, and WD and, apparently, dispensed with the WD. Conclusions All pens fulfilled the dose accuracy requirements defined by DIN EN ISO 11608–1:2000 standards at all three dosage levels, with the WR, WD, and GL showing higher dosage variability and injection force compared with the SS and CS. Thus, the devices that deliver insulin glargine copies show different performance characteristics compared with the originator. J Diabetes Sci Technol 2013;7(5):1346–1353 PMID:24124963
NASA Astrophysics Data System (ADS)
Zhu, Jing; Wang, Xingshu; Wang, Jun; Dai, Dongkai; Xiong, Hao
2016-10-01
Previous studies have shown that the attitude error in a single-axis rotation INS/GPS integrated system tracks the high-frequency component of the deflections of the vertical (DOV) with a fixed delay and tracking error. This paper analyses the influence of the nominal process noise covariance matrix Q on the tracking error as well as the response delay, and proposes a Q-adjusting technique to obtain an attitude error that tracks the DOV better. Simulation results show that different settings of Q lead to different response delays and tracking errors; there exists an optimal Q which yields a minimum tracking error and a comparatively short response delay, and for systems of different accuracy, different Q-adjusting strategies should be adopted. In this way, the accuracy of DOV estimation using the attitude error as the observation can be improved. According to the simulation results, the DOV estimation accuracy after applying the Q-adjusting technique is improved by approximately 23% and 33%, respectively, compared to the Earth model EGM2008 and the direct attitude difference method.
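The role of the process noise covariance Q can be illustrated with a scalar random-walk Kalman filter tracking a slowly varying signal: a larger q raises the gain, shortening the response delay at the cost of a noisier estimate. This is an analogy only, with invented signal and noise levels, not the paper's INS/GPS model.

```python
import numpy as np

def track(meas, q, r=0.01):
    """Scalar random-walk Kalman filter; q is the process noise variance.
    Larger q -> higher gain -> shorter response delay but noisier estimate."""
    x, p, out = 0.0, 1.0, []
    for z in meas:
        p += q                 # predict step (random-walk state model)
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # measurement update
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

t = np.arange(2000)
truth = np.sin(2 * np.pi * t / 400)            # slowly varying "DOV-like" signal
rng = np.random.default_rng(0)
meas = truth + 0.1 * rng.standard_normal(t.size)

err_small_q = float(np.mean(np.abs(track(meas, q=1e-6) - truth)))  # sluggish
err_large_q = float(np.mean(np.abs(track(meas, q=1e-3) - truth)))  # responsive
print(err_small_q, err_large_q)
```

The sweep over q mirrors the paper's Q-adjusting idea: somewhere between the sluggish and jittery extremes lies a setting that minimises the total tracking error.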
Development of a Neuromorphic SIFT Operator with Application to High-Speed Image Matching
NASA Astrophysics Data System (ADS)
Shankayi, M.; Saadatseresht, M.; Bitetto, M. A. V.
2015-12-01
There has always been a speed/accuracy trade-off in the photogrammetric mapping process, including feature detection and matching. Most research has improved algorithm speed through simplifications or software modifications that compromise the accuracy of the image matching process. This research attempts to improve speed without degrading the accuracy of the same algorithm by using neuromorphic techniques. We have developed a general design for a neuromorphic ASIC to handle algorithms such as SIFT, and have investigated the neural assignment in each step of the SIFT algorithm. With a rough estimate based on the delay of the elements used, including MACs and comparators, we estimated the resulting chip's performance for three scenarios: Full HD video (videogrammetry), 24 MP imagery (UAV photogrammetry), and an 88 MP image sequence. Our estimates are approximately 3000 fps for Full HD video, 250 fps for the 24 MP image sequence, and 68 fps for the 88 MP UltraCam image sequence, which would be a substantial improvement over current photogrammetric processing systems. We also estimated a power consumption of less than 10 W, far below that of current workflows.
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.
Rajagopal, Gayathri; Palaniswamy, Ramamoorthy
2015-01-01
This research proposes a multimodal, multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal, multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion. The features at the feature level are raw biometric data, which contain rich information compared to the decision and matching-score levels; hence, information fused at the feature level is expected to yield improved recognition accuracy. However, feature-level fusion suffers from the curse of dimensionality; here, PCA (principal component analysis) is used to reduce the dimensionality of the high-dimensional feature sets. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested on a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
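Feature-level fusion with PCA reduction followed by nearest-neighbour classification, as described above, can be sketched with synthetic data. The "palmprint" and "iris" vectors below are random stand-ins, and 3-NN with Euclidean distance is a generic choice, not necessarily the authors' exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
palm_centers = rng.normal(size=(3, 30))   # 3 subjects, 30 "palmprint" features
iris_centers = rng.normal(size=(3, 30))   # 3 subjects, 30 "iris" features

def sample(n_per_subject, noise=0.3):
    """Draw fused feature vectors: feature-level fusion = concatenation."""
    y = np.repeat(np.arange(3), n_per_subject)
    palm = palm_centers[y] + noise * rng.standard_normal((y.size, 30))
    iris = iris_centers[y] + noise * rng.standard_normal((y.size, 30))
    return np.hstack([palm, iris]), y

def pca_fit(X, n_components):
    """PCA via SVD of the centred data; returns mean and component matrix."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def knn_predict(train_X, train_y, test_X, k=3):
    """Plain k-nearest-neighbour vote by Euclidean distance."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    votes = train_y[np.argsort(d, axis=1)[:, :k]]
    return np.array([np.bincount(v).argmax() for v in votes])

X_train, y_train = sample(20)
X_test, y_test = sample(10)
mean, comps = pca_fit(X_train, n_components=10)   # reduce 60 -> 10 dims
pred = knn_predict((X_train - mean) @ comps.T, y_train, (X_test - mean) @ comps.T)
acc = float(np.mean(pred == y_test))
print(acc)
```

Fitting PCA on the training set alone and reusing its mean and components on the test set mirrors how such a system would enrol subjects before verification.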
Hua, Zhi-Gang; Lin, Yan; Yuan, Ya-Zhou; Yang, De-Chang; Wei, Wen; Guo, Feng-Biao
2015-07-01
In 2003, we developed an ab initio program, ZCURVE 1.0, to find genes in bacterial and archaeal genomes. In this work, we present the updated version, ZCURVE 3.0. On 422 prokaryotic genomes, the average accuracy was 93.7% with the updated version, compared with 88.7% for the original version. These results also demonstrate that ZCURVE 3.0 is comparable with Glimmer 3.02 and may provide complementary predictions to it. In fact, the joint application of the two programs generated better results, correctly finding more annotated genes while containing fewer false-positive predictions. As an exclusive feature, ZCURVE 3.0 contains a post-processing program that can identify essential genes with high accuracy (generally >90%). We hope that ZCURVE 3.0, with its web-based running mode, will receive wide use. The updated ZCURVE can be freely accessed from http://cefg.uestc.edu.cn/zcurve/ or http://tubic.tju.edu.cn/zcurveb/ without any restrictions. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Determining the end of a musical turn: Effects of tonal cues.
Hadley, Lauren V; Sturt, Patrick; Moran, Nikki; Pickering, Martin J
2018-01-01
Successful duetting requires that musicians coordinate their performance with their partners. In the case of turn-taking in improvised performance they need to be able to predict their partner's turn-end in order to accurately time their own entries. Here we investigate the cues used for accurate turn-end prediction in musical improvisations, focusing on the role of tonal structure. In a response-time task, participants more accurately determined the endings of (tonal) jazz than (non-tonal) free improvisation turns. Moreover, for the jazz improvisations, removing low frequency information (<2100 Hz) - and hence obscuring the pitch relationships conveying tonality - reduced response accuracy, but removing high frequency information (>2100 Hz) had no effect. Neither form of filtering affected response accuracy in the free improvisation condition. We therefore argue that tonal cues aided prediction accuracy for the jazz improvisations compared to the free improvisations. We compare our results with those from related speech research (De Ruiter et al., 2006), to draw comparisons between the structural function of tonality and linguistic syntax. Copyright © 2017. Published by Elsevier B.V.
Multi-Component Diffusion with Application To Computational Aerothermodynamics
NASA Technical Reports Server (NTRS)
Sutton, Kenneth; Gnoffo, Peter A.
1998-01-01
The accuracy and complexity of solving multicomponent gaseous diffusion using the detailed multicomponent equations, the Stefan-Maxwell equations, and two commonly used approximate equations have been examined in a two-part study. Part I examined the equations in a basic study with specified inputs, in which the results are applicable to many applications. Part II addressed the application of the equations in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) computational code for high-speed entries into Earth's atmosphere. The results showed that the presented iterative scheme for solving the Stefan-Maxwell equations is an accurate and effective method compared with solutions of the detailed equations. In general, good accuracy with the approximate equations cannot be guaranteed for one species or all species in a multicomponent mixture. 'Corrected' forms of the approximate equations that ensure the diffusion mass fluxes sum to zero, as required, were more accurate than the uncorrected forms. Good accuracy, as compared with the Stefan-Maxwell results, was obtained with the 'corrected' approximate equations in defining the heating rates for the three Earth entries considered in Part II.
NASA Astrophysics Data System (ADS)
Shi, C.; Gebert, F.; Gorges, C.; Kaufmann, S.; Nörtershäuser, W.; Sahoo, B. K.; Surzhykov, A.; Yerokhin, V. A.; Berengut, J. C.; Wolf, F.; Heip, J. C.; Schmidt, P. O.
2017-01-01
We measured the isotope shift in the ²S₁/₂ → ²P₃/₂ (D2) transition in singly ionized calcium ions using photon recoil spectroscopy. The high accuracy of the technique enables us to compare the isotope shifts of this transition with the previously measured isotope shifts of the ²S₁/₂ → ²P₁/₂ (D1) line. This so-called splitting isotope shift is extracted and exhibits a clear signature of field shift contributions. From the data, we were able to extract the small differences in the field shift coefficients and mass shifts between the two transitions with high accuracy. This J-dependence is of relativistic origin and can be used to benchmark atomic structure calculations. As a first step, we use several ab initio atomic structure calculation methods to provide more accurate values for the field shift constants and their ratio. Remarkably, the high-accuracy value for the ratio of the field shift constants extracted from the experimental data is larger than all available theoretical predictions.
Wang, Z Q; Zhang, F G; Guo, J; Zhang, H K; Qin, J J; Zhao, Y; Ding, Z D; Zhang, Z X; Zhang, J B; Yuan, J H; Li, H L; Qu, J R
2017-03-21
Objective: To explore the value of 3.0 T MRI using multiple sequences (star VIBE + BLADE) in evaluating preoperative T staging for potentially resectable esophageal cancer (EC). Methods: Between April 2015 and March 2016, a total of 66 consecutive patients with endoscopically proven resectable EC underwent 3.0 T MRI in the Affiliated Cancer Hospital of Zhengzhou University. Two independent readers assigned a T stage on MRI according to the 7th edition of the UICC-AJCC TNM Classification, and the preoperative T staging results were compared with postoperative pathologic confirmation. Results: The MRI T staging of the two readers was highly consistent with the histopathological findings, and the sensitivity, specificity and accuracy of preoperative T staging on MR imaging were also very high. Conclusion: 3.0 T MRI using multiple sequences has high accuracy for T staging in patients with potentially resectable EC. The staging accuracy for T1, T2 and T3 is better than that for T4a. 3.0 T MRI using multiple sequences could be used as a noninvasive imaging method for preoperative T staging of EC.
Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M
2017-12-06
While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing number of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high-quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves accuracy comparable to or better than that of memory-efficient algorithms that can process a large amount of RNA-Seq data, and accuracy comparable to or slightly below that of memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
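The merging step can be illustrated with a deliberately simplified sketch: transcripts from all sub-assemblies are pooled and any transcript wholly contained in a longer one is discarded. The sequences and the containment criterion are hypothetical stand-ins for the paper's quality-based selection:

```python
def merge_assemblies(assemblies, min_len=3):
    """Merge per-library assemblies into one transcript set.

    Keeps the longest transcripts first and drops any transcript that is
    wholly contained in an already-kept one (a crude redundancy filter).
    """
    candidates = sorted({t for asm in assemblies for t in asm if len(t) >= min_len},
                        key=len, reverse=True)
    kept = []
    for t in candidates:
        if not any(t in k for k in kept):
            kept.append(t)
    return kept

# Two toy sub-assemblies sharing a transcript and containing a redundant fragment
merged = merge_assemblies([["ATGCCGTA", "GGC"], ["CCGT", "ATGCCGTA", "TTTAGG"]])
print(merged)  # ['ATGCCGTA', 'TTTAGG', 'GGC'] - 'CCGT' is contained in 'ATGCCGTA'
```

Because each library is assembled independently, the peak memory footprint is bounded by the largest sub-assembly rather than the full data set.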
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.; Maddalon, Dal V.
1998-01-01
Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.
Diagnostic potential of Raman spectroscopy in Barrett's esophagus
NASA Astrophysics Data System (ADS)
Wong Kee Song, Louis-Michel; Molckovsky, Andrea; Wang, Kenneth K.; Burgart, Lawrence J.; Dolenko, Brion; Somorjai, Rajmund L.; Wilson, Brian C.
2005-04-01
Patients with Barrett's esophagus (BE) undergo periodic endoscopic surveillance with random biopsies in an effort to detect dysplastic or early cancerous lesions. Surveillance may be enhanced by near-infrared Raman spectroscopy (NIRS), which has the potential to identify endoscopically-occult dysplastic lesions within the Barrett's segment and allow for targeted biopsies. The aim of this study was to assess the diagnostic performance of NIRS for identifying dysplastic lesions in BE in vivo. Raman spectra (Pexc=70 mW; t=5 s) were collected from Barrett's mucosa at endoscopy using a custom-built NIRS system (λexc=785 nm) equipped with a filtered fiber-optic probe. Each probed site was biopsied for matching histological diagnosis as assessed by an expert pathologist. Diagnostic algorithms were developed using genetic algorithm-based feature selection and linear discriminant analysis, and classification was performed on all spectra with a bootstrap-based cross-validation scheme. The analysis comprised 192 samples (112 non-dysplastic, 54 low-grade dysplasia and 26 high-grade dysplasia/early adenocarcinoma) from 65 patients. Compared with histology, NIRS differentiated dysplastic from non-dysplastic Barrett's samples with 86% sensitivity, 88% specificity and 87% accuracy. NIRS identified 'high-risk' lesions (high-grade dysplasia/early adenocarcinoma) with 88% sensitivity, 89% specificity and 89% accuracy. In the present study, NIRS classified Barrett's epithelia with high and clinically-useful diagnostic accuracy.
Papaspyridakos, Panos; Hirayama, Hiroshi; Chen, Chun-Jung; Ho, Chung-Han; Chronopoulos, Vasilios; Weber, Hans-Peter
2016-09-01
The aim of this study was to assess the effect of connection type and impression technique on the accuracy of fit of implant-supported fixed complete-arch dental prostheses (IFCDPs). An edentulous mandibular cast with five implants was fabricated to serve as master cast (control) for both implant- and abutment-level baselines. A titanium one-piece framework for an IFCDP was milled at abutment level and used for accuracy of fit measurements. Polyether impressions were made using a splinted and non-splinted technique at the implant and abutment level leading to four test groups, n = 10 each. Hence, four groups of test casts were generated. The impression accuracy was evaluated indirectly by assessing the fit of the IFCDP framework on the generated casts of the test groups, clinically and radiographically. Additionally, the control and all test casts were digitized with a high-resolution reference scanner (IScan D103i, Imetric, Courgenay, Switzerland) and standard tessellation language datasets were generated and superimposed. Potential correlations between the clinical accuracy of fit data and the data from the digital scanning were investigated. To compare the accuracy of casts of the test groups versus the control at the implant and abutment level, Fisher's exact test was used. Of the 10 casts of test group I (implant-level splint), all 10 presented with accurate clinical fit when the framework was seated on its respective cast, while only five of 10 casts of test group II (implant-level non-splint) showed adequate fit. All casts of group III (abutment-level splint) presented with accurate fit, whereas nine of 10 of the casts of test group IV (abutment-level non-splint) were accurate. Significant 3D deviations (P < 0.05) were found between group II and the control. No statistically significant differences were found between groups I, III, and IV compared with the control. Implant connection type (implant level vs. 
abutment level) and impression technique affected the 3D accuracy of implant impressions only with the non-splinted technique (P < 0.05). For one-piece IFCDPs, the implant-level splinted impression technique was shown to be more accurate than the non-splinted approach, whereas at the abutment level no difference in accuracy was found. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Comparison of physical and semi-empirical hydraulic models for flood inundation mapping
NASA Astrophysics Data System (ADS)
Tavakoly, A. A.; Afshari, S.; Omranian, E.; Feng, D.; Rajib, A.; Snow, A.; Cohen, S.; Merwade, V.; Fekete, B. M.; Sharif, H. O.; Beighley, E.
2016-12-01
Various hydraulic/GIS-based tools can be used for illustrating the spatial extent of flooding for first-responders, policy makers and the general public. The objective of this study is to compare four flood inundation modeling tools: HEC-RAS-2D, Gridded Surface Subsurface Hydrologic Analysis (GSSHA), AutoRoute and Height Above the Nearest Drainage (HAND). There is a trade-off among accuracy, workability and computational demand in detailed, physics-based flood inundation models (e.g. HEC-RAS-2D and GSSHA) in contrast with semi-empirical, topography-based, computationally less expensive approaches (e.g. AutoRoute and HAND). The motivation for this study is to evaluate this trade-off and offer guidance for potential large-scale application in an operational prediction system. The models were assessed and contrasted via comparability analysis (e.g. overlapping statistics) using three case studies in the states of Alabama, Texas, and West Virginia. The sensitivity and accuracy of the physical and semi-empirical models in producing inundation extent were evaluated for the following attributes: geophysical characteristics (e.g. high topographic variability vs. flat natural terrain, urbanized vs. rural zones, effect of surface roughness parameter value), influence of hydraulic structures such as dams and levees compared to unobstructed flow conditions, accuracy in large vs. small study domains, and effect of spatial resolution in topographic data (e.g. 10 m National Elevation Dataset vs. 0.3 m LiDAR). Preliminary results suggest that semi-empirical models tend to underestimate the inundation extent by around 40% compared with the physical models in flat, urbanized areas with a controlled/managed river channel, regardless of topographic resolution. However, in places with topographic undulations, semi-empirical models attain a relatively higher level of accuracy than they do in flat non-urbanized terrain.
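Overlap statistics of the kind used in the comparability analysis can be sketched as a simple fit index between two sets of flooded grid cells; the cell coordinates below are hypothetical:

```python
def fit_index(model_cells, reference_cells):
    """Overlap statistic F = |A intersect B| / |A union B| between two sets
    of flooded grid cells (1.0 = perfect agreement, 0.0 = no overlap)."""
    a, b = set(model_cells), set(reference_cells)
    return len(a & b) / len(a | b)

# Hypothetical flooded cells (row, col) from a semi-empirical and a physical model
semi_empirical = {(0, 0), (0, 1), (1, 0), (1, 1)}
physical = {(0, 1), (1, 0), (1, 1), (2, 1), (2, 2)}
print(fit_index(semi_empirical, physical))  # 0.5
```

In practice the same index is computed over rasterized inundation extents, so systematic under- or over-prediction shows up directly as a depressed overlap score.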
Siddiqui, Ali A; Fein, Michael; Kowalski, Thomas E; Loren, David E; Eloubeidi, Mohamad A
2012-09-01
Prior studies have reported that the presence of prior biliary stent may interfere with EUS visualization of pancreatic tumors. We aimed to compare the influence of the biliary plastic and fully covered self-expanding metal stents (CSEMS) on the accuracy of EUS-FNA cytology in patients with solid pancreatic masses. We conducted a retrospective study evaluating 677 patients with solid pancreatic head/uncinate lesions and a previous biliary stent in whom EUS-FNA was performed. The patients were stratified into two groups: (1) those with a plastic stents and (2) those with CSEMS. Performance characteristics of EUS-FNA including the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were compared between the two groups. The frequency of obtaining an adequate cytology by EUS-FNA was similar in both the CSEMS group and the plastic stent group (97 vs. 97.1 % respectively; p = 1.0). The sensitivity, specificity, and accuracy of EUS-FNA was not significantly different between patients with CSEMS and plastic stents (96.8, 100, 100 % and 97.3, 98, 99.8 %, respectively). The negative predictive value for EUS-FNA was lower in the CSEMS group compared to the plastic stent group (66.6 vs. 78.1 % respectively; p = 0.42). There was one false-positive cytology in the plastic stent group compared to none in the CSEMS group. In a retrospective cohort trial, EUS-FNA was found to be highly accurate and safe in diagnosing patients with suspected pancreatic cancer, even in the presence of a plastic or metallic biliary stent. The presence of a stent did not contribute to a higher false-positive cytology rate.
Adequacy of Using a Three-Item Questionnaire to Determine Zygosity in Chinese Young Twins.
Ho, Connie Suk-Han; Zheng, Mo; Chow, Bonnie Wing-Yin; Wong, Simpson W L; Lim, Cadmon K P; Waye, Mary M Y
2017-03-01
The present study examined the adequacy of a three-item parent questionnaire in determining the zygosity of young Chinese twins and whether there was any association between parent response accuracy and demographic variables. The sample consisted of 334 pairs of same-sex Chinese twins aged from 3 to 11 years. Three scoring methods, namely the summed score, logistic regression, and decision tree, were employed to evaluate parent response accuracy of twin zygosity against single nucleotide polymorphism (SNP) information. The results showed that all three methods achieved a high level of accuracy, ranging from 91% to 93%, which was comparable to the accuracy rates in previous Chinese twin studies. Correlation results also showed that the higher the parents' education level or the family income, the more likely parents were to tell correctly whether their twins were identical or fraternal. The present findings confirmed the validity of using a three-item parent questionnaire to determine twin zygosity in a Chinese school-aged twin sample.
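A summed-score rule, one of the three scoring methods mentioned, can be sketched as follows; the item coding and threshold are illustrative assumptions, not the study's actual scoring key:

```python
def classify_zygosity(item_scores, threshold=2):
    """Summed-score rule: each of the three questionnaire items contributes
    1 if the parent's answer suggests the twins are identical, 0 otherwise.
    Pairs scoring at or above the threshold are classified monozygotic (MZ)."""
    return "MZ" if sum(item_scores) >= threshold else "DZ"

print(classify_zygosity([1, 1, 0]))  # MZ
print(classify_zygosity([0, 1, 0]))  # DZ
```

The classification would then be validated against the SNP-based zygosity to yield the accuracy percentages reported.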
Distributed wavefront reconstruction with SABRE for real-time large scale adaptive optics control
NASA Astrophysics Data System (ADS)
Brunner, Elisabeth; de Visser, Cornelis C.; Verhaegen, Michel
2014-08-01
We present advances on Spline-based ABerration REconstruction (SABRE) from (Shack-)Hartmann (SH) wavefront measurements for large-scale adaptive optics systems. SABRE locally models the wavefront with simplex B-spline basis functions on triangular partitions which are defined on the SH subaperture array. This approach allows high accuracy through the possible use of nonlinear basis functions and great adaptability to any wavefront sensor and pupil geometry. The main contribution of this paper is a distributed wavefront reconstruction method, D-SABRE, which is a two-stage procedure based on decomposing the sensor domain into sub-domains, each supporting a local SABRE model. D-SABRE greatly decreases the computational complexity of the method and removes the need for centralized reconstruction while obtaining a reconstruction accuracy for simulated E-ELT turbulence within 1% of the global method's accuracy. Further, a generalization of the methodology is proposed, making direct use of SH intensity measurements, which leads to improved reconstruction accuracy compared to centroid algorithms using spatial gradients.
Leegon, Jeffrey; Aronsky, Dominik
2006-01-01
The healthcare environment is constantly changing. Probabilistic clinical decision support systems need to recognize and incorporate the changing patterns and adjust the decision model to maintain high levels of accuracy. Using data from >75,000 ED patients during a 19-month study period, we examined the impact of various static and dynamic training strategies on a decision support system designed to predict hospital admission status for ED patients. Training durations ranged from 1 to 12 weeks. During the study period, major institutional changes occurred that affected the system's performance level. The average area under the receiver operating characteristic curve was higher and more stable when longer training periods were used. The system showed higher accuracy when retrained and updated with more recent data compared with a static training period. To adjust for temporal trends, the accuracy of decision support systems can benefit from longer training periods and retraining with more recent data.
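The static-versus-dynamic training comparison can be illustrated with a toy one-dimensional "admission" classifier on a synthetic drifting data stream; the model, window sizes, and drift pattern are invented for illustration and bear no relation to the study's actual system:

```python
def train_cutoff(samples):
    """Fit a 1-D threshold classifier: midpoint between the two class means."""
    pos = [x for x, y in samples if y]
    neg = [x for x, y in samples if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(cutoff, samples):
    return sum((x > cutoff) == y for x, y in samples) / len(samples)

# Synthetic ED stream whose admission rule drifts halfway through
# (mimicking an institutional change): label flips from x > 5 to x > 7.
stream = [(t % 10, (t % 10) > (5 if t < 50 else 7)) for t in range(100)]

static = train_cutoff(stream[:20])            # trained once, never updated
static_acc = accuracy(static, stream[50:])    # evaluated after the change

dynamic = train_cutoff(stream[60:80])         # retrained on recent data
dynamic_acc = accuracy(dynamic, stream[80:])
print(static_acc, dynamic_acc)  # 0.8 0.9
```

The retrained model adapts to the new decision boundary and recovers accuracy, which is the qualitative effect the study reports for dynamic training strategies.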
NASA Technical Reports Server (NTRS)
Mulligan, P. J.; Gervin, J. C.; Lu, Y. C.
1985-01-01
An area bordering the Eastern Shore of the Chesapeake Bay was selected for study and classified using unsupervised techniques applied to LANDSAT-2 MSS data and several band combinations of LANDSAT-4 TM data. The accuracies of these Level I land cover classifications were verified using the Taylor's Island USGS 7.5 minute topographic map, which was photointerpreted, digitized and rasterized. For the Taylor's Island map, comparing the MSS and TM three-band (2, 3, 4) classifications, the increased resolution of TM produced a small improvement in overall accuracy of 1%, due primarily to small improvements of 1% and 3% in categories such as water and woodland. This was expected, as the MSS data typically produce high accuracies for categories which cover large contiguous areas. However, in the categories covering smaller areas within the map there was generally an improvement of at least 10%. Classification of the important residential category improved 12%, and wetlands were mapped with 11% greater accuracy.
Gravity compensation in a Strapdown Inertial Navigation System to improve the attitude accuracy
NASA Astrophysics Data System (ADS)
Zhu, Jing; Wang, Jun; Wang, Xingshu; Yang, Shuai
2017-10-01
Attitude errors in a strapdown inertial navigation system due to gravity disturbances and system noises can be relatively large, although they are bounded within the Schuler and Earth rotation periods. The principal objective of the investigation is to determine to what extent accurate gravity data can improve the attitude accuracy. The way gravity disturbances affect the attitude was analyzed and compared with system noises by analytic solution and simulation. Gravity disturbances affect the attitude accuracy by introducing an initial attitude error and an equivalent accelerometer bias. With the development of high-precision inertial devices and the application of rotation modulation technology, the gravity disturbance can no longer be neglected. Gravity compensation was performed using the EGM2008 model, and simulations with and without accurate gravity compensation under varying navigation conditions were carried out. The results show that gravity compensation evidently improves the horizontal components of attitude accuracy, while the yaw angle is badly affected by the uncompensated gyro bias in the vertical channel.
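The size of the attitude error introduced by an uncompensated gravity disturbance can be estimated with a small-angle calculation; the 30 mGal disturbance magnitude below is an illustrative value, not one from the paper:

```python
import math

def tilt_error_arcsec(disturbance_mgal):
    """Horizontal attitude (tilt) error induced by an uncompensated horizontal
    gravity disturbance, small-angle approximation: delta_theta ~ delta_g / g."""
    g = 9.80665                    # standard gravity, m/s^2
    dg = disturbance_mgal * 1e-5   # 1 mGal = 1e-5 m/s^2
    return math.degrees(dg / g) * 3600

# A hypothetical 30 mGal horizontal gravity disturbance
print(round(tilt_error_arcsec(30), 2))  # 6.31 arcsec
```

Errors of a few arcseconds are negligible for low-grade systems but significant once high-precision inertial devices and rotation modulation suppress the sensor-induced errors, which is why gravity compensation becomes worthwhile.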
Diagnostic value of DIAGNOdent in detecting caries under composite restorations of primary molars.
Sichani, Ava Vali; Javadinejad, Shahrzad; Ghafari, Roshanak
2016-01-01
Direct observation cannot detect caries under restorations; therefore, the aim of this study was to compare the accuracy of radiographs and DIAGNOdent in detecting caries under restorations in primary teeth using histologic evaluation. A total of 74 previously extracted primary molars (37 with occlusal caries and 37 without caries) were used. Class 1 cavity preparations were made on each tooth by a single clinician and then the preparations were filled with composite resin. The accuracy of radiographs and DIAGNOdent in detecting caries was compared using histologic evaluation. The data were analyzed with SPSS version 21 using the Chi-square test, McNemar's test and receiver operating characteristic curves. The significance level was set at 0.05. The sensitivity and specificity for DIAGNOdent were 70.97 and 83.72, respectively. Few false negative results were observed, the positive predictive value was high (+PV = 75.9), and the area under the curve was more than 0.70, making DIAGNOdent an effective method for detecting caries (P = 0.0001). Two observers evaluated the radiographs; both had low sensitivity (first observer: 48.39; second observer: 51.61) and high specificity (both observers: 79.07). The +PV was lower than that of DIAGNOdent and the area under the curve for both observers was less than 0.70. However, the difference between the two methods was not significant. DIAGNOdent showed greater accuracy in detecting secondary caries under primary molar restorations compared with radiographs. Although DIAGNOdent is an effective method for detecting caries under composite restorations, it is best used as an adjunctive method alongside other detection procedures.
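The reported performance figures all derive from a standard 2x2 confusion table. A small helper shows the arithmetic; the counts below are hypothetical values chosen merely to be consistent with sensitivity/specificity figures of the order reported, not the study's actual tallies:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for 74 teeth: 22 true positives, 7 false positives,
# 9 false negatives, 36 true negatives
m = diagnostic_metrics(tp=22, fp=7, fn=9, tn=36)
print({k: round(v, 4) for k, v in m.items()})
```

With these counts, sensitivity is 22/31 = 0.7097 and specificity 36/43 = 0.8372, matching the order of magnitude of the DIAGNOdent figures above.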
Mapping Gnss Restricted Environments with a Drone Tandem and Indirect Position Control
NASA Astrophysics Data System (ADS)
Cledat, E.; Cucci, D. A.
2017-08-01
The problem of autonomously mapping highly cluttered environments, such as urban and natural canyons, is intractable with the current UAV technology. The reason lies in the absence or unreliability of GNSS signals due to partial sky occlusion or multi-path effects. High quality carrier-phase observations are also required in efficient mapping paradigms, such as Assisted Aerial Triangulation, to achieve high ground accuracy without the need of dense networks of ground control points. In this work we consider a drone tandem in which the first drone flies outside the canyon, where GNSS constellation is ideal, visually tracks the second drone and provides an indirect position control for it. This enables both autonomous guidance and accurate mapping of GNSS restricted environments without the need of ground control points. We address the technical feasibility of this concept considering preliminary real-world experiments in comparable conditions and we perform a mapping accuracy prediction based on a simulation scenario.
NASA Astrophysics Data System (ADS)
Phinyomark, A.; Hu, H.; Phukpattaranont, P.; Limsakul, C.
2012-01-01
The classification of upper-limb movements based on surface electromyography (EMG) signals is an important issue in the control of assistive devices and rehabilitation systems. Increasing the number of EMG channels and features in order to increase the number of control commands can yield a high-dimensional feature vector. To cope with the accuracy and computation problems associated with high dimensionality, it is commonplace to apply a processing step that transforms the data to a space of significantly lower dimensions with only a limited loss of useful information. Linear discriminant analysis (LDA) has been successfully applied as an EMG feature projection method. Recently, a number of extended LDA-based algorithms have been proposed, which are more competitive in terms of both classification accuracy and computational costs/times than classical LDA. This paper presents the findings of a comparative study of classical LDA and five extended LDA methods. From a quantitative comparison based on seven multi-feature sets, three extended LDA-based algorithms, consisting of uncorrelated LDA, orthogonal LDA and orthogonal fuzzy neighborhood discriminant analysis, produce better class separability when compared with a baseline system (without feature projection), principal component analysis (PCA), and classical LDA. Based on 7-dimensional time-domain and time-scale feature vectors, these methods achieved respectively 95.2% and 93.2% classification accuracy using a linear discriminant classifier.
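Classical two-class LDA (Fisher's discriminant) can be sketched in a few lines; the 2-D "feature" clusters are toy data, not EMG features, and this is not one of the extended LDA variants studied:

```python
def fisher_direction(class_a, class_b):
    """Two-class Fisher discriminant direction w = Sw^-1 (m_a - m_b)
    for 2-D features, with the 2x2 within-class scatter inverted by hand."""
    def mean(pts):
        return [sum(p[i] for p in pts) / len(pts) for i in (0, 1)]

    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in (0, 1):
                for j in (0, 1):
                    s[i][j] += d[i] * d[j]
        return s

    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in (0, 1)] for i in (0, 1)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1]]

# Toy clusters standing in for two movement classes
a = [(1.0, 2.0), (1.5, 2.4), (0.8, 1.9)]
b = [(3.0, 0.5), (3.4, 0.9), (2.9, 0.2)]
w = fisher_direction(a, b)

def proj(p):
    return w[0] * p[0] + w[1] * p[1]

# Projection onto w fully separates the two classes
print(min(proj(p) for p in a) > max(proj(p) for p in b))  # True
```

The extended variants (uncorrelated, orthogonal, fuzzy-neighborhood LDA) modify how the scatter matrices are regularized or orthogonalized, but the projection idea is the same.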
Bhoobalan, Shanmugasundaram; Chakravartty, Riddhika; Dolbear, Gill; Al-Janabi, Mazin
2013-10-01
The aim of the study was to determine the accuracy of the clinical pretest probability (PTP) score and its association with the lung ventilation and perfusion (VQ) scan. A retrospective analysis included 510 patients who had a lung VQ scan between 2008 and 2010. Out of 510 studies, the numbers of normal, low, and high probability VQ scans were 155 (30%), 289 (57%), and 55 (11%), respectively. A total of 103 patients underwent a computed tomography pulmonary angiography (CTPA) scan, in which 21 (20%) had a positive scan, 81 (79%) had a negative scan and one (1%) had an equivocal result. The rates of PE in the normal, low-probability, and high-probability scan categories were 2 (9.5%), 10 (47.5%), and 9 (43%), respectively. A very low correlation (Pearson correlation coefficient r = 0.20) was found between the clinical PTP score and the lung VQ scan. The area under the curve (AUC) of the clinical PTP score was 52% when compared with the CTPA results. However, the accuracy of the lung VQ scan was better (AUC = 74%) when compared with the CTPA scan. The clinical PTP score is unreliable on its own; however, it may still aid in the interpretation of the lung VQ scan. The accuracy of the lung VQ scan was better in the assessment of underlying pulmonary embolism (PE).
Combination probes for stagnation pressure and temperature measurements in gas turbine engines
NASA Astrophysics Data System (ADS)
Bonham, C.; Thorpe, S. J.; Erlund, M. N.; Stevenson, R. J.
2018-01-01
During gas turbine engine testing, steady-state gas-path stagnation pressures and temperatures are measured in order to calculate the efficiencies of the main components of turbomachinery. These measurements are acquired using fixed intrusive probes, which are installed at the inlet and outlet of each component at discrete point locations across the gas-path. The overall uncertainty in calculated component efficiency is sensitive to the accuracy of discrete point pressures and temperatures, as well as the spatial sampling across the gas-path. Both of these aspects of the measurement system must be considered if more accurate component efficiencies are to be determined. High accuracy has become increasingly important as engine manufacturers have begun to pursue small gains in component performance, which require efficiencies to be resolved to within less than ±1%. This article reports on three new probe designs that have been developed in response to this demand. The probes adopt a compact combination arrangement that facilitates up to twice the spatial coverage compared to individual stagnation pressure and temperature probes. The probes also utilise novel temperature sensors and high recovery factor shield designs that facilitate improvements in point measurement accuracy compared to standard Kiel probes used in engine testing. These changes allow efficiencies to be resolved within ±1% over a wider range of conditions than is currently achievable with Kiel probes.
Silvatti, Amanda P; Cerveri, Pietro; Telles, Thiago; Dias, Fábio A S; Baroni, Guido; Barros, Ricardo M L
2013-01-01
In this study we investigate the applicability of underwater 3D motion capture based on submerged video cameras in terms of 3D accuracy analysis and trajectory reconstruction. Static points with the classical direct linear transform (DLT) solution, a moving wand with bundle adjustment, and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed hand motion trajectories in different swimming styles and qualitatively compared them with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm; 2D plate: 0.73 mm) was comparable to out-of-water results and highly superior to the classical DLT results (9.74 mm). Among all the swimmers, the expert swimmer's hand trajectories in each style were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in terms of the motion patterns, and agreement or disagreement with the model. Both outcomes, the calibration results and the trajectory reconstruction, move towards quantitative 3D underwater motion analysis.
ERIC Educational Resources Information Center
Folsom, Burton; Leef, George; Mateer, Dirk
This study examined 16 high school economics textbooks commonly used in Michigan. The textbooks were graded for 12 criteria that form the basis for the sound study of economics: (1) the price system and production; (2) competition and monopoly; (3) comparative economic systems; (4) the distribution of income and poverty; (5) the role of…
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
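The payoff of higher order of accuracy can be demonstrated with standard second- and fourth-order central differences (a generic illustration of order of accuracy, not the paper's single-step propagation algorithms):

```python
import math

def deriv2(f, x, h):
    """Second-order central difference: error ~ O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def deriv4(f, x, h):
    """Fourth-order central difference: error ~ O(h^4)."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

x, exact = 1.0, math.cos(1.0)  # d/dx sin(x) = cos(x)
for h in (0.1, 0.05):
    e2 = abs(deriv2(math.sin, x, h) - exact)
    e4 = abs(deriv4(math.sin, x, h) - exact)
    print(f"h={h}: 2nd-order err={e2:.2e}, 4th-order err={e4:.2e}")
```

Halving h cuts the second-order error by roughly 4x but the fourth-order error by roughly 16x, which is why high-order schemes stay accurate over very long propagation distances with few points per wavelength.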
Dall'Ara, E; Barber, D; Viceconti, M
2014-09-22
The accurate measurement of local strain is necessary to study bone mechanics and to validate micro computed tomography (µCT) based finite element (FE) models at the tissue scale. Digital volume correlation (DVC) has been used to provide a volumetric estimation of local strain in trabecular bone samples with reasonable accuracy. However, nothing has been reported so far for µCT-based analysis of cortical bone. The goal of this study was to evaluate the accuracy and precision of a deformable registration method for prediction of local zero-strains in bovine cortical and trabecular bone samples. The accuracy and precision were analyzed by comparing scans virtually displaced, repeated scans without any repositioning of the sample in the scanner, and repeated scans with repositioning of the samples. The analysis showed that both precision and accuracy errors decrease with increasing size of the region analyzed, following power laws. The main source of error was found to be the intrinsic noise of the images, compared with the other sources investigated. The results, once extrapolated to the larger regions of interest typically used in the literature, were in most cases better than those previously reported. For a nodal spacing equal to 50 voxels (498 µm), the accuracy and precision ranges were 425-692 µε and 202-394 µε, respectively. In conclusion, it was shown that the proposed method can be used to study the local deformation of cortical and trabecular bone loaded beyond yield, if a sufficiently high nodal spacing is used. Copyright © 2014 Elsevier Ltd. All rights reserved.
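A power-law relationship between error and region size can be recovered by linear regression in log-log space. A sketch with synthetic error data (the coefficient and exponent below are invented, not the study's measurements):

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of y = a * x**b via linear regression in log-log space."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic precision errors (microstrain) decaying with nodal spacing (voxels)
spacings = [10, 20, 30, 40, 50]
errors = [2000 * s ** -0.8 for s in spacings]
a, b = fit_power_law(spacings, errors)
print(round(a), round(b, 3))  # recovers 2000 and -0.8
```

Fitted in this way, the power law can then be extrapolated to larger regions of interest, as done when comparing against values from the literature.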
Bourier, Felix; Hessling, Gabriele; Ammar-Busch, Sonia; Kottmaier, Marc; Buiatti, Alessandra; Grebmer, Christian; Telishevska, Marta; Semmler, Verena; Lennerz, Carsten; Schneider, Christine; Kolb, Christof; Deisenhofer, Isabel; Reents, Tilko
2016-03-01
Contact-force (CF) sensing catheters are increasingly used in clinical electrophysiological practice due to their efficacy and safety profile. As data about the accuracy of this technology are scarce, we sought to quantify accuracy based on in vitro experiments. A custom-made force sensor was constructed that allowed exact force reference measurements registered via a flexible membrane. A Smarttouch Surround Flow (ST SF) ablation catheter (Biosense Webster, Diamond Bar, CA, USA) was brought into contact with the membrane of the force sensor in order to compare the ST SF force measurements to the force sensor reference measurements. ST SF force sensing technology is based on registration of the deflection between the distal and proximal catheter tip. The experiment was repeated for n = 10 ST SF catheters, which showed no significant difference in accuracy levels. A series of measurements (n = 1200) was carried out for different angles of force applied to the catheter tip (0°/perpendicular contact, 30°, 60°, 90°/parallel contact). The mean absolute differences between reference and ST SF measurements were 1.7 ± 1.8 g (0°), 1.6 ± 1.2 g (30°), 1.4 ± 1.3 g (60°), and 6.6 ± 5.9 g (90°). Measurement accuracy was significantly higher in non-parallel contact than in parallel contact (P < 0.01). Catheter force measurements using the ST SF catheters show a high level of accuracy and reproducibility relative to reference measurements. The reduced accuracy for 90° acting forces (parallel contact) might be clinically important when creating, for example, linear lesions.
2011-01-01
Background: Several regression models have been proposed for estimation of isometric joint torque using surface electromyography (SEMG) signals. Common issues related to torque estimation models are degradation of model accuracy with passage of time, electrode displacement, and alteration of limb posture. This work compares the performance of the most commonly used regression models under these circumstances, in order to assist researchers with identifying the most appropriate model for a specific biomedical application. Methods: Eleven healthy volunteers participated in this study. A custom-built rig, equipped with a torque sensor, was used to measure isometric torque as each volunteer flexed and extended his wrist. SEMG signals from eight forearm muscles, in addition to wrist joint torque data, were gathered during the experiment. Additional data were gathered one hour and twenty-four hours after the completion of the first data gathering session, in order to evaluate the effects of passage of time and electrode displacement on model accuracy. Acquired SEMG signals were filtered, rectified, normalized, and then fed to the models for training. Results: Mean adjusted coefficient of determination (adjusted R²) values decreased by 20%-35% for the different models after one hour, while altering arm posture decreased mean adjusted R² values by 64%-74%. Conclusions: Model estimation accuracy drops significantly with passage of time, electrode displacement, and alteration of limb posture; model retraining is therefore crucial for preserving estimation accuracy. Data resampling can significantly reduce model training time without losing estimation accuracy. Among the models compared, the ordinary least squares linear regression model (OLS) combined high isometric torque estimation accuracy with very short training times. PMID:21943179
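As a rough sketch of why the OLS model trains quickly: the fit reduces to a single linear least-squares solve. All data below are synthetic stand-ins (eight channels only to mirror the study's eight forearm muscles); this is not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for rectified, normalized SEMG envelopes (8 channels)
X = rng.random((200, 8))
true_w = rng.normal(size=8)
# Simulated measured torque: linear mix of channels plus small noise
torque = X @ true_w + rng.normal(scale=0.01, size=200)

# Ordinary least squares: w = argmin ||X w - torque||^2
w, *_ = np.linalg.lstsq(X, torque, rcond=None)

# Goodness of fit (coefficient of determination)
pred = X @ w
ss_res = np.sum((torque - pred) ** 2)
ss_tot = np.sum((torque - torque.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

The entire "training" step is the one `lstsq` call, which is why OLS retraining (as the abstract recommends) is cheap compared with iterative models.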
Torres-Dowdall, J.; Farmer, A.H.; Bucher, E.H.; Rye, R.O.; Landis, G.
2009-01-01
Stable isotope analyses have revolutionized the study of migratory connectivity. However, as with all tools, their limitations must be understood in order to derive the maximum benefit of a particular application. The goal of this study was to evaluate the efficacy of stable isotopes of C, N, H, O and S for assigning known-origin feathers to the molting sites of migrant shorebird species wintering and breeding in Argentina. Specific objectives were to: 1) compare the efficacy of the technique for studying shorebird species with different migration patterns, life histories and habitat-use patterns; 2) evaluate the grouping of species with similar migration and habitat use patterns in a single analysis to potentially improve prediction accuracy; and 3) evaluate the potential gains in prediction accuracy that might be achieved from using multiple stable isotopes. The efficacy of stable isotope ratios to determine origin was found to vary with species. While one species (White-rumped Sandpiper, Calidris fuscicollis) had high levels of accuracy assigning samples to known origin (91% of samples correctly assigned), another (Collared Plover, Charadrius collaris) showed low levels of accuracy (52% of samples correctly assigned). Intra-individual variability may account for this difference in efficacy. The prediction model for three species with similar migration and habitat-use patterns performed poorly compared with the model for just one of the species (71% versus 91% of samples correctly assigned). Thus, combining multiple sympatric species may not improve model prediction accuracy. Increasing the number of stable isotopes in the analyses increased the accuracy of assigning shorebirds to their molting origin, but the best combination - involving a subset of all the isotopes analyzed - varied among species.
Mehrban, Hossein; Lee, Deuk Hwan; Moradi, Mohammad Hossein; IlCho, Chung; Naserkheil, Masoumeh; Ibáñez-Escriche, Noelia
2017-01-04
Hanwoo beef is known for its marbled fat, tenderness, juiciness and characteristic flavor, as well as for its low cholesterol and high omega 3 fatty acid contents. As yet, there has been no comprehensive investigation to estimate genomic selection accuracy for carcass traits in Hanwoo cattle using dense markers. This study aimed at evaluating the accuracy of alternative statistical methods that differed in assumptions about the underlying genetic model for various carcass traits: backfat thickness (BT), carcass weight (CW), eye muscle area (EMA), and marbling score (MS). Accuracies of direct genomic breeding values (DGV) for carcass traits were estimated by applying fivefold cross-validation to a dataset including 1183 animals and approximately 34,000 single nucleotide polymorphisms (SNPs). Accuracies of BayesC, Bayesian LASSO (BayesL) and genomic best linear unbiased prediction (GBLUP) methods were similar for BT, EMA and MS. However, for CW, DGV accuracy was 7% higher with BayesC than with BayesL and GBLUP. The increased accuracy of BayesC, compared to GBLUP and BayesL, was maintained for CW, regardless of the training sample size, but not for BT, EMA, and MS. Genome-wide association studies detected consistent large effects for SNPs on chromosomes 6 and 14 for CW. The predictive performance of the models depended on the trait analyzed. For CW, the results showed a clear superiority of BayesC compared to GBLUP and BayesL. These findings indicate the importance of using a proper variable selection method for genomic selection of traits and also suggest that the genetic architecture that underlies CW differs from that of the other carcass traits analyzed. Thus, our study provides significant new insights into the carcass traits of Hanwoo cattle.
Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2017-07-20
The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize 188Re therapies, the accurate determination of the radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in Air, non-radioactive water (Cold-water) and water with activity (Hot-water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter, and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte-Carlo simulations of a commercial γ-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold.
However, the accuracy of activity quantification for objects segmented with 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that TEW-scatter correction applied to 188 Re, although practical, yields only approximate estimates of the true scatter.
Frequency domain laser velocimeter signal processor: A new signal processing scheme
NASA Technical Reports Server (NTRS)
Meyers, James F.; Clemmons, James I., Jr.
1987-01-01
A new scheme for processing signals from laser velocimeter systems is described. The technique utilizes the capabilities of advanced digital electronics to yield a smart instrument that is able to configure itself, based on the characteristics of the input signals, for optimum measurement accuracy. The signal processor is composed of a high-speed 2-bit transient recorder for signal capture and a combination of adaptive digital filters with energy and/or zero crossing detection signal processing. The system is designed to accept signals with frequencies up to 100 MHz with standard deviations up to 20 percent of the average signal frequency. Results from comparative simulation studies indicate measurement accuracies 2.5 times better than with a high-speed burst counter, from signals with as few as 150 photons per burst.
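The zero-crossing branch of the processing scheme can be illustrated with a toy digitized tone: count sign flips in the sampled waveform and convert the mean crossing spacing to a frequency. The sample rate and burst frequency below are assumed for illustration only and are not taken from the instrument's specification.

```python
import numpy as np

# Assumed, illustrative numbers: a 500 MHz sampler digitizing a 20 MHz burst
fs = 500e6
f_signal = 20e6
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * f_signal * t)

# Zero-crossing detection: indices where the sign flips between samples
sign = np.signbit(x).astype(np.int8)
crossings = np.nonzero(np.diff(sign))[0]

# Each period contains two zero crossings, so the mean spacing is half a period
mean_half_period = np.mean(np.diff(crossings)) / fs
f_est = 1.0 / (2.0 * mean_half_period)
```

On a clean tone this recovers the signal frequency; the real processor's adaptive filtering exists precisely because laser velocimeter bursts are noisy and short, where raw crossing counts alone would be unreliable.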
NASA Technical Reports Server (NTRS)
Frey, Bradley J.; Leviton, Douglas B.
2004-01-01
The optical designs of future NASA infrared (IR) missions and instruments, such as the James Webb Space Telescope's (JWST) Near-Infrared Camera (NIRCam), will rely on accurate knowledge of the index of refraction of various IR optical materials at cryogenic temperatures. To meet this need, we have developed a Cryogenic, High-Accuracy Refraction Measuring System (CHARMS). In this paper we discuss the completion of the design and construction of CHARMS as well as the engineering details that constrained the final design and hardware implementation. In addition, we present our first-light cryogenic IR index of refraction data for LiF, BaF2, and CaF2, and compare our results to previously published data for these materials.
A real-time spectral mapper as an emerging diagnostic technology in biomedical sciences.
Epitropou, George; Kavvadias, Vassilis; Iliou, Dimitris; Stathopoulos, Efstathios; Balas, Costas
2013-01-01
Real-time spectral imaging and mapping at video rates can have tremendous impact not only on diagnostic sciences but also on fundamental physiological problems. We report the first real-time spectral mapper based on the combination of snap-shot spectral imaging and spectral estimation algorithms. Performance evaluation revealed that six-band imaging combined with the Wiener algorithm provided high estimation accuracy, with error levels lying within the experimental noise. The high accuracy is accompanied by spectral mapping that is three orders of magnitude faster than scanning spectral systems. This new technology is intended to enable spectral mapping at nearly video rates in all kinds of dynamic bio-optical effects, as well as in applications where the target-probe relative position changes quickly and at random.
CoLiTec software - detection of the near-zero apparent motion
NASA Astrophysics Data System (ADS)
Khlamov, Sergii V.; Savanevych, Vadym E.; Briukhovetskyi, Olexandr B.; Pohorelov, Artem V.
2017-06-01
In this article we describe the CoLiTec software for fully automated frame processing. CoLiTec can process large volumes of archived observational results as well as data formed continuously during observation. The scope of solved tasks includes frame brightness equalization, moving object detection, astrometry, photometry, etc. Along with highly efficient big-data processing, CoLiTec also ensures high accuracy of data measurements. A comparative analysis of functional characteristics and positional accuracy was performed between the CoLiTec and Astrometrica software, showing the benefits of CoLiTec for wide-field and low-quality frames. The efficiency of the CoLiTec software has been proved by about 700,000 observations and over 1,500 preliminary discoveries.
Quamrun, Masuda; Mamoon, Rashid; Nasheed, Shams; Randy, Mullins
2014-01-01
The compounding and evaluation of ondansetron hydrochloride dihydrate topical gel, 2.5% w/w, were conducted in this study. The gelling agent was Carbopol 940. Ethanol 70% in purified water was used to dissolve the drug and disperse the gelling agent. A gel was formed by adding drops of 0.1 N sodium hydroxide solution. To assay this gel, we developed a simple and reproducible stability-indicating high-performance liquid chromatographic method. This method was validated for specificity, accuracy, and precision. The compounded gel was assayed in triplicate, and the average recovery was 98.3%. Marketed ondansetron products were analyzed for comparison with the compounded formulation. Assay, accuracy, and precision data of the compounded topical gel were comparable to those of the marketed products.
Zhang, Le; Lawson, Ken; Yeung, Bernice; Wypych, Jette
2015-01-06
A purity method based on capillary zone electrophoresis (CZE) has been developed for the separation of isoforms of a highly glycosylated protein. The separation was found to be driven by the number of sialic acids attached to each isoform. The method has been characterized using orthogonal assays and shown to have excellent specificity, precision and accuracy. We have demonstrated the CZE method is a useful in-process assay to support cell culture and purification development of this glycoprotein. Compared to isoelectric focusing (IEF), the CZE method provides more quantitative results and higher sample throughput with excellent accuracy, qualities that are required for process development. In addition, the CZE method has been applied in the stability testing of purified glycoprotein samples.
Lyons, Mark; Al-Nakeeb, Yahya; Nevill, Alan
2006-01-01
Despite the acknowledged importance of fatigue on performance in sport, ecologically sound studies investigating fatigue and its effects on sport-specific skills are surprisingly rare. The aim of this study was to investigate the effect of moderate and high intensity total body fatigue on passing accuracy in expert and novice basketball players. Ten novice basketball players (age: 23.30 ± 1.05 yrs) and ten expert basketball players (age: 22.50 ± 0.41 yrs) volunteered to participate in the study. Both groups performed the modified AAHPERD Basketball Passing Test under three different testing conditions: rest, moderate intensity and high intensity total body fatigue. Fatigue intensity was established using a percentage of the maximal number of squat thrusts performed by the participant in one minute. ANOVA with repeated measures revealed a significant (F(2,36) = 5.252, p = 0.01) level of fatigue by level of skill interaction. Examination of the mean scores makes clear that following high intensity total body fatigue there is a significant detriment in the passing performance of both novice and expert basketball players when compared to their resting scores. Fundamentally, however, the detrimental impact of fatigue on passing performance is not as steep in the expert players as in the novice players. The results suggest that expert or skilled players are better able to cope with both moderate and high intensity fatigue conditions and maintain a higher level of performance when compared to novice players. The findings of this research therefore suggest the need for trainers and conditioning coaches in basketball to include moderate, and particularly high, intensity exercise in their skills sessions. This specific training may enable players at all levels of the game to better cope with the demands of the game on court and maintain a higher standard of play.
Key Points: Aim: to investigate the effect of moderate and high intensity total body fatigue on basketball-passing accuracy in expert and novice basketball players. Fatigue intensity was set as a percentage of the maximal number of squat thrusts performed by the participant in one minute. ANOVA with repeated measures revealed a significant level of fatigue by level of skill interaction. Despite a significant detriment in passing performance in both novice and expert players following high intensity total body fatigue, this detriment was not as steep in the expert players when compared to the novice players. PMID:24259994
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Shea, Tuathan P., E-mail: tuathan.oshea@icr.ac.uk; Bamber, Jeffrey C.; Harris, Emma J.
Purpose: Ultrasound-based motion estimation is an expanding subfield of image-guided radiation therapy. Although ultrasound can detect tissue motion that is a fraction of a millimeter, its accuracy is variable. For controlling linear accelerator tracking and gating, ultrasound motion estimates must remain highly accurate throughout the imaging sequence. This study presents a temporal regularization method for correlation-based template matching which aims to improve the accuracy of motion estimates. Methods: Liver ultrasound sequences (15–23 Hz imaging rate, 2.5–5.5 min length) from ten healthy volunteers under free breathing were used. Anatomical features (blood vessels) in each sequence were manually annotated for comparison with normalized cross-correlation based template matching. Five sequences from a Siemens Acuson™ scanner were used for algorithm development (training set). Results from incremental tracking (IT) were compared with a temporal regularization method, which included a highly specific similarity metric and state observer, known as the α–β filter/similarity threshold (ABST). A further five sequences from an Elekta Clarity™ system were used for validation, without alteration of the tracking algorithm (validation set). Results: Overall, the ABST method produced marked improvements in vessel tracking accuracy. For the training set, the mean and 95th percentile (95%) errors (defined as the difference from manual annotations) were 1.6 and 1.4 mm, respectively (compared to 6.2 and 9.1 mm, respectively, for IT). For each sequence, the use of the state observer leads to improvement in the 95% error. For the validation set, the mean and 95% errors for the ABST method were 0.8 and 1.5 mm, respectively. Conclusions: Ultrasound-based motion estimation has potential to monitor liver translation over long time periods with high accuracy. Nonrigid motion (strain) and the quality of the ultrasound data are likely to have an impact on tracking performance.
A future study will investigate spatial uniformity of motion and its effect on the motion estimation errors.
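A minimal sketch of the correlation-based template matching that both IT and ABST build on: an exhaustive normalized cross-correlation search for a feature patch within a frame. The frame, patch size, and data are synthetic stand-ins (not ultrasound data), and no temporal regularization is included.

```python
import numpy as np

def ncc(template, window):
    """Normalized cross-correlation of two equal-sized patches."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return (t * w).sum() / denom

rng = np.random.default_rng(1)
frame = rng.random((64, 64))           # stand-in for an ultrasound frame
template = frame[20:28, 30:38].copy()  # 8x8 feature patch (e.g. a vessel)

# Exhaustive search: score every candidate position, keep the best match
best, best_pos = -1.0, None
for i in range(64 - 8 + 1):
    for j in range(64 - 8 + 1):
        score = ncc(template, frame[i:i + 8, j:j + 8])
        if score > best:
            best, best_pos = score, (i, j)
```

In a tracking context this search is repeated frame to frame; the abstract's ABST method additionally gates the match with a similarity threshold and smooths the trajectory with an α–β state observer.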
Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Tumeo, Antonino; Secchi, Simone
Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.
A Novel Gravity Compensation Method for High Precision Free-INS Based on “Extreme Learning Machine”
Zhou, Xiao; Yang, Gongliu; Cai, Qingzhong; Wang, Jing
2016-01-01
In recent years, with the emergence of high-precision inertial sensors (accelerometers and gyros), gravity compensation has become a major factor influencing navigation accuracy in inertial navigation systems (INS), especially high-precision INS. This paper presents preliminary results concerning the effect of gravity disturbance on INS. It then proposes a novel gravity compensation method for high-precision INS, which estimates the gravity disturbance along the track using the extreme learning machine (ELM) method based on gravity data measured on the geoid, continues the estimated disturbance upward to the altitude of the INS, and then feeds the obtained gravity disturbance into the error equations of the INS to restrain INS error propagation. The estimation accuracy of the gravity disturbance data is verified by numerical tests. The root mean square error (RMSE) of the ELM estimation method is improved by 23% and 44% compared with the bilinear interpolation method in plain and mountain areas, respectively. To further validate the proposed gravity compensation method, field experiments with an experimental vehicle were carried out in two regions: Test 1 in a plain area and Test 2 in a mountain area. The field experiment results also prove that the proposed gravity compensation method can significantly improve positioning accuracy. During the 2-h field experiments, positioning accuracy improved by 13% and 29% in Tests 1 and 2, respectively, when the navigation scheme was compensated by the proposed gravity compensation method. PMID:27916856
Integrative Chemical-Biological Read-Across Approach for Chemical Hazard Classification
Low, Yen; Sedykh, Alexander; Fourches, Denis; Golbraikh, Alexander; Whelan, Maurice; Rusyn, Ivan; Tropsha, Alexander
2013-01-01
Traditional read-across approaches typically rely on the chemical similarity principle to predict chemical toxicity; however, the accuracy of such predictions is often inadequate due to the underlying complex mechanisms of toxicity. Here we report on the development of a hazard classification and visualization method that draws upon both chemical structural similarity and comparisons of biological responses to chemicals measured in multiple short-term assays ("biological" similarity). The Chemical-Biological Read-Across (CBRA) approach infers each compound's toxicity from those of both chemical and biological analogs whose similarities are determined by the Tanimoto coefficient. Classification accuracy of CBRA was compared to that of classical RA and other methods using chemical descriptors alone, or in combination with biological data. Different types of adverse effects (hepatotoxicity, hepatocarcinogenicity, mutagenicity, and acute lethality) were classified using several biological data types (gene expression profiling and cytotoxicity screening). CBRA-based hazard classification exhibited consistently high external classification accuracy and applicability to diverse chemicals. Transparency of the CBRA approach is aided by the use of radial plots that show the relative contribution of analogous chemical and biological neighbors. Identification of both chemical and biological features that give rise to the high accuracy of CBRA-based toxicity prediction facilitates mechanistic interpretation of the models. PMID:23848138
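The Tanimoto coefficient that CBRA uses to rank chemical and biological neighbors is simply the ratio of shared to total features. A minimal sketch with made-up binary fingerprints, represented as sets of "on" bit indices:

```python
def tanimoto(a, b):
    """Tanimoto coefficient of two binary fingerprints (sets of 'on' bits)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Toy fingerprints for three hypothetical compounds
fp_query = {1, 4, 7, 9}
fp_close = {1, 4, 7, 12}   # shares 3 of 5 distinct bits with the query
fp_far   = {2, 3, 11}      # shares no bits with the query
```

Neighbors with the highest coefficient (here `fp_close`) would contribute most to the read-across prediction, whether the fingerprint bits encode chemical substructures or binarized assay responses.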
Effectiveness of link prediction for face-to-face behavioral networks.
Tsugawa, Sho; Ohsaki, Hiroyuki
2013-01-01
Research on link prediction for social networks has been actively pursued. In link prediction for a given social network obtained from time-windowed observation, new link formation in the network is predicted from the topology of the obtained network. In contrast, recent advances in sensing technology have made it possible to obtain face-to-face behavioral networks, which are social networks representing face-to-face interactions among people. However, the effectiveness of link prediction techniques for face-to-face behavioral networks has not yet been explored in depth. To clarify this point, here we investigate the accuracy of conventional link prediction techniques for networks obtained from the history of face-to-face interactions among participants at an academic conference. Our findings were (1) that conventional link prediction techniques predict new link formation with a precision of 0.30-0.45 and a recall of 0.10-0.20, (2) that prolonged observation of social networks often degrades the prediction accuracy, (3) that the proposed decaying weight method leads to higher prediction accuracy than can be achieved by observing all records of communication and simply using them unmodified, and (4) that the prediction accuracy for face-to-face behavioral networks is relatively high compared to that for non-social networks, but not as high as for other types of social networks.
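A conventional link-prediction score of the kind evaluated above, for example the Adamic-Adar index, can be computed directly from an observed contact graph. The toy interaction history below is illustrative, not conference data.

```python
import math
from collections import defaultdict

# Toy face-to-face contact history: observed (person, person) interactions
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d"), ("d", "e")]

nbrs = defaultdict(set)
for u, v in edges:
    nbrs[u].add(v)
    nbrs[v].add(u)

def adamic_adar(u, v):
    """Adamic-Adar index: common neighbors weighted by inverse log-degree."""
    return sum(1 / math.log(len(nbrs[w])) for w in nbrs[u] & nbrs[v])

# Score every currently unlinked pair; high scores predict future links
nodes = sorted(nbrs)
scores = {
    (u, v): adamic_adar(u, v)
    for i, u in enumerate(nodes)
    for v in nodes[i + 1:]
    if v not in nbrs[u]
}
best_pair = max(scores, key=scores.get)
```

Ranking unlinked pairs by such a score and taking the top of the list is what yields the precision and recall figures the abstract reports; the proposed decaying-weight variant would additionally down-weight old interactions when building `nbrs`.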
Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot.
Duan, Xingguang; Gao, Liang; Wang, Yonggui; Li, Jianxi; Li, Haoyuan; Guo, Yanjun
2018-01-01
Given the high risk and high accuracy required in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is established based on the quaternion and iterative closest point registration algorithms. A hand-eye coordination model based on optical navigation is built to drive the end effector of the robot to the target position along the planned path. A closed-loop "kinematics + optics" hybrid motion control method is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments, and the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after its application. Finally, skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning.
Diagnostic Accuracy of Obstructive Airway Adult Test for Diagnosis of Obstructive Sleep Apnea.
Gasparini, Giulio; Vicini, Claudio; De Benedetto, Michele; Salamanca, Fabrizio; Sorrenti, Giovanni; Romandini, Mario; Bosi, Marcello; Saponaro, Gianmarco; Foresta, Enrico; Laforì, Andreina; Meccariello, Giuseppe; Bianchi, Alessandro; Toraldo, Domenico Maurizio; Campanini, Aldo; Montevecchi, Filippo; Rizzotto, Grazia; Cervelli, Daniele; Moro, Alessandro; Arigliani, Michele; Gobbi, Riccardo; Pelo, Sandro
2015-01-01
The gold standard for the diagnosis of Obstructive Sleep Apnea (OSA) is polysomnography, but its accessibility is reduced by cost and limited availability, so additional diagnostic tests are needed. Our aim was to analyze the diagnostic accuracy of the Obstructive Airway Adult Test (OAAT) compared to polysomnography for the diagnosis of OSA in adult patients. Ninety patients affected by OSA verified with polysomnography (AHI ≥ 5) and ten healthy patients, randomly selected, were included, and all were interviewed by one blinded examiner with the OAAT questions. The Spearman rho, evaluated to measure the correlation between the OAAT and polysomnography, was 0.72 (p < 0.01). The area under the ROC curve (95% CI) was used to evaluate the accuracy of the OAAT: it was 0.91 (0.81-1.00) for the diagnosis of OSA (AHI ≥ 5), 0.90 (0.82-0.98) for moderate OSA (AHI ≥ 15), and 0.84 (0.76-0.92) for severe OSA (AHI ≥ 30). The OAAT showed a high correlation with polysomnography and high diagnostic accuracy for the diagnosis of OSA, and it was also able to discriminate among the different degrees of severity of OSA. Additional large studies aiming to validate this questionnaire as a screening or diagnostic test are needed.
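The reported areas under the ROC curve can be read as the probability that a randomly chosen OSA patient receives a higher questionnaire score than a randomly chosen control (the Mann-Whitney formulation). A sketch with illustrative, made-up scores:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive outscores a random
    negative (Mann-Whitney U formulation; ties count one half)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical questionnaire scores: patients vs. healthy controls
osa = [7, 6, 8, 5, 9]
healthy = [3, 5, 2, 4]
auc = roc_auc(osa, healthy)
```

An AUC near 0.5 would mean the questionnaire separates the groups no better than chance, while values like the abstract's 0.84-0.91 indicate strong discrimination.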
Leidinger, Petra; Keller, Andreas; Milchram, Lisa; Harz, Christian; Hart, Martin; Werth, Angelika; Lenhof, Hans-Peter; Weinhäusel, Andreas; Keck, Bastian; Wullich, Bernd; Ludwig, Nicole; Meese, Eckart
2015-01-01
Although an increased level of prostate-specific antigen (PSA) can be an indication of prostate cancer, other causes often lead to a high rate of false positive results. An additional serological screening of autoantibodies in patients' sera could therefore improve the detection of prostate cancer. We performed protein macroarray screening with sera from 49 prostate cancer patients, 70 patients with benign prostatic hyperplasia and 28 healthy controls, and compared the autoimmune responses among the groups. We were able to distinguish prostate cancer patients from normal controls with an accuracy of 83.2%, patients with benign prostatic hyperplasia from normal controls with an accuracy of 86.0%, and prostate cancer patients from patients with benign prostatic hyperplasia with an accuracy of 70.3%. Combining the seroreactivity pattern with a PSA level higher than 4.0 ng/ml improved this classification to an accuracy of 84.1%. For selected proteins we were able to confirm the differential expression using Luminex assays on 84 samples. We provide a minimally invasive serological method to reduce false positive results in the detection of prostate cancer and, in addition to PSA screening, to distinguish men with prostate cancer from men with benign prostatic hyperplasia.
A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang
2017-06-28
Integrating the advantages of an INS (inertial navigation system) and the star sensor, the stellar-inertial navigation system has been used for a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; therefore, determining how to validate its accuracy is critical to guaranteeing its practical precision. A dynamic precision evaluation of the star sensor is more difficult than a static one because of dynamic reference values and other influences. This paper proposes a dynamic precision verification method for the star sensor, aided by an inertial navigation device, to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by the star simulator, the altitude and azimuth angle errors of the star sensor are calculated as evaluation criteria. To diminish the impact of factors such as sensor drift, the innovative aspect of this method is to employ the static accuracy for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.
Otitis Media Diagnosis for Developing Countries Using Tympanic Membrane Image-Analysis.
Myburgh, Hermanus C; van Zijl, Willemien H; Swanepoel, DeWet; Hellström, Sten; Laurent, Claude
2016-03-01
Otitis media is one of the most common childhood diseases worldwide, but because of the lack of doctors and health personnel in developing countries it is often misdiagnosed or not diagnosed at all. This may lead to serious and life-threatening complications. There is thus a need for an automated, computer-based image-analysis system that could assist in making accurate otitis media diagnoses anywhere. A method for automated diagnosis of otitis media is proposed. The method uses image-processing techniques to classify otitis media. The system is trained using high-quality, pre-assessed images of tympanic membranes, captured by digital video-otoscopes, and classifies undiagnosed images into five otitis media categories based on predefined signs. Several verification tests analyzed the classification capability of the method. An accuracy of 80.6% was achieved for images taken with commercial video-otoscopes, while an accuracy of 78.7% was achieved for images captured on-site with a low-cost custom-made video-otoscope. The high accuracy of the proposed otitis media classification system compares well with the classification accuracy of general practitioners and pediatricians (~64% to 80%) using traditional otoscopes, and therefore holds promise for making automated diagnoses of otitis media in medically underserved populations.
Overlay accuracy on a flexible web with a roll printing process based on a roll-to-roll system.
Chang, Jaehyuk; Lee, Sunggun; Lee, Ki Beom; Lee, Seungjun; Cho, Young Tae; Seo, Jungwoo; Lee, Sukwon; Jo, Gugrae; Lee, Ki-yong; Kong, Hyang-Shik; Kwon, Sin
2015-05-01
For high-quality flexible devices produced by printing processes based on roll-to-roll (R2R) systems, overlay alignment during the patterning of each functional layer poses a major challenge. This is because flexible substrates have a relatively low stiffness compared with rigid substrates and are easily deformed during web handling in the R2R system. To achieve a high overlay accuracy on a flexible substrate, it is important not only to develop web handling modules (such as web guiding, tension control, winding, and unwinding) and a precise printing tool but also to control the synchronization of every unit in the total system. An R2R web handling system and reverse offset printing process were developed in this work, and an overlay between the 1st and 2nd layers of ±5 μm on a 500 mm-wide film was achieved at a σ level of 2.4 and 2.8 (x and y directions, respectively) in a continuous R2R printing process. This paper presents the components and mechanisms used in reverse offset printing based on an R2R system and the printing results, including positioning accuracy and overlay alignment accuracy.
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on the c-statistic, with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models under consideration is sensitive to the NCAR assumption, and we thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both low-dimensional and high-dimensional settings under CAR and NCAR through simulations. © 2016, The International Biometric Society.
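The concordance statistic that the abstract builds on can be sketched minimally as Harrell's unweighted c-index for censored data; the IPCW correction that the paper actually studies is omitted here, and the survival times, event indicators, and risk scores below are invented.

```python
def harrell_c(times, events, risk_scores):
    """Harrell's c-index: among comparable pairs (the shorter time is an
    observed event), the fraction where the earlier failure also received
    the higher predicted risk (ties in risk count 0.5)."""
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:  # comparable pair
                den += 1
                if risk_scores[i] > risk_scores[j]:
                    num += 1
                elif risk_scores[i] == risk_scores[j]:
                    num += 0.5
    return num / den

times  = [2, 4, 5, 7, 9]            # months to recurrence or censoring
events = [1, 1, 0, 1, 0]            # 1 = recurrence observed, 0 = censored
risk   = [0.9, 0.4, 0.3, 0.6, 0.2]  # hypothetical model-predicted risks
print(harrell_c(times, events, risk))  # 7/8 = 0.875
```

IPCW estimators reweight exactly these pairwise comparisons by the inverse probability of remaining uncensored, which is where the CAR/NCAR assumptions enter.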
Physical examination tests for the diagnosis of femoroacetabular impingement. A systematic review.
Pacheco-Carrillo, Aitana; Medina-Porqueres, Ivan
2016-09-01
Numerous clinical tests have been proposed to diagnose FAI, but little is known about their diagnostic accuracy. To summarize and evaluate research on the accuracy of physical examination tests for the diagnosis of FAI, a search of the PubMed, SPORTDiscus and CINAHL databases was performed. Studies were considered eligible if they compared the results of physical examination tests to those of a reference standard. Methodological quality and internal validity assessment was performed by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. The systematic search strategy revealed 298 potential articles, five of which met the inclusion criteria. After assessment using the QUADAS score, four of the five articles were of high quality. The clinical tests included were the Impingement sign, IROP test (Internal Rotation Over Pressure), FABER test (Flexion-Abduction-External Rotation), Stinchfield/RSLR (Resisted Straight Leg Raise) test, Scour test, Maximal squat test, and the Anterior Impingement test. The IROP test, Impingement sign, and FABER test showed the most sensitive values for identifying FAI. The diagnostic accuracy of physical examination tests to assess FAI is limited owing to their heterogeneity. There is a strong need for sound research of high methodological quality in this area. Copyright © 2016 Elsevier Ltd. All rights reserved.
Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot
Duan, Xingguang; Gao, Liang; Li, Jianxi; Li, Haoyuan; Guo, Yanjun
2018-01-01
Given the high-risk, high-accuracy nature of cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is established based on the quaternion method and the iterative closest point registration algorithm. A hand-eye coordination model based on optical navigation is established to control the end effector of the robot as it moves to the target position along the planned path. A closed-loop control method, the “kinematics + optics” hybrid motion control method, is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested in model experiments, and the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after its application. Finally, skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning. PMID:29599948
NASA Astrophysics Data System (ADS)
Blaser, S.; Nebiker, S.; Cavegn, S.
2017-05-01
Image-based mobile mapping systems enable the efficient acquisition of georeferenced image sequences, which can later be exploited in cloud-based 3D geoinformation services. In order to provide 360° coverage with accurate 3D measuring capabilities, we present a novel 360° stereo panoramic camera configuration. By using two 360° panorama cameras tilted forward and backward in combination with conventional forward- and backward-looking stereo camera systems, we achieve full 360° multi-stereo coverage. We furthermore developed a fully operational new mobile mapping system based on the proposed approach, which fulfils our high accuracy requirements. We successfully implemented a rigorous sensor and system calibration procedure, which allows calibrating all stereo systems with a superior accuracy compared to that of previous work. Our study delivered absolute 3D point accuracies in the range of 4 to 6 cm and relative accuracies of 3D distances in the range of 1 to 3 cm. These results were achieved in a challenging urban area. Furthermore, we automatically reconstructed a 3D city model of our study area by employing all captured and georeferenced mobile mapping imagery. The result is a highly detailed and almost complete 3D city model of the street environment.
A test of the reward-value hypothesis.
Smith, Alexandra E; Dalecki, Stefan J; Crystal, Jonathon D
2017-03-01
Rats retain source memory (memory for the origin of information) over a retention interval of at least 1 week, whereas their spatial working memory (radial maze locations) decays within approximately 1 day. We have argued that different forgetting functions dissociate memory systems. However, the two tasks, in our previous work, used different reward values. The source memory task used multiple pellets of a preferred food flavor (chocolate), whereas the spatial working memory task provided access to a single pellet of standard chow-flavored food at each location. Thus, according to the reward-value hypothesis, enhanced performance in the source memory task stems from enhanced encoding/memory of a preferred reward. We tested the reward-value hypothesis by using a standard 8-arm radial maze task to compare spatial working memory accuracy of rats rewarded with either multiple chocolate or chow pellets at each location using a between-subjects design. The reward-value hypothesis predicts superior accuracy for high-valued rewards. We documented equivalent spatial memory accuracy for high- and low-value rewards. Importantly, a 24-h retention interval produced equivalent spatial working memory accuracy for both flavors. These data are inconsistent with the reward-value hypothesis and suggest that reward value does not explain our earlier findings that source memory survives unusually long retention intervals.
Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices
NASA Astrophysics Data System (ADS)
Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun
2014-05-01
With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer; to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, an approximately 45-fold speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the best conventional result, this still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.
Dimensional changes of acrylic resin denture bases: conventional versus injection-molding technique.
Gharechahi, Jafar; Asadzadeh, Nafiseh; Shahabian, Foad; Gharechahi, Maryam
2014-07-01
Acrylic resin denture bases undergo dimensional changes during polymerization. Injection molding techniques are reported to reduce these changes and thereby improve the physical properties of denture bases. The aim of this study was to compare the dimensional changes of specimens processed by conventional and injection-molding techniques. SR-Ivocap Triplex Hot resin was used for the conventional pressure-packed technique and SR-Ivocap High Impact for the injection-molding technique. After processing, all the specimens were stored in distilled water at room temperature until measured. For dimensional accuracy evaluation, measurements were recorded at 24-hour, 48-hour and 12-day intervals using a digital caliper with an accuracy of 0.01 mm. Statistical analysis was carried out in SPSS (SPSS Inc., Chicago, IL, USA) using the t-test and repeated-measures ANOVA. Statistical significance was defined at P<0.05. After each water storage period, the acrylic specimens produced by injection exhibited smaller dimensional changes than those produced by the conventional technique. Curing shrinkage was compensated by water sorption, with dimensional changes decreasing as water storage time increased. Within the limitations of this study, the dimensional changes of acrylic resin specimens were influenced by the molding technique used, and the SR-Ivocap injection procedure exhibited higher dimensional accuracy than conventional molding.
NASA Astrophysics Data System (ADS)
Rhee, Jinyoung; Kim, Gayoung; Im, Jungho
2017-04-01
Three regions of Indonesia with different rainfall characteristics were chosen to develop drought forecast models based on machine learning. The 6-month Standardized Precipitation Index (SPI6) was selected as the target variable. The models' forecast skill was compared to the skill of long-range climate forecast models in terms of drought accuracy and regression mean absolute error (MAE). Indonesian droughts are known to be related to El Nino Southern Oscillation (ENSO) variability, despite regional differences, as well as to the monsoon, local sea surface temperature (SST), other large-scale atmosphere-ocean interactions such as the Indian Ocean Dipole (IOD) and the South Pacific Convergence Zone (SPCZ), and local factors including topography and elevation. The machine learning models are thus intended to enhance drought forecast skill by combining local and remote SST and remote sensing information, reflecting initial drought conditions, with the long-range climate forecast model results. A total of 126 machine learning models were developed for the three regions of West Java (JB), West Sumatra (SB), and Gorontalo (GO); six long-range climate forecast models (MSC_CanCM3, MSC_CanCM4, NCEP, NASA, PNU, POAMA) plus one climatology model based on remote sensing precipitation data; and 1- to 6-month lead times. Comparing the machine learning models with the long-range climate forecast models, the West Java and Gorontalo regions showed similar characteristics in terms of drought accuracy. The drought accuracy of the long-range climate forecast models was generally higher than that of the machine learning models at short lead times, but the opposite held at longer lead times. For West Sumatra, however, the machine learning models and the long-range climate forecast models showed similar drought accuracy. The machine learning models showed smaller regression errors for all three regions, especially at longer lead times.
Among the three regions, the machine learning models developed for Gorontalo showed the highest drought accuracy and the lowest regression error. West Java showed higher drought accuracy than West Sumatra, while West Sumatra showed lower regression error than West Java. The lower error in West Sumatra may be due to the smaller sample size used for training and evaluation in that region. Regional differences in forecast skill are determined by the effect of ENSO and the resulting forecast skill of the long-range climate forecast models. Although somewhat high in West Sumatra, the relative importance of the remote sensing variables was low in most cases. The high importance of the variables based on long-range climate forecast models indicates that the forecast skill of the machine learning models is mostly determined by the forecast skill of the climate models.
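The two skill measures compared throughout the abstract, drought accuracy and regression MAE, can be sketched as follows. This assumes the common convention that SPI6 ≤ −1 marks a drought month (the abstract does not state its threshold), and the SPI values below are invented.

```python
def drought_accuracy(obs_spi, fcst_spi, threshold=-1.0):
    """Fraction of months where forecast and observation agree on
    drought/no-drought, with drought defined as SPI6 <= threshold."""
    hits = sum((o <= threshold) == (f <= threshold)
               for o, f in zip(obs_spi, fcst_spi))
    return hits / len(obs_spi)

def mae(obs, fcst):
    """Regression mean absolute error between observed and forecast SPI6."""
    return sum(abs(o - f) for o, f in zip(obs, fcst)) / len(obs)

obs  = [-1.4, -0.2, 0.8, -1.1, 0.3, -1.6]   # hypothetical observed SPI6
fcst = [-1.2,  0.1, 0.5, -0.6, 0.4, -1.3]   # hypothetical forecast SPI6
print(drought_accuracy(obs, fcst), mae(obs, fcst))
```

Categorical accuracy and MAE can rank models differently, which is consistent with the abstract's finding that the machine learning models win on MAE even where climate models win on drought accuracy.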
Huang, Haoqian; Chen, Xiyuan; Zhang, Bo; Wang, Jian
2017-01-01
The underwater navigation system, mainly consisting of MEMS inertial sensors, is a key technology for the wide application of underwater gliders and plays an important role in achieving high-accuracy navigation and positioning over long periods of time. However, navigation errors accumulate over time because of the inherent errors of the inertial sensors, especially for the MEMS-grade IMU (Inertial Measurement Unit) generally used in gliders. A dead reckoning module is added to compensate for these errors. In the complicated underwater environment, the performance of MEMS sensors degrades sharply and the errors become much larger, and it is difficult to establish an accurate, fixed error model for the inertial sensors. It is therefore very hard to improve the accuracy of the navigation information calculated from the sensors. To solve this problem, a more suitable filter, which integrates the multi-model method with an EKF (extended Kalman filter) approach, can be designed according to the different error models to give the optimal estimate of the state. The key parameters of the error models can be used to determine the corresponding filter. The Adams explicit formula, which has the advantage of high-precision prediction, is simultaneously fused into the above filter to achieve a much greater improvement in attitude estimation accuracy. The proposed algorithm has been proved through theoretical analyses and has been tested by both vehicle experiments and lake trials. Results show that the proposed method has better accuracy and effectiveness in terms of attitude estimation compared with the other methods mentioned in the paper for inertial navigation applied to underwater gliders. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
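The "Adams explicit formula" refers to the Adams-Bashforth family of explicit multistep predictors. A minimal sketch of the classic 4-step variant on a scalar test ODE follows; this is illustrative only, and the paper's fusion of the predictor into a multi-model EKF is not reproduced here.

```python
def adams_bashforth4(f, t0, y0, h, steps):
    """Classic 4-step Adams-Bashforth explicit predictor. The first three
    steps are bootstrapped with 4th-order Runge-Kutta, after which each
    step extrapolates from the last four derivative evaluations."""
    ts, ys, fs = [t0], [y0], [f(t0, y0)]
    for _ in range(3):                      # RK4 start-up
        t, y = ts[-1], ys[-1]
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y = y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        ts.append(t + h); ys.append(y); fs.append(f(t + h, y))
    while len(ys) <= steps:                 # AB4 predictor steps
        y = ys[-1] + h * (55 * fs[-1] - 59 * fs[-2]
                          + 37 * fs[-3] - 9 * fs[-4]) / 24
        ts.append(ts[-1] + h); ys.append(y); fs.append(f(ts[-1], y))
    return ts, ys

# y' = -y, y(0) = 1, integrated to t = 2; exact solution is exp(-t).
ts, ys = adams_bashforth4(lambda t, y: -y, 0.0, 1.0, 0.1, 20)
print(ys[-1])  # close to exp(-2) ≈ 0.135335
```

In the paper's setting the same predictor would propagate attitude states between filter updates rather than a scalar test function.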
Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus
2016-01-01
The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093
Translational Imaging Spectroscopy for Proximal Sensing
Rogass, Christian; Koerting, Friederike M.; Mielke, Christian; Brell, Maximilian; Boesche, Nina K.; Bade, Maria; Hohmann, Christian
2017-01-01
Proximal sensing, as the near-field counterpart of remote sensing, offers a broad variety of applications. Imaging spectroscopy in general, and translational laboratory imaging spectroscopy in particular, can be utilized for a variety of different research topics. Geoscientific applications require precise pre-processing of hyperspectral data cubes to retrieve at-surface reflectance in order to conduct spectral feature-based comparison of unknown sample spectra to known library spectra. A new pre-processing chain called GeoMAP-Trans for at-surface reflectance retrieval is proposed here as an analogue to other algorithms published by the team of authors. It consists of a radiometric, a geometric and a spectral module, each comprising several processing steps that are described in detail. The processing chain was adapted to the broadly used HySPEX VNIR/SWIR imaging spectrometer system and tested using geological mineral samples. The performance was subjectively and objectively evaluated using standard artificial image quality metrics and comparative measurements of mineral and Lambertian diffuser standards with standard field and laboratory spectrometers. The proposed algorithm produces high-quality results, offers broad applicability through its generic design and might be the first of its kind to be published. A high radiometric accuracy is achieved by incorporating the Reduction of Miscalibration Effects (ROME) framework. The geometric accuracy is higher than 1 μpixel. The spectral accuracy was estimated by comparing spectra of standard field spectrometers to those from HySPEX for a Lambertian diffuser. The achieved spectral accuracy is better than 0.02% for the full spectrum and better than 98% for the absorption features.
It was empirically shown that point and imaging spectrometers provide different results for non-Lambertian samples due to their different sensing principles, adjacency scattering impacts on the signal and anisotropic surface reflection properties. PMID:28800111
Crowdsourcing for translational research: analysis of biomarker expression using cancer microarrays
Lawson, Jonathan; Robinson-Vyas, Rupesh J; McQuillan, Janette P; Paterson, Andy; Christie, Sarah; Kidza-Griffiths, Matthew; McDuffus, Leigh-Anne; Moutasim, Karwan A; Shaw, Emily C; Kiltie, Anne E; Howat, William J; Hanby, Andrew M; Thomas, Gareth J; Smittenaar, Peter
2017-01-01
Background: Academic pathology suffers from an acute and growing lack of workforce resource. This especially impacts on translational elements of clinical trials, which can require detailed analysis of thousands of tissue samples. We tested whether crowdsourcing – enlisting help from the public – is a sufficiently accurate method to score such samples. Methods: We developed a novel online interface to train and test lay participants on cancer detection and immunohistochemistry scoring in tissue microarrays. Lay participants initially performed cancer detection on lung cancer images stained for CD8, and we measured how extending a basic tutorial by annotated example images and feedback-based training affected cancer detection accuracy. We then applied this tutorial to additional cancer types and immunohistochemistry markers – bladder/ki67, lung/EGFR, and oesophageal/CD8 – to establish accuracy compared with experts. Using this optimised tutorial, we then tested lay participants' accuracy on immunohistochemistry scoring of lung/EGFR and bladder/p53 samples. Results: We observed that for cancer detection, annotated example images and feedback-based training both improved accuracy compared with a basic tutorial only. Using this optimised tutorial, we demonstrate highly accurate (>0.90 area under curve) detection of cancer in samples stained with nuclear, cytoplasmic and membrane cell markers. We also observed high Spearman correlations between lay participants and experts for immunohistochemistry scoring (0.91 (0.78, 0.96) and 0.97 (0.91, 0.99) for lung/EGFR and bladder/p53 samples, respectively). Conclusions: These results establish crowdsourcing as a promising method to screen large data sets for biomarkers in cancer pathology research across a range of cancers and immunohistochemical stains. PMID:27959886
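The Spearman correlations reported between lay participants and experts are Pearson correlations computed on ranks. A minimal self-contained sketch, with tie-aware average ranks and invented example scores (not the study's data), follows.

```python
def ranks(xs):
    """Average ranks, 1-based; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over the tie group
        avg = (i + j) / 2 + 1           # mean of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical crowd vs expert immunohistochemistry scores for six cores:
crowd  = [0, 1, 1, 2, 3, 3]
expert = [0, 1, 2, 2, 3, 3]
print(spearman(crowd, expert))  # about 0.95
```

Because it works on ranks, the statistic captures the monotone agreement between crowd and expert scoring without assuming the scores are on a linear scale.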
NUMERICAL INTEGRAL OF RESISTANCE COEFFICIENTS IN DIFFUSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Q. S., E-mail: zqs@ynao.ac.cn
2017-01-10
The resistance coefficients in the screened Coulomb potential of stellar plasma are evaluated to high accuracy. I have analyzed the possible singularities in the integral of the scattering angle. There are possible singularities in the case of an attractive potential, which may cause problems for the numerical integral. In order to avoid them, I have used a proper scheme, e.g., splitting into many subintervals whose widths are determined by the variation of the integrand, to calculate the scattering angle. The collision integrals are calculated using Romberg's method, so the accuracy is high (∼10⁻¹²). The results for the collision integrals and their derivatives for −7 ≤ ψ ≤ 5 are listed. By using Hermite polynomial interpolation on those data, the collision integrals can be obtained with an accuracy of 10⁻¹⁰. For very weakly coupled plasma (ψ ≥ 4.5), analytical fittings for the collision integrals are available with an accuracy of 10⁻¹¹. I have compared the final results of the resistance coefficients with other works and found that, for a repulsive potential, the results are basically the same as others'; for an attractive potential, the results in cases of intermediate and strong coupling show significant differences. The resulting resistance coefficients are tested in the solar model. Compared with the widely used models of Cox et al. and Thoul et al., the resistance coefficients in the screened Coulomb potential lead to a slightly weaker effect in the solar model, which is contrary to the expectation of attempts to solve the solar abundance problem.
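Romberg's method, which underlies the ∼10⁻¹² accuracy quoted above, combines trapezoid-rule refinements with Richardson extrapolation. A minimal sketch on a simple test integrand follows; it is not the paper's collision-integral code, and the convergence test on the diagonal entries is one common stopping rule.

```python
import math

def romberg(f, a, b, max_k=10, tol=1e-12):
    """Romberg integration: halve the trapezoid step, then accelerate the
    estimates by Richardson extrapolation until the diagonal converges."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]
    for k in range(1, max_k + 1):
        h = (b - a) / 2 ** k
        # Trapezoid rule at doubled resolution, reusing the previous sum:
        new_pts = sum(f(a + (2 * i - 1) * h)
                      for i in range(1, 2 ** (k - 1) + 1))
        row = [0.5 * R[k - 1][0] + h * new_pts]
        for j in range(1, k + 1):       # Richardson extrapolation
            row.append(row[j - 1]
                       + (row[j - 1] - R[k - 1][j - 1]) / (4 ** j - 1))
        R.append(row)
        if abs(R[k][k] - R[k - 1][k - 1]) < tol:
            return R[k][k]
    return R[-1][-1]

print(romberg(math.sin, 0.0, math.pi))  # exact value is 2
```

For smooth integrands the diagonal converges very rapidly; the paper's subinterval splitting handles the near-singular attractive-potential cases where this smoothness breaks down.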
van den Broek, Frank J C; Fockens, Paul; Van Eeden, Susanne; Kara, Mohammed A; Hardwick, James C H; Reitsma, Johannes B; Dekker, Evelien
2009-03-01
Endoscopic trimodal imaging (ETMI) incorporates high-resolution endoscopy (HRE) and autofluorescence imaging (AFI) for adenoma detection, and narrow-band imaging (NBI) for differentiation of adenomas from nonneoplastic polyps. The aim of this study was to compare AFI with HRE for adenoma detection and to assess the diagnostic accuracy of NBI for differentiation of polyps. This was a randomized trial of tandem colonoscopies. The study was performed at the Academic Medical Center in Amsterdam. One hundred patients underwent colonoscopy with ETMI. Each colonic segment was examined twice for polyps, once with HRE and once with AFI, in random order per patient. All detected polyps were assessed with NBI for pit pattern and with AFI for color, and subsequently removed. Histopathology served as the gold standard for diagnosis. The main outcome measures of this study were adenoma miss-rates of AFI and HRE, and diagnostic accuracy of NBI and AFI for differentiating adenomas from nonneoplastic polyps. Among 50 patients examined with AFI first, 32 adenomas were detected initially. Subsequent inspection with HRE identified 8 additional adenomas. Among 50 patients examined with HRE first, 35 adenomas were detected initially. Successive AFI yielded 14 additional adenomas. The adenoma miss-rates of AFI and HRE therefore were 20% and 29%, respectively (P = .351). The sensitivity, specificity, and overall accuracy of NBI for differentiation were 90%, 70%, and 79%, respectively; corresponding figures for AFI were 99%, 35%, and 63%, respectively. The overall adenoma miss-rate was 25%; AFI did not significantly reduce the adenoma miss-rate compared with HRE. Both NBI and AFI had a disappointing diagnostic accuracy for polyp differentiation, although AFI had a high sensitivity.
Dusenberry, Michael W; Brown, Charles K; Brewer, Kori L
2017-02-01
To construct an artificial neural network (ANN) model that can predict the presence of acute CT findings with both high sensitivity and high specificity when applied to the population of patients ≥ 65 years of age who have incurred minor head injury after a fall. An ANN was created in the Python programming language using a population of 514 patients ≥ 65 years of age presenting to the ED with minor head injury after a fall. The patient dataset was divided into three parts: 60% for "training", 20% for "cross validation", and 20% for "testing". Sensitivity, specificity, positive and negative predictive values, and accuracy were determined by comparing the model's predictions to the actual correct answers for each patient. On the "cross validation" data, the model attained a sensitivity ("recall") of 100.00%, specificity of 78.95%, PPV ("precision") of 78.95%, NPV of 100.00%, and accuracy of 88.24% in detecting the presence of positive head CTs. On the "test" data, the model attained a sensitivity of 97.78%, specificity of 89.47%, PPV of 88.00%, NPV of 98.08%, and accuracy of 93.14% in detecting the presence of positive head CTs. ANNs show great potential for predicting CT findings in the population of patients ≥ 65 years of age presenting with minor head injury after a fall. As a good first step, the ANN showed comparable sensitivity, predictive values, and accuracy, with a much higher specificity than the existing decision rules in clinical usage for predicting head CTs with acute intracranial findings. Copyright © 2016 Elsevier Inc. All rights reserved.
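The screening metrics quoted above all derive from a single 2x2 confusion matrix. The sketch below uses hypothetical counts chosen to be consistent with the reported test-set percentages (an assumption; the abstract reports only rates, not raw counts):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts consistent with the reported test-set rates:
m = diagnostic_metrics(tp=44, fp=6, tn=51, fn=1)
```

With these counts the function reproduces the abstract's 97.78% sensitivity, 89.47% specificity, 88.00% PPV, 98.08% NPV, and 93.14% accuracy.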
Oliver, D; Kotlicka-Antczak, M; Minichino, A; Spada, G; McGuire, P; Fusar-Poli, P
2018-03-01
Primary indicated prevention is reliant on accurate tools to predict the onset of psychosis. The gold standard assessment for detecting individuals at clinical high risk (CHR-P) for psychosis in the UK and many other countries is the Comprehensive Assessment of At-Risk Mental States (CAARMS). While the prognostic accuracy of CHR-P instruments has been assessed in general, this is the first study to specifically analyse that of the CAARMS. As such, the CAARMS was used as the index test, with the reference index being psychosis onset within 2 years. Six independent studies were analysed using MIDAS (STATA 14), with a total of 1876 help-seeking subjects referred to high risk services (CHR-P+: n=892; CHR-P-: n=984). Area under the curve (AUC), summary receiver operating characteristic (SROC) curves, quality assessment, likelihood ratios, and probability-modified plots were computed, along with sensitivity analyses and meta-regressions. The current meta-analysis confirmed that the 2-year prognostic accuracy of the CAARMS is only acceptable (AUC=0.79, 95% CI: 0.75-0.83) and not outstanding as previously reported. In particular, specificity was poor. Sensitivity of the CAARMS is inferior compared to the SIPS, while specificity is comparably low. However, due to the difficulties in performing these types of studies, power in this meta-analysis was low. These results indicate that refining and improving the prognostic accuracy of the CAARMS should be the mainstream area of research for the next era. Avenues of prediction improvement are critically discussed and presented to better benefit patients and improve outcomes of first episode psychosis. Copyright © 2017 The Authors. Published by Elsevier Masson SAS. All rights reserved.
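Likelihood ratios, one of the summary statistics computed in the meta-analysis above, follow directly from sensitivity and specificity. A minimal sketch with illustrative values (not the CAARMS estimates from the study):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    lr_pos = sensitivity / (1 - specificity)  # how much a positive test raises odds
    lr_neg = (1 - sensitivity) / specificity  # how much a negative test lowers odds
    return lr_pos, lr_neg

# Illustrative values only:
lr_pos, lr_neg = likelihood_ratios(0.90, 0.60)
```

A test is clinically informative when LR+ is well above 1 and LR- well below 1; poor specificity, as found for the CAARMS, drags LR+ toward 1.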
Generation of a high-accuracy regional DEM based on ALOS/PRISM imagery of East Antarctica
NASA Astrophysics Data System (ADS)
Shiramizu, Kaoru; Doi, Koichiro; Aoyama, Yuichi
2017-12-01
A digital elevation model (DEM) is used to estimate ice-flow velocities for an ice sheet and glaciers via Differential Interferometric Synthetic Aperture Radar (DInSAR) processing. The accuracy of DInSAR-derived displacement estimates depends upon the accuracy of the DEM. Therefore, we used stereo optical images, obtained with a panchromatic remote-sensing instrument for stereo mapping (PRISM) sensor mounted onboard the Advanced Land Observing Satellite (ALOS), to produce a new DEM ("PRISM-DEM") of part of the coastal region of Lützow-Holm Bay in Dronning Maud Land, East Antarctica. We verified the accuracy of the PRISM-DEM by comparing ellipsoidal heights with those of existing DEMs and values obtained by satellite laser altimetry (ICESat/GLAS) and Global Navigation Satellite System surveying. The accuracy of the PRISM-DEM is estimated to be 2.80 m over ice sheet, 4.86 m over individual glaciers, and 6.63 m over rock outcrops. By comparison, the estimated accuracy of the ASTER-GDEM, widely used in polar regions, is 33.45 m over ice sheet, 14.61 m over glaciers, and 19.95 m over rock outcrops. For displacement measurements made along the radar line-of-sight by DInSAR, in conjunction with ALOS/PALSAR data, the accuracy of the PRISM-DEM and ASTER-GDEM correspond to estimation errors of <6.3 mm and <31.8 mm, respectively.
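DEM accuracy figures like those above are typically root-mean-square differences between the DEM and reference heights (here ICESat/GLAS and GNSS). A hedged sketch with hypothetical check-point heights (the verification in the paper is more involved):

```python
import math

def rmse(estimated, reference):
    """Root-mean-square error between DEM heights and reference heights."""
    n = len(estimated)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference)) / n)

# Hypothetical ellipsoidal heights (metres) at a few check points:
err = rmse([102.1, 98.7, 110.4, 95.0], [101.0, 99.5, 109.0, 96.2])
```

Computed over many check points per surface class, this is the kind of statistic behind the 2.80 m (ice sheet) versus 33.45 m (ASTER-GDEM) comparison.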
NASA Astrophysics Data System (ADS)
Störkle, Denis Daniel; Seim, Patrick; Thyssen, Lars; Kuhlenkötter, Bernd
2016-10-01
This article describes new developments in an incremental, robot-based sheet metal forming process ('Roboforming') for the production of sheet metal components for small lot sizes and prototypes. The dieless kinematic-based generation of the shape is implemented by means of two industrial robots, which are interconnected to a cooperating robot system. Compared to other incremental sheet metal forming (ISF) machines, this system offers high geometrical form flexibility without the need of any part-dependent tools. The industrial application of ISF is still limited by certain constraints, e.g. the low geometrical accuracy. Responding to these constraints, the authors present the influence of the part orientation and the forming sequence on the geometric accuracy. Their influence is illustrated with the help of various experimental results shown and interpreted within this article.
Modeling of profilometry with laser focus sensors
NASA Astrophysics Data System (ADS)
Bischoff, Jörg; Manske, Eberhard; Baitinger, Henner
2011-05-01
Metrology is of paramount importance in submicron patterning. Particularly, line width and overlay have to be measured very accurately. Appropriated metrology techniques are scanning electron microscopy and optical scatterometry. The latter is non-invasive, highly accurate and enables optical cross sections of layer stacks but it requires periodic patterns. Scanning laser focus sensors are a viable alternative enabling the measurement of non-periodic features. Severe limitations are imposed by the diffraction limit determining the edge location accuracy. It will be shown that the accuracy can be greatly improved by means of rigorous modeling. To this end, a fully vectorial 2.5-dimensional model has been developed based on rigorous Maxwell solvers and combined with models for the scanning and various autofocus principles. The simulations are compared with experimental results. Moreover, the simulations are directly utilized to improve the edge location accuracy.
Münßinger, Jana I.; Halder, Sebastian; Kleih, Sonja C.; Furdea, Adrian; Raco, Valerio; Hösle, Adi; Kübler, Andrea
2010-01-01
Brain–computer interfaces (BCIs) enable paralyzed patients to communicate; however, to date, no creative expression has been possible. The current study investigated the accuracy and user-friendliness of P300-Brain Painting, a new BCI application developed to paint pictures using brain activity only. Two different versions of the P300-Brain Painting application were tested: a colored matrix tested by a group of ALS-patients (n = 3) and healthy participants (n = 10), and a black and white matrix tested by healthy participants (n = 10). The three ALS-patients achieved high accuracies; two of them reaching above 89% accuracy. In healthy subjects, a comparison between the P300-Brain Painting application (colored matrix) and the P300-Spelling application revealed significantly lower accuracy and P300 amplitudes for the P300-Brain Painting application. This drop in accuracy and P300 amplitudes was not found when comparing the P300-Spelling application to an adapted, black and white matrix of the P300-Brain Painting application. By employing a black and white matrix, the accuracy of the P300-Brain Painting application was significantly enhanced and reached the accuracy of the P300-Spelling application. ALS-patients greatly enjoyed P300-Brain Painting and were able to use the application with the same accuracy as healthy subjects. P300-Brain Painting enables paralyzed patients to express themselves creatively and to participate in the prolific society through exhibitions. PMID:21151375
Pineles, Lisa L; Morgan, Daniel J; Limper, Heather M; Weber, Stephen G; Thom, Kerri A; Perencevich, Eli N; Harris, Anthony D; Landon, Emily
2014-02-01
Hand hygiene (HH) is a critical part of infection prevention in health care settings. Hospitals around the world continuously struggle to improve health care personnel (HCP) HH compliance. The current gold standard for monitoring compliance is direct observation; however, this method is time-consuming and costly. One emerging area of interest involves automated systems for monitoring HH behavior such as radiofrequency identification (RFID) tracking systems. To assess the accuracy of a commercially available RFID system in detecting HCP HH behavior, we compared direct observation with data collected by the RFID system in a simulated validation setting and in a real-life clinical setting across 2 hospitals. A total of 1,554 HH events was observed. Accuracy for identifying HH events was high in the simulated validation setting (88.5%) but relatively low in the real-life clinical setting (52.4%). This difference was significant (P < .01). Accuracy for detecting HCP movement into and out of patient rooms was also high in the simulated setting but not in the real-life clinical setting (100% on entry and exit in simulated setting vs 54.3% entry and 49.5% exit in real-life clinical setting, P < .01). In this validation study of an RFID system, almost half of the HH events were missed. More research is necessary to further develop these systems and improve accuracy prior to widespread adoption. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
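The significance test behind the 88.5% versus 52.4% comparison above can be sketched as a two-proportion z statistic. The per-setting event counts are not given in the abstract, so the split below is purely hypothetical:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for comparing two independent proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical split of the 1,554 events between settings:
z = two_proportion_z(177, 200, 709, 1354)
```

At sample sizes anywhere near these, a 36-percentage-point accuracy gap is far beyond conventional significance thresholds, consistent with the reported P < .01.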
Pawar, Shivshakti D; Naik, Jayashri D; Prabhu, Priya; Jatti, Gajanan M; Jadhav, Sachin B; Radhe, B K
2017-01-01
India is becoming the diabetes capital of the world. This steadily increasing incidence of diabetes is putting an additional burden on health care in India. Unfortunately, half of diabetic individuals are unaware of their diabetic status. Hence, there is an urgent need for an effective screening instrument to identify individuals at risk of diabetes. The aim was to evaluate and compare the diagnostic accuracy and clinical utility of the Indian Diabetes Risk Score (IDRS) and the Finnish Diabetes Risk Score (FINDRISC). This is a retrospective, record-based study of a diabetes detection camp organized by a teaching hospital. Of the 780 people who attended this camp voluntarily, only 763 fulfilled the inclusion criteria of the study. The camp pro forma followed the World Health Organization STEP guidelines for surveillance of noncommunicable diseases and included primary sociodemographic characteristics, physical measurements, and clinical examination, followed by random blood glucose estimation for each individual. The diagnostic accuracy of the IDRS and FINDRISC was compared using receiver operating characteristic (ROC) curves. Sensitivity, specificity, likelihood ratios, and positive and negative predictive values were compared, as was the clinical utility index (CUI) of each score. SPSS version 22, Stata 13, and R 3.2.9 were used. Of the 763 individuals, 38 were newly diagnosed diabetics. The IDRS placed 347 people and the FINDRISC 96 people in the high-risk category for diabetes. The odds ratio for high-risk people being affected by diabetes was 10.70 for the FINDRISC and 4.79 for the IDRS. The areas under the ROC curves of the two scores did not differ significantly (P = 0.98). Sensitivity and specificity were 78.95% and 56.14% for the IDRS, and 55.26% and 89.66% for the FINDRISC, respectively. The CUI was excellent (0.86) for the FINDRISC, whereas for the IDRS it was satisfactory (0.54). A Bland-Altman plot and Cohen's kappa suggested fair agreement between the scores in measuring diabetes risk.
The diagnostic accuracy and clinical utility of the FINDRISC are somewhat better than those of the IDRS.
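The odds ratios above come from 2x2 tables of risk category against diabetes status. A minimal sketch with hypothetical counts (the study's raw table is not given in the abstract):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = high-risk & diabetic,   b = high-risk & non-diabetic,
    c = low-risk  & diabetic,   d = low-risk  & non-diabetic."""
    return (a * d) / (b * c)

# Hypothetical counts for illustration only:
or_value = odds_ratio(20, 80, 5, 95)
```

An odds ratio well above 1, like the FINDRISC's 10.70, means high-risk classification strongly raises the odds of actually being diabetic.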
Caggiano, Michael D; Tinkham, Wade T; Hoffman, Chad; Cheng, Antony S; Hawbaker, Todd J
2016-10-01
The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States resulting in increased wildfire risk to homes and communities. Although census based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Aerial Image Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m²) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI in diverse landscapes, achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming cost and time constraints associated with traditional approaches.
This study is unique in that it evaluates the ability of an OBIA approach to extract highly detailed data on building locations in a WUI setting.
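Building-extraction studies like the one above usually summarise agreement with a footprint control dataset via completeness, correctness, and quality. The exact definitions used in the paper are not given in the abstract, so the standard forms below are an assumption:

```python
def extraction_quality(tp, fp, fn):
    """Common building-extraction metrics from matched/unmatched counts."""
    return {
        "completeness": tp / (tp + fn),     # share of reference buildings found
        "correctness": tp / (tp + fp),      # share of extracted buildings that are real
        "quality": tp / (tp + fp + fn),     # combined measure penalising both error types
    }

# Hypothetical counts at one buffer distance:
q = extraction_quality(tp=90, fp=10, fn=10)
```

Quality is always the strictest of the three, since it penalises both missed and spurious buildings at once.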
Ammer, F.K.; Wood, P.B.; McPherson, R.J.
2008-01-01
Correct gender identification in monomorphic species is often difficult especially if males and females do not display obvious behavioral and breeding differences. We compared gender specific morphology and behavior with recently developed DNA techniques for gender identification in the monomorphic Grasshopper Sparrow (Ammodramus savannarum). Gender was ascertained with DNA in 213 individuals using the 2550F/2718R primer set and 3% agarose gel electrophoresis. Field observations using behavior and breeding characteristics to identify gender matched DNA analyses with 100% accuracy for adult males and females. Gender was identified with DNA for all captured juveniles that did not display gender specific traits or behaviors in the field. The molecular techniques used offered a high level of accuracy and may be useful in studies of dispersal mechanisms and winter assemblage composition in monomorphic species.
A simple video-based timing system for on-ice team testing in ice hockey: a technical report.
Larson, David P; Noonan, Benjamin C
2014-09-01
The purpose of this study was to describe and evaluate a newly developed on-ice timing system for team evaluation in the sport of ice hockey. We hypothesized that this new, simple, inexpensive, timing system would prove to be highly accurate and reliable. Six adult subjects (age 30.4 ± 6.2 years) performed on ice tests of acceleration and conditioning. The performance times of the subjects were recorded using a handheld stopwatch, photocell, and high-speed (240 frames per second) video. These results were then compared to allow for accuracy calculations of the stopwatch and video as compared with filtered photocell timing that was used as the "gold standard." Accuracy was evaluated using maximal differences, typical error/coefficient of variation (CV), and intraclass correlation coefficients (ICCs) between the timing methods. The reliability of the video method was evaluated using the same variables in a test-retest analysis both within and between evaluators. The video timing method proved to be both highly accurate (ICC: 0.96-0.99 and CV: 0.1-0.6% as compared with the photocell method) and reliable (ICC and CV within and between evaluators: 0.99 and 0.08%, respectively). This video-based timing method provides a very rapid means of collecting a high volume of very accurate and reliable on-ice measures of skating speed and conditioning, and can easily be adapted to other testing surfaces and parameters.
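The typical error and coefficient of variation (CV) used above to compare timing methods can be sketched from paired trials. A hedged illustration with hypothetical sprint times (not data from the study); the typical-error formula assumed here is the common sd-of-differences form:

```python
import math
import statistics

def typical_error_cv(method_a, method_b):
    """Typical error between paired timing methods, and as a % of the grand mean (CV)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    te = statistics.stdev(diffs) / math.sqrt(2)  # typical error of measurement
    grand_mean = statistics.mean(method_a + method_b)
    return te, 100.0 * te / grand_mean

# Hypothetical paired sprint times (s) from video vs photocell:
te, cv_percent = typical_error_cv([4.10, 4.22, 4.05], [4.12, 4.20, 4.06])
```

A CV in the 0.1-0.6% range, as reported for the video method, corresponds to disagreements of only a few hundredths of a second on a ~4 s sprint.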
Detecting atrial fibrillation by deep convolutional neural networks.
Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui
2018-02-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-time Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to STFT output and SWT output were developed. Our new method did not require detection of P or R peaks, nor feature designs for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performances on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% was achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
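The STFT step above turns a 1-D ECG segment into the 2-D time-frequency matrix fed to the network. A minimal, naive sketch (Hann-windowed frames with a direct DFT each; not the authors' pipeline, which would use an FFT library):

```python
import cmath
import math

def stft_magnitude(x, win_len, hop):
    """Naive short-time Fourier transform: Hann-windowed frames, DFT per frame."""
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (win_len - 1))
            for n in range(win_len)]
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = [x[start + n] * hann[n] for n in range(win_len)]
        row = []
        for k in range(win_len // 2 + 1):  # one-sided spectrum
            s = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                    for n in range(win_len))
            row.append(abs(s))
        frames.append(row)
    return frames  # time x frequency magnitude matrix

# An 8 Hz sine sampled at 64 Hz should peak in DFT bin 8*32/64 = 4:
fs = 64
x = [math.sin(2 * math.pi * 8 * t / fs) for t in range(2 * fs)]
S = stft_magnitude(x, win_len=32, hop=16)
peak_bin = max(range(len(S[0])), key=lambda k: S[0][k])
```

Each row of the resulting matrix is one time slice, which is exactly the kind of 2-D input a convolutional network can consume.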
A comparison of cosmological hydrodynamic codes
NASA Technical Reports Server (NTRS)
Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.
1994-01-01
We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Ω = Ω_b = 1, and σ₈ = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L³, where L = 64 h⁻¹ Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics (SPH) Lagrangian approach. The Eulerian codes were run at N³ = 32³, 64³, 128³, and 256³ cells, the SPH codes at N³ = 32³ and 64³ particles. Results were then rebinned to a 16³ grid with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as ⟨T⟩ and ⟨ρ²⟩^(1/2) persist at the 3%-17% level. The codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by ρ²) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions.
Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic codes and of suiting their use to problems which exploit their best individual features.
NASA Astrophysics Data System (ADS)
Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude
2010-02-01
Diabetic retinopathy (DR) is a chronic eye disease for which early detection is highly essential to avoid any fatal results. Image processing of retinal images emerges as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, which is a significant technique to detect the abnormality in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are the widely preferred artificial intelligence technique, since they yield superior results in terms of classification accuracy. In this work, a radial basis function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images and normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis is performed with the results of a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results show promising results for the neural classifier in terms of the performance measures.
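An RBF network like the one above consists of hidden units with Gaussian responses around learned centres, combined by a linear output layer. A minimal forward-pass sketch (training of centres, widths, and weights is omitted; all values here are illustrative):

```python
import math

def rbf_forward(x, centers, widths, weights, bias):
    """Forward pass of a radial basis function network with Gaussian units."""
    activations = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2.0 * s * s))
        for c, s in zip(centers, widths)
    ]
    return bias + sum(w * a for w, a in zip(weights, activations))

# A sample at a unit's centre activates it fully (exp(0) = 1):
score = rbf_forward([0.0, 0.0], centers=[[0.0, 0.0]], widths=[1.0],
                    weights=[2.0], bias=0.5)
```

For bi-level classification the output score is simply thresholded to separate abnormal DR images from normal ones.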
Model-based phase-shifting interferometer
NASA Astrophysics Data System (ADS)
Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian
2015-10-01
A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique is proposed instead of the traditional complicated system structure, to achieve versatile, high precision and quantitative surface tests. In the MPI, the partial null lens (PNL) is employed to implement the non-null test. With some alternative PNLs, similar to the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling technique, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of non-null test, as well as figure error reconstruction. A self-compiled ray-tracing program is set up for the accurate system modeling and reverse ray tracing. The surface figure error then can be easily extracted from the wavefront data in the form of Zernike polynomials by the ROR method. Experiments on spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a Zygo interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI would possess large potential in modern optical shop testing.
High-accuracy peak picking of proteomics data using wavelet techniques.
Lange, Eva; Gröpl, Clemens; Reinert, Knut; Kohlbacher, Oliver; Hildebrandt, Andreas
2006-01-01
A new peak picking algorithm for the analysis of mass spectrometric (MS) data is presented. It is independent of the underlying machine or ionization method, and is able to resolve highly convoluted and asymmetric signals. The method uses the multiscale nature of spectrometric data by first detecting the mass peaks in the wavelet-transformed signal before a given asymmetric peak function is fitted to the raw data. In an optional third stage, the resulting fit can be further improved using techniques from nonlinear optimization. In contrast to currently established techniques (e.g. SNAP, Apex) our algorithm is able to separate overlapping peaks of multiply charged peptides in ESI-MS data of low resolution. Its improved accuracy with respect to peak positions makes it a valuable preprocessing method for MS-based identification and quantification experiments. The method has been validated on a number of different annotated test cases, where it compares favorably in both runtime and accuracy with currently established techniques. An implementation of the algorithm is freely available in our open source framework OpenMS.
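The peak picker above first locates mass peaks in the wavelet-transformed signal. A simplified single-scale sketch of that first stage, using the Ricker ("Mexican hat") wavelet (the published algorithm is multiscale and then fits an asymmetric peak function, which is omitted here):

```python
import math

def ricker(points, a):
    """Ricker ('Mexican hat') wavelet, a common choice for CWT peak picking."""
    amp = 2.0 / (math.sqrt(3.0 * a) * math.pi ** 0.25)
    centred = ((i - (points - 1) / 2.0) for i in range(points))
    return [amp * (1 - (t / a) ** 2) * math.exp(-(t / a) ** 2 / 2.0)
            for t in centred]

def cwt_single_scale(x, a):
    """'Same'-mode convolution of the signal with a Ricker wavelet of scale a."""
    w = ricker(min(10 * a + 1, len(x)), a)  # odd length keeps the wavelet centred
    half = len(w) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, wj in enumerate(w):
            k = i + j - half
            if 0 <= k < len(x):
                s += x[k] * wj
        out.append(s)
    return out

# A Gaussian peak centred at index 50 gives the largest wavelet response there:
signal = [math.exp(-((i - 50) / 4.0) ** 2) for i in range(100)]
response = cwt_single_scale(signal, a=5)
peak_index = max(range(len(response)), key=lambda i: response[i])
```

Because the Ricker wavelet has zero mean, slowly varying baseline is suppressed while peaks of matching width are enhanced, which is what makes wavelet-domain peak detection robust on noisy, convoluted spectra.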