Thematic accuracy assessment of the 2011 National Land Cover Database (NLCD)
Wickham, James; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Sorenson, Daniel G.; Granneman, Brian J.; Poss, Richard V.; Baer, Lori Anne
2017-01-01
Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for the 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no-change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest loss, forest gain, and urban gain had user's accuracies that exceeded 70%. Lower user's accuracies for the other change reporting themes may be attributable to the difficulty in determining the context of grass (e.g., open urban, grassland, agriculture) and in distinguishing between the components of the forest-shrubland-grassland gradient at either the mapping phase, the reference label assignment phase, or both.
NLCD 2011 user's accuracies for forest loss, forest gain, and urban gain compare favorably with results from other land cover change accuracy assessments.
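The agreement statistics used throughout this record (overall, user's, and producer's accuracy) all derive from an error matrix that cross-tabulates map labels against reference labels. A minimal sketch with hypothetical counts (not the NLCD data):

```python
# Hypothetical 3-class error matrix: rows = map labels,
# columns = reference labels (counts of sample pixels).
matrix = [
    [80, 5, 3],   # mapped class 1 (e.g., forest)
    [6, 70, 9],   # mapped class 2 (e.g., urban)
    [4, 8, 65],   # mapped class 3 (e.g., agriculture)
]

n = sum(sum(row) for row in matrix)
# Overall accuracy: fraction of samples on the diagonal.
overall = sum(matrix[i][i] for i in range(3)) / n
# User's accuracy (per map class): correct / row total,
# the complement of commission error.
users = [matrix[i][i] / sum(matrix[i]) for i in range(3)]
# Producer's accuracy (per reference class): correct / column total,
# the complement of omission error.
producers = [matrix[i][i] / sum(row[i] for row in matrix) for i in range(3)]
```

With these counts, overall accuracy is 215/250 = 86%; a real NLCD-style assessment would additionally weight the cell counts by the stratified sampling design.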
Accuracy assessment of NLCD 2006 land cover and impervious surface
Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.
2013-01-01
Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy.
Krause, Jean C; Tessler, Morgan P
2016-10-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Junod, Olivier; de Roten, Yves; Martinez, Elena; Drapeau, Martin; Despland, Jean-Nicolas
2005-12-01
This pilot study examined the accuracy of therapist defence interpretations (TAD) in high-alliance patients (N = 7) and low-alliance patients (N = 8). TAD accuracy was assessed in the two subgroups by comparing, for each case, the patient's most frequent defensive level with the most frequent defensive level addressed by the therapist when making defence interpretations. Results show that in high-alliance patient-therapist dyads, therapists tend to address an accurate or higher (more mature) defensive level than the patient's most frequent level. In low-alliance dyads, on the other hand, therapists address a lower (more immature) defensive level. These results are discussed along with possible ways to better assess TAD accuracy.
Fitzpatrick, Katherine A.
1975-01-01
Accuracy analyses for the land use maps of the Central Atlantic Regional Ecological Test Site were performed for a 1-percent sample of the area. Researchers compared Level II land use maps produced at three scales, 1:24,000, 1:100,000, and 1:250,000, from high-altitude photography, with each other and with point data obtained in the field. They employed the same procedures to determine the accuracy of the Level I land use maps produced at 1:250,000 from high-altitude photography and color composite ERTS imagery. The accuracy of the Level II maps was 84.9 percent at 1:24,000, 77.4 percent at 1:100,000, and 73.0 percent at 1:250,000. The accuracy of the Level I 1:250,000 maps produced from high-altitude aircraft photography was 76.5 percent, and for those produced from ERTS imagery it was 69.5 percent. The cost of Level II land use mapping at 1:24,000 was found to be high ($11.93 per km2). The cost of mapping at 1:100,000 ($1.75) was about two times as expensive as mapping at 1:250,000 ($.88), and the accuracy increased by only 4.4 percent. Level I land use maps, when mapped from high-altitude photography, were about four times as expensive as the maps produced from ERTS imagery, although the accuracy is 7.0 percent greater. The Level I land use category that is least accurately mapped from ERTS imagery is urban and built-up land in the non-urban areas; in the urbanized areas, built-up land is more reliably mapped.
Zhang, Yang; Xiao, Xiong; Zhang, Junting; Gao, Zhixian; Ji, Nan; Zhang, Liwei
2017-06-01
To evaluate the diagnostic accuracy of routine blood examinations and Cerebrospinal Fluid (CSF) lactate level for Post-neurosurgical Bacterial Meningitis (PBM) in a large sample of post-neurosurgical patients. The diagnostic accuracies of routine blood examinations and CSF lactate level to distinguish between Post-neurosurgical Aseptic Meningitis (PAM) and PBM were evaluated with values of the Area Under the Curve of the Receiver Operating Characteristic (AUC-ROC) by retrospectively analyzing the datasets of post-neurosurgical patients in the clinical information databases. The diagnostic accuracy of routine blood examinations was relatively low (AUC-ROC <0.7). The CSF lactate level achieved rather high diagnostic accuracy (AUC-ROC = 0.891; 95% CI, 0.852-0.922). The variables of patient age, operation duration, surgical diagnosis and postoperative days (the interval between the neurosurgery and examinations) were shown to affect the diagnostic accuracy of these examinations. The variables were integrated with routine blood examinations and CSF lactate level by Fisher discriminant analysis to improve their diagnostic accuracy. As a result, the diagnostic accuracy of blood examinations and CSF lactate level was significantly improved, with AUC-ROC values of 0.760 (95% CI, 0.737-0.782) and 0.921 (95% CI, 0.887-0.948), respectively. The PBM diagnostic accuracy of routine blood examinations was relatively low, whereas the accuracy of CSF lactate level was high. Some variables that are involved in the incidence of PBM can also affect the diagnostic accuracy for PBM. Taking into account the effects of these variables significantly improves the diagnostic accuracies of routine blood examinations and CSF lactate level. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
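The AUC-ROC values quoted above have a useful rank interpretation: the probability that a randomly chosen case scores higher on the marker than a randomly chosen control (the Mann-Whitney formulation). A sketch with hypothetical CSF lactate values, not the study's data:

```python
def auc_roc(cases, controls):
    """AUC-ROC as P(case score > control score), ties counted as 0.5."""
    wins = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(cases) * len(controls))

bacterial = [5.8, 7.1, 4.9, 6.3]  # mmol/L, hypothetical PBM patients
aseptic   = [2.1, 3.0, 2.7, 5.0]  # hypothetical aseptic-meningitis patients
auc = auc_roc(bacterial, aseptic)
```

This O(n*m) loop is fine for illustration; production code would use a rank-based implementation such as the one in scikit-learn.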
NASA Technical Reports Server (NTRS)
Alexander, R. H. (Principal Investigator); Fitzpatrick, K. A.
1975-01-01
The author has identified the following significant results. Level 2 land use maps produced at three scales (1:24,000, 1:100,000, and 1:250,000) from high altitude photography were compared with each other and with point data obtained in the field. The same procedures were employed to determine the accuracy of the Level 1 land use maps produced at 1:250,000 from high altitude photography and color composite ERTS imagery. Accuracy of the Level 2 maps was 84.9 percent at 1:24,000, 77.4 percent at 1:100,000 and 73.0 percent at 1:250,000. Accuracy of the Level 1 1:250,000 maps was 76.5 percent for aerial photographs and 69.5 percent for ERTS imagery. The cost of Level 2 land use mapping at 1:24,000 was found to be high ($11.93 per sq km). The cost of mapping at 1:100,000 ($1.75) was about two times as expensive as mapping at 1:250,000 ($.88), and the accuracy increased by only 4.4 percent.
Making High Accuracy Null Depth Measurements for the LBTI Exozodi Survey
NASA Technical Reports Server (NTRS)
Mennesson, Bertrand; Defrere, Denis; Nowak, Matthias; Hinz, Philip; Millan-Gabet, Rafael; Absil, Olivier; Bailey, Vanessa; Bryden, Geoffrey; Danchi, William C.; Kennedy, Grant M.;
2016-01-01
The characterization of exozodiacal light emission is important both for the understanding of planetary system evolution and for the preparation of future space missions aiming to characterize low-mass planets in the habitable zone of nearby main sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims at providing a ten-fold improvement over the current state of the art, measuring dust emission levels down to a typical accuracy of 12 zodis per star for a representative ensemble of 30+ high-priority targets. Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the target sample. Reaching a 1-sigma measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation (null) levels, i.e., visibilities, at the few-hundred-ppm uncertainty level. We discuss here the challenges posed by making such high-accuracy mid-infrared visibility measurements from the ground and present the methodology we developed for achieving current best levels of 500 ppm or so. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.
Wening, Stefanie; Keith, Nina; Abele, Andrea E
2016-06-01
In negotiations, a focus on interests (why negotiators want something) is key to integrative agreements. Yet, many negotiators spontaneously focus on positions (what they want), with suboptimal outcomes. Our research applies construal-level theory to negotiations and proposes that a high construal level instigates a focus on interests during negotiations which, in turn, positively affects outcomes. In particular, we tested the notion that the effect of construal level on outcomes was mediated by information exchange and judgement accuracy. Finally, we expected the mere mode of presentation of task material to affect construal levels and manipulated construal levels using concrete versus abstract negotiation tasks. In two experiments, participants negotiated in dyads in either a high- or low-construal-level condition. In Study 1, high-construal-level dyads outperformed dyads in the low-construal-level condition; this main effect was mediated by information exchange. Study 2 replicated both the main and mediation effects using judgement accuracy as mediator and additionally yielded a positive effect of a high construal level on a second, more complex negotiation task. These results not only provide empirical evidence for the theoretically proposed link between construal levels and negotiation outcomes but also shed light on the processes underlying this effect. © 2015 The British Psychological Society.
2015-09-01
The process described in this report made use of post-test processing techniques to provide packet-level time tagging with an accuracy close to 3 µs relative to Coordinated ... for each set of test records.
Alternative evaluation metrics for risk adjustment methods.
Park, Sungchul; Basu, Anirban
2018-06-01
Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.
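The group-versus-individual trade-off described above can be made concrete with two simple scores on a toy panel (hypothetical numbers, not MarketScan results): a group-level predictive ratio, and an individual-level mean absolute error that the high-expenditure tail dominates:

```python
# Hypothetical annual expenditures (dollars) for six enrollees and
# a risk-adjustment model's predicted payments for them.
actual    = [1000, 1200, 900, 15000, 800, 1100]
predicted = [1100, 1150, 950, 9000, 850, 1050]

# Group-level score: mean predicted payment / mean actual spending.
predictive_ratio = (sum(predicted) / len(predicted)) / (sum(actual) / len(actual))

# Individual-level score: mean absolute error of the payments.
mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```

Here the single high-cost enrollee accounts for nearly all of the payment shortfall (ratio 0.705) and most of the MAE ($1050), which is exactly the kind of individual-level residual risk that invites plan selection even when group-level fit looks acceptable.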
Gigahertz single-electron pumping in silicon with an accuracy better than 9.2 parts in 10^7
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamahata, Gento, E-mail: yamahata.gento@lab.ntt.co.jp; Karasawa, Takeshi; Fujiwara, Akira
2016-07-04
High-speed and high-accuracy pumping of a single electron is crucial for realizing an accurate current source, which is a promising candidate for a quantum current standard. Here, using a high-accuracy measurement system traceable to primary standards, we evaluate the accuracy of a Si tunable-barrier single-electron pump driven by a single sinusoidal signal. The pump operates at frequencies up to 6.5 GHz, producing a current of more than 1 nA. At 1 GHz, the current plateau with a level of about 160 pA is found to be accurate to better than 0.92 ppm (parts per million), which is a record value for 1-GHz operation. At 2 GHz, the current plateau offset from 1ef (∼320 pA) by 20 ppm is observed. The current quantization accuracy is improved by applying a magnetic field of 14 T, and we observe a current level of 1ef with an accuracy of a few ppm. The presented gigahertz single-electron pumping with a high accuracy is an important step towards a metrological current standard.
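The plateau currents quoted above follow from the defining relation of a single-electron pump, I = e·f (one elementary charge transferred per drive cycle); a quick back-of-envelope check:

```python
# Ideal single-electron pump current: I = e * f.
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs, exact SI value

def pump_current_pa(f_hz):
    """Ideal pump current in picoamperes at drive frequency f_hz."""
    return E_CHARGE * f_hz * 1e12  # A -> pA

print(round(pump_current_pa(1e9), 1))    # 1 GHz  -> 160.2 pA
print(round(pump_current_pa(2e9), 1))    # 2 GHz  -> 320.4 pA
print(round(pump_current_pa(6.5e9), 1))  # 6.5 GHz -> 1041.4 pA, i.e. > 1 nA
```

These values match the abstract's quoted plateau levels of about 160 pA at 1 GHz, ∼320 pA at 2 GHz, and more than 1 nA at 6.5 GHz.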
Adaptation and fallibility in experts' judgments of novice performers.
Larson, Jeffrey S; Billeter, Darron M
2017-02-01
Competition judges are often selected for their expertise, under the belief that a high level of performance expertise should enable accurate judgments of the competitors. Contrary to this assumption, we find evidence that expertise can reduce judgment accuracy. Adaptation level theory proposes that discriminatory capacity decreases with greater distance from one's adaptation level. Because experts' learning has produced an adaptation level close to ideal performance standards, they may be less able to discriminate among lower-level competitors. As a result, expertise increases judgment accuracy of high-level competitions but decreases judgment accuracy of low-level competitions. Additionally, we demonstrate that, consistent with an adaptation level theory account of expert judgment, experts systematically give more critical ratings than intermediates or novices. In summary, this work demonstrates a systematic change in human perception that occurs as task learning increases. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Dubroca, Guilhem; Richert, Michaël; Loiseaux, Didier; Caron, Jérôme; Bézy, Jean-Loup
2015-09-01
To increase the accuracy of earth-observation spectro-imagers, it is necessary to achieve high levels of depolarization of the incoming beam. The preferred device in space instruments is the so-called polarization scrambler, made of birefringent crystal wedges arranged in a single or dual Babinet. Today, with required radiometric accuracies of the order of 0.1%, it is necessary to develop tools to find optimal, low-sensitivity solutions quickly and to measure the performance with a high level of accuracy.
Anxiety, anticipation and contextual information: A test of attentional control theory.
Cocks, Adam J; Jackson, Robin C; Bishop, Daniel T; Williams, A Mark
2016-09-01
We tested the assumptions of Attentional Control Theory (ACT) by examining the impact of anxiety on anticipation using a dynamic, time-constrained task. Moreover, we examined the involvement of high- and low-level cognitive processes in anticipation and how their importance may interact with anxiety. Skilled and less-skilled tennis players anticipated the shots of opponents under low- and high-anxiety conditions. Participants viewed three types of video stimuli, each depicting different levels of contextual information. Performance effectiveness (response accuracy) and processing efficiency (response accuracy divided by corresponding mental effort) were measured. Skilled players recorded higher levels of response accuracy and processing efficiency compared to less-skilled counterparts. Processing efficiency significantly decreased under high- compared to low-anxiety conditions. No difference in response accuracy was observed. When reviewing directional errors, anxiety was most detrimental to performance in the condition conveying only contextual information, suggesting that anxiety may have a greater impact on high-level (top-down) cognitive processes, potentially due to a shift in attentional control. Our findings provide partial support for ACT; anxiety elicited greater decrements in processing efficiency than performance effectiveness, possibly due to predominance of the stimulus-driven attentional system.
THz and Long Path Fourier Transform Spectroscopy of Methanol; Torsionally Coupled High-K Levels
NASA Astrophysics Data System (ADS)
Pearson, John C.; Yu, Shanshan; Drouin, Brian J.; Lees, Ronald M.; Xu, Li-Hong; Billinghurst, Brant E.
2012-06-01
Methanol is nearly ubiquitous in the interstellar gas. The presence of both a-type and b-type dipole moments, asymmetry, and internal rotation assures that any small astronomical observation window will contain multiple methanol transitions. This often allows a great deal about the local physical conditions to be deduced, but only insofar as the spectra are characterized. The Herschel Space Observatory has detected numerous, clearly beam-diluted, methanol transitions with quanta surpassing J = 35 in many regions. Unfortunately, observations of methanol often display strong non-thermal behavior whose modeling requires many additional levels to be included in a radiative transfer analysis. Additionally, the intensities of many more highly excited transitions are strongly dependent on the accuracy of the wave functions used in the calculation. We report a combined Fourier Transform Infrared and THz study targeting the high J and K transitions in the ground torsional manifold. Microwave-accuracy energy levels have been derived to J > 40 and K as high as 20. These levels illuminate a number of strongly resonant torsional interactions that dominate the high-K spectrum of the molecule. Comparison with levels calculated from the rho-axis method Hamiltonian suggests that the rho-axis method should be able to model v_t = 0, 1 and probably v_t = 2 to experimental accuracy. The challenges in determining methanol wave functions to experimental accuracy will be discussed.
Accuracy testing of electric groundwater-level measurement tapes
Jelinski, Jim; Clayton, Christopher S.; Fulford, Janice M.
2015-01-01
The accuracy tests demonstrated that none of the electric-tape models tested consistently met the suggested USGS accuracy of ±0.01 ft. The test data show that the tape models in the study should give a water-level measurement that is accurate to roughly ±0.05 ft per 100 ft without additional calibration. To meet USGS accuracy guidelines, the electric-tape models tested will need to be individually calibrated. Specific conductance also plays a part in tape accuracy. The probes will not work in water with specific conductance values near zero, and the accuracy of one probe was unreliable in very high conductivity water (10,000 microsiemens per centimeter).
Compression Frequency Choice for Compression Mass Gauge Method and Effect on Measurement Accuracy
NASA Astrophysics Data System (ADS)
Fu, Juan; Chen, Xiaoqian; Huang, Yiyong
2013-12-01
It is a difficult job to gauge the liquid fuel mass in a tank on spacecraft under microgravity conditions. Without the presence of strong buoyancy, the configuration of the liquid and gas in the tank is uncertain, and more than one bubble may exist in the liquid part. All of this affects the measurement accuracy of a liquid mass gauge, especially for a method called Compression Mass Gauge (CMG). Four resonance sources affect the choice of compression frequency for the CMG method: structural resonance, liquid sloshing, transducer resonance and bubble resonance. Ground experimental apparatus were designed and built to validate the gauging method and the influence of different compression frequencies at different fill levels on the measurement accuracy. Harmonic phenomena should be considered during filter design when processing test data. Results demonstrate that the ground experiment system performs well with high accuracy, and that the measurement accuracy increases as the compression frequency climbs at low fill levels; low compression frequencies, however, are the better choice for high fill levels. Liquid sloshing causes the measurement accuracy to degrade when the surface is excited into waves by external disturbance at the liquid natural frequency. The measurement accuracy is still acceptable under small-amplitude vibration.
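The principle behind CMG can be sketched with the adiabatic gas law (illustrative numbers, not the paper's apparatus): a small volume modulation dV produces a pressure response dP governed by the ullage (gas) volume, dP ≈ γ·P·dV/V_gas, so measuring dP yields the gas volume and hence the liquid fill level:

```python
# All numbers below are assumed for illustration only.
gamma = 1.4      # adiabatic index of the pressurant gas (assumed)
p = 2.0e5        # mean tank pressure, Pa (assumed)
dv = 1.0e-5      # amplitude of the piston volume modulation, m^3 (assumed)
dp = 14.0        # measured pressure response amplitude, Pa (assumed)
v_tank = 0.5     # known total tank volume, m^3 (assumed)

# Invert the adiabatic relation for the gas (ullage) volume.
v_gas = gamma * p * dv / dp          # -> 0.2 m^3 of gas
fill_level = 1.0 - v_gas / v_tank    # -> 0.6, i.e. tank is 60% liquid
```

Choosing the modulation frequency is the hard part the abstract discusses: it must stay clear of structural, sloshing, transducer and bubble resonances so that the measured dP reflects only gas compression.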
Evidence for a confidence-accuracy relationship in memory for same- and cross-race faces.
Nguyen, Thao B; Pezdek, Kathy; Wixted, John T
2017-12-01
Discrimination accuracy is usually higher for same- than for cross-race faces, a phenomenon known as the cross-race effect (CRE). According to prior research, the CRE occurs because memories for same- and cross-race faces rely on qualitatively different processes. However, according to a continuous dual-process model of recognition memory, memories that rely on qualitatively different processes do not differ in recognition accuracy when confidence is equated. Thus, although there are differences in overall same- and cross-race discrimination accuracy, confidence-specific accuracy (i.e., recognition accuracy at a particular level of confidence) may not differ. We analysed datasets from four recognition memory studies on same- and cross-race faces to test this hypothesis. Confidence ratings reliably predicted recognition accuracy when performance was above chance levels (Experiments 1, 2, and 3) but not when performance was at chance levels (Experiment 4). Furthermore, at each level of confidence, confidence-specific accuracy for same- and cross-race faces did not significantly differ when overall performance was above chance levels (Experiments 1, 2, and 3) but significantly differed when overall performance was at chance levels (Experiment 4). Thus, under certain conditions, high-confidence same-race and cross-race identifications may be equally reliable.
NASA Technical Reports Server (NTRS)
Cibula, William G.; Nyquist, Maurice O.
1987-01-01
An unsupervised computer classification of vegetation/landcover of Olympic National Park and surrounding environs was initially carried out using four bands of Landsat MSS data. The primary objective of the project was to derive a level of landcover classification useful for park management applications while maintaining an acceptably high level of classification accuracy. Initially, nine generalized vegetation/landcover classes were derived. Overall classification accuracy was 91.7 percent. In an attempt to refine the level of classification, a geographic information system (GIS) approach was employed. Topographic data and watershed boundary (inferred precipitation/temperature) data were registered with the Landsat MSS data. The resultant Boolean operations yielded 21 vegetation/landcover classes while maintaining the same level of classification accuracy. The final classification provided much better identification and location of the major forest types within the park at the same high level of accuracy, meeting the project objective. This classification could now serve as input to a GIS, coupled with other ancillary data, to help answer questions for park management programs such as fire management.
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or from poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that strikes an excellent balance between cost, matching accuracy and real-time performance for power line inspection using UAVs. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have lower resource usage and higher matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using the Spartan 6 FPGA. In comparative experiments, the system using the improved algorithms outperformed the system based on the unimproved algorithms in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
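At its core, the range-finding step that follows stereo matching is plain triangulation: depth Z = f·B/d for focal length f (in pixels), camera baseline B, and matched disparity d. A sketch with assumed camera parameters, not those of the paper's UAV rig:

```python
F_PX = 1200.0      # focal length in pixels (assumed camera)
BASELINE_M = 0.25  # distance between the two cameras, metres (assumed)

def depth_m(disparity_px):
    """Depth of a matched point from its disparity, in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return F_PX * BASELINE_M / disparity_px

print(depth_m(15.0))  # -> 20.0: a 15 px disparity puts the point 20 m away
```

The inverse relation between disparity and depth is why matching accuracy matters so much here: at long range a sub-pixel disparity error translates into metres of range error.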
Establishment of a high accuracy geoid correction model and geodata edge match
NASA Astrophysics Data System (ADS)
Xi, Ruifeng
This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter level regional geoids and centimeter level local geoids to meet regional surveying and local engineering requirements. This research also provides a highly accurate static DGPS network data pre-processing, post-processing and adjustment method, and a procedure for a large GPS network like the state-level HARN project. The research also developed an efficient and accurate methodology for joining soil coverages in GIS ARC/INFO. A total of 181 GPS stations have been pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with all stations achieving a 0.5 ppm average relative accuracy. A total of 167 GPS stations in and around Iowa have been included in the adjustment. After evaluating GEOID96 and GEOID99, a more accurate and suitable geoid model has been established for Iowa. This new Iowa regional geoid model improved the accuracy from sub-decimeter (10-20 cm) to 5-10 cm. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.
ERIC Educational Resources Information Center
Kidd, Teresa A.; Saudargas, Richard A.
1988-01-01
The study with two elementary students who had low levels of completion and accuracy on daily arithmetic assignments found that a negative consequence was not necessary and that use of a positive component alone was sufficient to maintain high levels of completion and accuracy. (Author/DB)
Dawson, Samantha L; Baker, Tim; Salzman, Scott
2015-04-01
There is little evidence that useful electronic data could be collected at Australian small rural emergency services. If in future their funding model changed to the Activity-Based Funding model, then they would need to collect and submit more data. We determine whether it is possible to collect episode-level data at six small rural emergency services and quantify the accuracy of eight fields. A prospective cross-sectional study. South-West Victoria, Australia. Six small rural emergency services. We collected and audited episode-level emergency data from participating services between 1 February 2011 and 31 January 2012. A random sample of these data was audited monthly. Research assistants located at each service supported data entry and audited data accuracy for four hours per week. Rates for data completeness, accuracy and total accuracy were calculated using audit data. Episode-level data were collected for 20 224 presentations across six facilities. The audit dataset consisted of 8.5% (1504/17 627) of presentations from five facilities. For all fields audited, the accuracy of entered data was high (>93%). Triage category was deemed appropriate for 95.9% (95% confidence interval (CI): 94.9-96.9%) of the patient records reviewed. Some procedures were missing (28.7%, 95% CI: 27.2-30.3%). No significant improvement in data accuracy over 12 months was observed. All six services collected useful episode-level data for 12 months with four hours per week of assistance. Data entry accuracy was high for all fields audited, and data entry completeness was low for procedures. © 2015 National Rural Health Alliance Inc.
10 CFR 60.10 - Completeness and accuracy of information.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 10 (Energy), NUCLEAR REGULATORY COMMISSION (CONTINUED), DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES, General Provisions, § 60.10 Completeness and accuracy of information. (a) Information...
Boucher, M.S.
1994-01-01
Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the U.S. Department of Energy's Yucca Mountain Project, which is an evaluation of the area to determine its suitability as a potential storage area for high-level nuclear waste. Water-level measurements were taken either manually, using various water-level measuring equipment such as steel tapes, or continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988-90. Precision and accuracy ranges were determined for all phases of the water-level measuring process, and overall accuracy ranges are presented. Precision ranges were determined for three steel tapes using a total of 462 data points. Mean precision ranges of these three tapes ranged from 0.014 foot to 0.026 foot. A mean precision range of 0.093 foot was calculated for the multiconductor cable, using 72 data points. Mean accuracy values were calculated on the basis of calibrations of the steel tapes and the multiconductor cable against a reference steel tape. The mean accuracy values of the steel tapes ranged from 0.053 foot (based on three data points) to 0.078 foot (based on six data points). The mean accuracy of the multiconductor cable was 0.15 foot, based on six data points. Overall accuracy of the water-level measurements was calculated by taking the square root of the sum of the squares of the individual accuracy values. Overall accuracy was calculated to be 0.36 foot for water-level measurements taken with steel tapes, without accounting for the inaccuracy of borehole deviations from vertical. An overall accuracy of 0.36 foot for measurements made with steel tapes is considered satisfactory for this project.
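The abstract's error combination, a root-sum-of-squares of the individual accuracy values, can be sketched as follows. The component values below are hypothetical, chosen only to illustrate the calculation; the report's actual breakdown is not given in the abstract.

```python
import math

def overall_accuracy(components):
    """Combine independent accuracy components by root-sum-of-squares,
    as described for the overall water-level measurement accuracy."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical component accuracies in feet (illustration only).
components = [0.026, 0.078, 0.15, 0.30]
print(round(overall_accuracy(components), 2))  # prints 0.35
```

The root-sum-of-squares form assumes the individual error sources are independent, so their variances add.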
DOT National Transportation Integrated Search
2004-09-01
Conventionally, the road centerline surveys have been performed by the traditional survey methods, providing rather high, even sub-centimeter level of accuracy. The major problem, however, that the Departments of Transportation face, is the s...
High accuracy autonomous navigation using the global positioning system (GPS)
NASA Technical Reports Server (NTRS)
Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul
1997-01-01
The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.
Monitoring Building Deformation with InSAR: Experiments and Validation.
Yang, Kui; Yan, Li; Huang, Guoman; Chen, Chu; Wu, Zhengpeng
2016-12-20
Synthetic Aperture Radar Interferometry (InSAR) techniques are increasingly applied for monitoring land subsidence. The advantages of InSAR include high accuracy and the ability to cover large areas; nevertheless, research validating the use of InSAR on building deformation is limited. In this paper, we test the monitoring capability of InSAR in experiments using two landmark buildings, the Bohai Building and the China Theater, located in Tianjin, China. They were selected as real examples for comparing InSAR and leveling approaches to building deformation. Ten TerraSAR-X images spanning half a year were used in Permanent Scatterer InSAR processing. The extracted InSAR results were processed considering the diversity in both direction and spatial distribution, and were compared with true leveling values in both Ordinary Least Squares (OLS) regression and measurement-of-error analyses. The detailed experimental results for the Bohai Building and the China Theater showed a high correlation between InSAR results and the leveling values. At the same time, the two Root Mean Square Error (RMSE) indexes had values of approximately 1 mm. These analyses show that a millimeter level of accuracy can be achieved by means of the InSAR technique when measuring building deformation. We discuss the differences in accuracy between OLS regression and measurement-of-error analyses, and compare the accuracy index of leveling in order to propose InSAR accuracy levels appropriate for monitoring building deformation. After assessing the advantages and limitations of InSAR techniques in monitoring buildings, further applications are evaluated.
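The validation described, OLS regression between InSAR-derived and leveling-derived displacements plus an RMSE-style measurement-of-error analysis, can be sketched as below. The displacement series are hypothetical and serve only to illustrate the two statistics.

```python
import math

def ols_fit(x, y):
    """Simple ordinary least squares fit y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def rmse(pred, truth):
    """Root mean square error between two displacement series (mm)."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred))

# Hypothetical InSAR vs leveling displacements in mm (illustration only).
insar = [-1.2, -2.5, -3.9, -5.1, -6.6]
level = [-1.0, -2.6, -4.0, -5.3, -6.4]
slope, intercept = ols_fit(level, insar)
error = rmse(insar, level)   # millimetre-level agreement in this example
```

A slope near 1, an intercept near 0, and an RMSE near 1 mm would correspond to the kind of agreement the paper reports.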
The Effects of High- and Low-Anxiety Training on the Anticipation Judgments of Elite Performers.
Alder, David; Ford, Paul R; Causer, Joe; Williams, A Mark
2016-02-01
We examined the effects of high- versus low-anxiety conditions during video-based training of anticipation judgments using international-level badminton players facing serves and the transfer to high-anxiety and field-based conditions. Players were assigned to a high-anxiety training (HA), low-anxiety training (LA) or control group (CON) in a pretraining-posttest design. In the pre- and posttest, players anticipated serves from video and on court under high- and low-anxiety conditions. In the video-based high-anxiety pretest, anticipation response accuracy was lower and final fixations shorter when compared with the low-anxiety pretest. In the low-anxiety posttest, HA and LA demonstrated greater accuracy of judgments and longer final fixations compared with pretest and CON. In the high-anxiety posttest, HA maintained accuracy when compared with the low-anxiety posttest, whereas LA had lower accuracy. In the on-court posttest, the training groups demonstrated greater accuracy of judgments compared with the pretest and CON.
Existing methods for improving the accuracy of digital-to-analog converters
NASA Astrophysics Data System (ADS)
Eielsen, Arnfinn A.; Fleming, Andrew J.
2017-09-01
The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
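A minimal sketch of the dithering idea surveyed in the article: averaging the output of a mismatched DAC over a large, known dither spreads each sample across several adjacent levels, so individual level errors average out and effective linearity improves. The 3-bit level values and dither parameters below are invented for illustration and do not come from the article.

```python
# Hypothetical 3-bit DAC with element mismatch: ideal levels 0..7
# perturbed by small static errors (integral non-linearity).
LEVELS = [0.00, 1.07, 1.94, 3.10, 3.92, 5.05, 6.03, 6.96]

def dac(code):
    """Convert a digital code to the (mismatched) analog level."""
    return LEVELS[max(0, min(7, code))]

def dithered_output(code, period=64):
    """Average the DAC output over one period of a large sawtooth
    dither that is subtracted again after conversion. Neighbouring
    level errors average out, a crude model of the large-dither
    linearization methods compared in the article."""
    total = 0.0
    for k in range(period):
        d = 3.0 * k / period - 1.5          # dither spanning 3 LSB
        q = int(code + d + 0.5)             # quantize the dithered code
        total += dac(q) - d                 # remove the dither afterwards
    return total / period
```

For mid-range codes the dithered average lands much closer to the ideal value than the raw mismatched level, at the cost of bandwidth spent on averaging.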
A design of optical modulation system with pixel-level modulation accuracy
NASA Astrophysics Data System (ADS)
Zheng, Shiwei; Qu, Xinghua; Feng, Wei; Liang, Baoqiu
2018-01-01
Vision measurement has been widely used in the field of dimensional measurement and surface metrology. However, traditional methods of vision measurement have many limits such as low dynamic range and poor reconfigurability. The optical modulation system before image formation has the advantage of high dynamic range, high accuracy and more flexibility, and the modulation accuracy is the key parameter which determines the accuracy and effectiveness of optical modulation system. In this paper, an optical modulation system with pixel level accuracy is designed and built based on multi-points reflective imaging theory and digital micromirror device (DMD). The system consisted of digital micromirror device, CCD camera and lens. Firstly we achieved accurate pixel-to-pixel correspondence between the DMD mirrors and the CCD pixels by moire fringe and an image processing of sampling and interpolation. Then we built three coordinate systems and calculated the mathematic relationship between the coordinate of digital micro-mirror and CCD pixels using a checkerboard pattern. A verification experiment proves that the correspondence error is less than 0.5 pixel. The results show that the modulation accuracy of system meets the requirements of modulation. Furthermore, the high reflecting edge of a metal circular piece can be detected using the system, which proves the effectiveness of the optical modulation system.
High accuracy electronic material level sensor
McEwan, T.E.
1997-03-11
The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference of the reflections at the launcher plate and at the material interface are used to determine the material level. Improved performance is obtained by the incorporation of: (1) a high accuracy time base that is referenced to a quartz crystal, (2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, (3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%. 4 figs.
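The ranging principle described, deriving the material level from the time difference between the launcher-plate reflection and the material-interface reflection, can be sketched as follows. The function name, arguments and velocity-factor parameter are illustrative and not taken from the patent.

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def material_level(t_launch_s, t_interface_s, guide_length_m, vf=1.0):
    """Estimate material height on the guide wire from TDR timing.

    `t_launch_s` and `t_interface_s` are the arrival times of the
    launcher-plate and material-interface reflections; `vf` is the
    velocity factor of the guide above the material (1.0 for air).
    All names and arguments are hypothetical, for illustration only."""
    dt = t_interface_s - t_launch_s        # round-trip time difference, s
    air_gap = (C * vf * dt) / 2.0          # one-way distance to interface
    return guide_length_m - air_gap        # material depth on the guide
```

For example, on a 2 m guide a 10 ns round-trip difference corresponds to an air gap of about 1.5 m, i.e. roughly 0.5 m of material.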
NASA Astrophysics Data System (ADS)
Leskiw, Donald M.; Zhau, Junmei
2000-06-01
This paper reports on results from an ongoing project to develop methodologies for representing and managing multiple, concurrent levels of detail and enabling high performance computing using parallel arrays within distributed object-based simulation frameworks. At this time we present the methodology for representing and managing multiple, concurrent levels of detail and modeling accuracy by using a representation based on the Kalman approach for estimation. The Kalman System Model equations are used to represent model accuracy, Kalman Measurement Model equations provide transformations between heterogeneous levels of detail, and interoperability among disparate abstractions is provided using a form of the Kalman Update equations.
ERIC Educational Resources Information Center
Farrokhi, Farahman; Sattarpour, Simin
2012-01-01
The present article reports the findings of a study that explored (1) whether direct written corrective feedback (CF) can help high-proficiency L2 learners, who have already achieved a rather high level of accuracy in English, improve in the accurate use of two functions of English articles (the use of "a" for first mention and…
Zhao, Yinzhi; Zhang, Peng; Guo, Jiming; Li, Xin; Wang, Jinling; Yang, Fei; Wang, Xinzhe
2018-06-20
Due to the great influence of multipath effects, noise and clock error on pseudorange, the carrier phase double difference equation is widely used in high-precision indoor pseudolite positioning. The initial position is mostly determined by the known point initialization (KPI) method, and then the ambiguities can be fixed with the LAMBDA method. In this paper, a new method for achieving high-precision indoor pseudolite positioning without KPI is proposed. The initial coordinates can be quickly obtained to meet the accuracy requirement of the indoor LAMBDA method. The detailed process of the method is as follows. Aiming at a low-cost single-frequency pseudolite system, the static differential pseudolite system (DPL) method is used to quickly obtain low-accuracy positioning coordinates of the rover station. Then, the ambiguity function method (AFM) is used to search for the coordinates in the corresponding epoch. The coordinates obtained by AFM can meet the initial accuracy requirement of the LAMBDA method, so that the double difference carrier phase ambiguities can be correctly fixed. Following the above steps, high-precision indoor pseudolite positioning can be realized. Several experiments, including static and dynamic tests, were conducted to verify the feasibility of the new method. According to the results, initial coordinates with decimeter-level accuracy can be obtained through the DPL. For the AFM part, a one-meter search scope with two-centimeter or four-centimeter search steps is used to ensure centimeter-level precision and high search efficiency. After dealing with the problem of multiple peaks caused by the ambiguity cosine function, the coordinate information of the maximum ambiguity function value (AFV) is taken as the initial value of the LAMBDA method, and the ambiguities can be fixed quickly. The new method provides accuracies at the centimeter level for dynamic experiments and at the millimeter level for static ones.
He, Xiyang; Zhang, Xiaohong; Tang, Long; Liu, Wanke
2015-12-22
Many applications, such as marine navigation and land vehicle location, require real-time precise positioning under medium or long baseline conditions. In this contribution, we develop a model of real-time kinematic decimeter-level positioning with BeiDou Navigation Satellite System (BDS) triple-frequency signals over medium distances. The ambiguities of two extra-wide-lane (EWL) combinations are fixed first, and then a wide-lane (WL) combination is reformed based on the two EWL combinations for positioning. Theoretical and empirical analyses are given of the ambiguity fixing rate and the positioning accuracy of the presented method. The results indicate that the ambiguity fixing rate can be more than 98% when using BDS medium baseline observations, which is much higher than that of the dual-frequency Hatch-Melbourne-Wübbena (HMW) method. As for positioning accuracy, decimeter-level accuracy can be achieved with this method, which is comparable to that of the carrier-smoothed code differential positioning method. A signal interruption simulation experiment indicates that the proposed method can realize fast high-precision positioning, whereas the carrier-smoothed code differential positioning method needs several hundred seconds to obtain high-precision results. We conclude that a relatively high accuracy and high fixing rate can be achieved for the triple-frequency WL method with single-epoch observations, a significant advantage over the traditional carrier-smoothed code differential positioning method.
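EWL ambiguities are easy to fix because the combined signal has a very long wavelength. This can be sketched with the published BDS-2 carrier frequencies; the specific coefficient choices below are illustrative, since the abstract does not state which combinations the authors used.

```python
C = 299_792_458.0   # speed of light, m/s
# BDS-2 carrier frequencies (Hz): B1, B2, B3
FB1, FB2, FB3 = 1561.098e6, 1207.140e6, 1268.520e6

def combo_wavelength(i, j, k):
    """Wavelength of the carrier combination i*B1 + j*B2 + k*B3."""
    return C / (i * FB1 + j * FB2 + k * FB3)

# An extra-wide lane (B3 - B2) has a wavelength of roughly 4.9 m, so
# its integer ambiguity is far easier to fix than that of the ~0.19 m
# B1 carrier; a wide lane (B1 - B2) sits in between at roughly 0.85 m.
ewl = combo_wavelength(0, -1, 1)   # extra-wide lane, m
wl = combo_wavelength(1, -1, 0)    # wide lane, m
```

The longer the combined wavelength, the more measurement noise a single-epoch ambiguity fix can tolerate, which is why the EWL-first strategy achieves such a high fixing rate.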
Automatic multi-organ segmentation using learning-based segmentation and level set optimization.
Kohlberger, Timo; Sofka, Michal; Zhang, Jingdan; Birkbeck, Neil; Wetzl, Jens; Kaftan, Jens; Declerck, Jérôme; Zhou, S Kevin
2011-01-01
We present a novel generic segmentation system for the fully automatic multi-organ segmentation of CT medical images. We combine the advantages of learning-based approaches on a point cloud-based shape representation, such as speed, robustness and point correspondences, with those of PDE-optimization-based level set approaches, such as high accuracy and the straightforward prevention of segment overlaps. In a benchmark on 10-100 annotated datasets for the liver, the lungs, and the kidneys, we show that the proposed system yields segmentation accuracies of 1.17-2.89 mm average surface error. The level set segmentation (which is initialized by the learning-based segmentations) contributes a 20%-40% increase in accuracy.
ERIC Educational Resources Information Center
Faes, Jolien; Gillis, Joris; Gillis, Steven
2017-01-01
The frequency of occurrence of words and sounds has a pervasive influence on typically developing children's language acquisition. For instance, highly frequent words appear earliest in a child's lexicon, and highly frequent phonemes are produced more accurately. This study evaluates (a) whether word frequency influences word accuracy and (b)…
Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis
Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.
2015-01-01
Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505
The effect of letter string length and report condition on letter recognition accuracy.
Raghunandan, Avesh; Karmazinaite, Berta; Rossow, Andrea S
Letter sequence recognition accuracy has been postulated to be limited primarily by low-level visual factors. The influence of high-level factors such as visual memory (load and decay) has been largely overlooked. This study provides insight into the role of these factors by investigating the interaction between letter sequence recognition accuracy, letter string length and report condition. Letter sequence recognition accuracy for trigrams and pentagrams was measured in 10 adult subjects for two report conditions. In the complete report condition, subjects reported all 3 or all 5 letters comprising trigrams and pentagrams, respectively. In the partial report condition, subjects reported only a single letter in the trigram or pentagram. Letters were presented for 100 ms and rendered in high contrast, using black lowercase Courier font that subtended 0.4° at the fixation distance of 0.57 m. Letter sequence recognition accuracy was consistently higher for trigrams compared to pentagrams, especially for letter positions away from fixation. While partial report increased recognition accuracy in both string length conditions, the effect was larger for pentagrams, and most evident for the final letter positions within trigrams and pentagrams. The effect of partial report on recognition accuracy for the final letter positions increased with eccentricity away from fixation, and was independent of the inner/outer position of a letter. Higher-level visual memory functions (memory load and decay) play a role in letter sequence recognition accuracy. There is also a suggestion of additional delays imposed on memory encoding by crowded letter elements. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
Thematic accuracy of the National Land Cover Database (NLCD) 2001 land cover for Alaska
Selkowitz, D.J.; Stehman, S.V.
2011-01-01
The National Land Cover Database (NLCD) 2001 Alaska land cover classification is the first 30-m resolution land cover product available covering the entire state of Alaska. The accuracy assessment of the NLCD 2001 Alaska land cover classification employed a geographically stratified three-stage sampling design to select the reference sample of pixels. Reference land cover class labels were determined via fixed wing aircraft, as the high resolution imagery used for determining the reference land cover classification in the conterminous U.S. was not available for most of Alaska. Overall thematic accuracy for the Alaska NLCD was 76.2% (s.e. 2.8%) at Level II (12 classes evaluated) and 83.9% (s.e. 2.1%) at Level I (6 classes evaluated) when agreement was defined as a match between the map class and either the primary or alternate reference class label. When agreement was defined as a match between the map class and primary reference label only, overall accuracy was 59.4% at Level II and 69.3% at Level I. The majority of classification errors occurred at Level I of the classification hierarchy (i.e., misclassifications were generally to a different Level I class, not to a Level II class within the same Level I class). Classification accuracy was higher for more abundant land cover classes and for pixels located in the interior of homogeneous land cover patches. © 2011.
NASA Astrophysics Data System (ADS)
Hu, Zhan; Lenting, Walther; van der Wal, Daphne; Bouma, Tjeerd
2015-04-01
Tidal flat morphology is continuously shaped by hydrodynamic force, resulting in highly dynamic bed elevations. Knowledge of short-term bed-level changes is important both for understanding sediment transport processes and for assessing critical ecological processes such as vegetation recruitment on tidal flats. Due to the labour involved, manual discontinuous measurements lack the ability to continuously monitor bed-elevation changes. Existing methods for automated continuous monitoring of bed-level changes lack vertical accuracy (e.g., the Photo-Electronic Erosion Pin sensor and the resistive rod) or are limited in spatial application by expensive technology (e.g., acoustic bed-level sensors). A method that provides sufficient accuracy at a reasonable cost is needed. In light of this, a high-accuracy sensor (2 mm) for continuously measuring short-term Surface-Elevation Dynamics (SED-sensor) was developed. This SED-sensor makes use of photovoltaic cells and operates stand-alone using an internal power supply and data logging system. The unit cost and the labour in deployments are therefore reduced, which facilitates monitoring with a number of units. In this study, the performance of a group of SED-sensors is tested against data obtained with precise manual measurements using traditional Sediment Erosion Bars (SEB). An excellent agreement between the two methods was obtained, indicating the accuracy and precision of the SED-sensors. Furthermore, to demonstrate how the SED-sensors can be used for measuring short-term bed-level dynamics, two SED-sensors were deployed for 1 month at two sites with contrasting wave exposure conditions. Daily bed-level changes were obtained, including a severe storm erosion event. The difference in observed bed-level dynamics at the two sites was statistically explained by their different hydrodynamic conditions.
Thus, the stand-alone SED-sensor can be applied to monitor sediment surface dynamics with high vertical and temporal resolutions, which provides opportunities to pinpoint morphological responses to various forces in a number of environments (e.g. tidal flats, beaches, rivers and dunes).
Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.
Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang
2016-06-22
An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over five days of inertial navigation can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
Autonomous Navigation of the SSTI/Lewis Spacecraft Using the Global Positioning System (GPS)
NASA Technical Reports Server (NTRS)
Hart, R. C.; Long, A. C.; Lee, T.
1997-01-01
The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) is pursuing the application of Global Positioning System (GPS) technology to improve the accuracy and economy of spacecraft navigation. High-accuracy autonomous navigation algorithms are being flight qualified in conjunction with GSFC's GPS Attitude Determination Flyer (GADFLY) experiment on the Small Satellite Technology Initiative (SSTI) Lewis spacecraft, which is scheduled for launch in 1997. Preflight performance assessments indicate that these algorithms can provide a real-time total position accuracy of better than 10 meters (1 sigma) and velocity accuracy of better than 0.01 meter per second (1 sigma), with selective availability at typical levels. This accuracy is projected to improve to the 2-meter level if corrections to be provided by the GPS Wide Area Augmentation System (WAAS) are included.
ERIC Educational Resources Information Center
Kelly, Robert H.
2017-01-01
Despite almost a century of research, there is little consensus among researchers and educators about the role of oral reading accuracy in beginning reading progress of struggling readers. Should, for example, students be given easy books to read with high levels of accuracy to promote early reading development or does reading hard texts with…
Erby, Lori A H; Roter, Debra L; Biesecker, Barbara B
2011-11-01
To explore the accuracy and consistency of standardized patient (SP) performance in the context of routine genetic counseling, focusing on elements beyond scripted case items, including general communication style and affective demeanor. One hundred seventy-seven genetic counselors were randomly assigned to counsel one of six SPs. Videotapes and transcripts of the sessions were analyzed to assess consistency of performance across four dimensions. Accuracy of script item presentation was high: 91% and 89% in the prenatal and cancer cases, respectively. However, there were statistically significant differences among SPs in the accuracy of presentation, general communication style, and some aspects of affective presentation. All SPs were rated as presenting with similarly high levels of realism. SP performance over time was generally consistent, with some small but statistically significant differences. These findings demonstrate that well-trained SPs can not only perform the factual elements of a case with high degrees of accuracy and realism, but can also maintain sufficient levels of uniformity in general communication style and affective demeanor over time to support their use in even the demanding context of genetic counseling. Results indicate a need for an additional training focus on consistency between different SPs. Copyright © 2010. Published by Elsevier Ireland Ltd.
Application of Template Matching for Improving Classification of Urban Railroad Point Clouds
Arastounia, Mostafa; Oude Elberink, Sander
2016-01-01
This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometrical information (three dimensional (3D) coordinates of the points) with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy at the point cloud level are achieved. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stems from the great impact of the employed template matching method on excluding the false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452
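The point-cloud-level precision and accuracy figures quoted above are standard confusion-matrix metrics. A minimal sketch with invented counts (not the paper's data) shows how they are computed for one class such as rail track:

```python
# Precision = fraction of points labelled as the class that truly belong
# to it; accuracy = fraction of all points labelled correctly.

def precision(tp, fp):
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative counts for one class: tp correctly labelled rail points,
# fp non-rail points mislabelled as rail, fn rail points missed,
# tn all remaining points labelled correctly.
tp, fp, fn, tn = 9700, 300, 230, 89770
print(round(precision(tp, fp), 2))        # -> 0.97
print(round(accuracy(tp, tn, fp, fn), 4)) # -> 0.9947
```

Template matching raises precision precisely by suppressing the false-positive term in the denominator, which matches the paper's explanation for the rail-track scores.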
High accuracy transit photometry of the planet OGLE-TR-113b with a new deconvolution-based method
NASA Astrophysics Data System (ADS)
Gillon, M.; Pont, F.; Moutou, C.; Bouchy, F.; Courbin, F.; Sohy, S.; Magain, P.
2006-11-01
A high accuracy photometry algorithm is needed to take full advantage of the potential of the transit method for the characterization of exoplanets, especially in deep crowded fields. It has to reduce to the lowest possible level the negative influence of systematic effects on the photometric accuracy. It should also be able to cope with a high level of crowding and with large-scale variations of the spatial resolution from one image to another. A recent deconvolution-based photometry algorithm fulfills all these requirements, and it also increases the resolution of astronomical images, which is an important advantage for the detection of blends and the discrimination of false positives in transit photometry. We made some changes to this algorithm to optimize it for transit photometry and used it to reduce NTT/SUSI2 observations of two transits of OGLE-TR-113b. This reduction has led to two very high precision transit light curves with a low level of systematic residuals, used together with former photometric and spectroscopic measurements to derive new stellar and planetary parameters in excellent agreement with previous ones, but significantly more precise.
Region based Brain Computer Interface for a home control application.
Akman Aydin, Eda; Bay, Omer Faruk; Guler, Inan
2015-08-01
Environment control is one of the important challenges for disabled people who suffer from neuromuscular diseases. A Brain Computer Interface (BCI) provides a communication channel between the human brain and the environment without requiring any muscular activation. The most important requirements for a home control application are high accuracy and reliable control. The region-based paradigm is a stimulus paradigm based on the oddball principle that requires selection of a target at two levels. This paper presents an application of the region-based paradigm to a smart home control application for people with neuromuscular diseases. In this study, a region-based stimulus interface containing 49 commands was designed. Five non-disabled subjects participated in the experiments. Offline analysis of the experiments yielded 95% accuracy for five flashes. This result showed that the region-based paradigm can successfully be used to select commands of a smart home control application with high accuracy at a low number of repetitions. Furthermore, no statistically significant difference was observed between the accuracies of the two selection levels.
Li, Youfang; Wang, Yumiao; Zhang, Renzhong; Wang, Jue; Li, Zhiqing; Wang, Ling; Pan, Songfeng; Yang, Yanling; Ma, Yanling; Jia, Manhong
2016-01-01
To understand the accuracy of oral fluid-based rapid HIV self-testing among men who have sex with men (MSM) and related factors. A survey was conducted among MSM selected through non-probability sampling to evaluate the quality of their rapid HIV self-testing, and the related information was analyzed. Most of the MSM were aged 21-30 years (57.0%). Among them, 45.7% had an educational level of college or above, 78.5% were unmarried, and 59.3% were casual laborers. The overall accuracy rate of oral fluid-based self-testing was 95.0%; the step "inserting the test paper into the tube as indicated by the arrow on it" had the highest accuracy rate (98.0%), and the step "gently inverting the tube 3 times" had the lowest accuracy rate (65.0%). Chi-square analysis showed that educational level, not touching the middle part of the test paper, reading the instructions carefully, understanding the instructions, and inserting the test paper into the tube as indicated by the arrow on it were associated with the accuracy of oral fluid-based rapid HIV self-testing (P<0.05). Multivariate logistic regression analysis indicated that educational level, not touching the middle part of the test paper, and understanding the instructions were associated with the accuracy of oral fluid-based rapid HIV self-testing. The accuracy of oral fluid-based rapid HIV self-testing was high among MSM, and the accuracy varied with the educational level of the MSM. Whether the middle part of the test paper was touched and whether the instructions were understood might also influence the accuracy of the self-testing.
Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus
2016-01-01
The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093
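Simulating a lower radiation dose by adding noise, as described above, commonly relies on quantum noise variance scaling roughly as the inverse of dose: an image acquired at dose d1 can mimic dose d2 < d1 by adding zero-mean noise of variance sigma1²·(d1/d2 − 1). The sketch below uses flat-spectrum Gaussian noise for clarity; the study's actual simulation method is more elaborate (e.g., spectrally shaped noise), so treat this only as the dose-scaling idea:

```python
import math
import random

# Add zero-mean Gaussian noise so that an image with noise sd sigma1
# (acquired at dose d1) has the total noise expected at lower dose d2.
# Variances add, so the extra sd is sigma1 * sqrt(d1/d2 - 1).

def simulate_lower_dose(pixels, sigma1, d1, d2, rng):
    extra_sd = sigma1 * math.sqrt(d1 / d2 - 1.0)
    return [p + rng.gauss(0.0, extra_sd) for p in pixels]

rng = random.Random(1)
img = [100.0] * 10000                 # flat test "image" for clarity
# halving the effective dose from 0.12 to 0.06 mSv doubles the noise variance
low = simulate_lower_dose(img, sigma1=5.0, d1=0.12, d2=0.06, rng=rng)
# the added noise sd should be 5 * sqrt(0.12/0.06 - 1) = 5.0
```

Halving the dose (0.12 to 0.06 mSv) thus doubles the noise variance, consistent with the dose levels compared in the study.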
10 CFR 72.11 - Completeness and accuracy of information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C...
10 CFR 72.11 - Completeness and accuracy of information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C...
10 CFR 72.11 - Completeness and accuracy of information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C...
10 CFR 72.11 - Completeness and accuracy of information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C...
10 CFR 72.11 - Completeness and accuracy of information.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C...
Meijer, Willemien A; Van Gerven, Pascal W; de Groot, Renate H; Van Boxtel, Martin P; Jolles, Jelle
2007-10-01
The aim of the present study was to examine whether deeper processing of words during encoding in middle-aged adults leads to a smaller increase in word-learning performance and a smaller decrease in retrieval effort than in young adults. It was also assessed whether high education attenuates age-related differences in performance. Accuracy of recall and recognition, and reaction times of recognition, after performing incidental and intentional learning tasks were compared between 40 young (aged 25-35) and 40 middle-aged (aged 50-60) adults with low and high educational levels. Age differences in recall increased with depth of processing, whereas age differences in accuracy and reaction times of recognition did not differ across levels. High education did not moderate age-related differences in performance. These findings suggest a smaller benefit of deep processing in middle age, when no retrieval cues are available.
Memorizing: a test of untrained mildly mentally retarded children's problem-solving.
Belmont, J M; Ferretti, R P; Mitchell, D W
1982-09-01
Forty untrained mildly mentally retarded and 32 untrained nonretarded junior high school students were given eight trials of practice on a self-paced memory problem with lists of letters or words. For each trial a new list was presented, requiring ordered recall of terminal list items followed by ordered recall of initial items. Subgroups of solvers and nonsolvers were identified at each IQ level by a criterion of strict recall accuracy. Direct measures of mnemonic activity showed that over trials, solvers at both IQ levels increasingly fit a theoretically ideal memorization method. At neither IQ level did nonsolvers show similar inventions. On early trials, for both IQ levels, fit to the ideal method was uncorrelated with recall accuracy. On late trials fit and recall were highly correlated at each IQ level and across levels. The results support a problem-solving theory of individual differences in retarded and nonretarded children's memory performances.
Dy, Christopher J; Taylor, Samuel A; Patel, Ronak M; Kitay, Alison; Roberts, Timothy R; Daluiski, Aaron
2012-09-01
Recent emphasis on shared decision making and patient-centered research has increased the importance of patient education and health literacy. The internet is rapidly growing as a source of self-education for patients. However, concern exists over the quality, accuracy, and readability of the information. Our objective was to determine whether the quality, accuracy, and readability of information online about distal radius fractures vary with the search term. This was a prospective evaluation of 3 search engines using 3 different search terms of varying sophistication ("distal radius fracture," "wrist fracture," and "broken wrist"). We evaluated 70 unique Web sites for quality, accuracy, and readability. We used comparative statistics to determine whether the search term affected the quality, accuracy, and readability of the Web sites found. Three orthopedic surgeons independently gauged quality and accuracy of information using a set of predetermined scoring criteria. We evaluated the readability of each Web site using the Flesch-Kincaid score for reading grade level. There were significant differences in the quality, accuracy, and readability of information found, depending on the search term. We found higher quality and accuracy resulted from the search term "distal radius fracture," particularly compared with Web sites resulting from the term "broken wrist." The reading level was higher than recommended in 65 of the 70 Web sites and was significantly higher when searching with "distal radius fracture" than "wrist fracture" or "broken wrist." There was no correlation between Web site reading level and quality or accuracy. The readability of information about distal radius fractures in most Web sites was higher than the recommended reading level for the general public. The quality and accuracy of the information found significantly varied with the sophistication of the search term used.
Physicians, professional societies, and search engines should consider efforts to improve internet access to high-quality information at an understandable level. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
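The Flesch-Kincaid grade level used in the study above is a published formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. The sketch below applies it with a crude vowel-group syllable counter (real readability tools use dictionary-based counts, so the absolute grades here are approximate); the sample texts are invented:

```python
import re

# Flesch-Kincaid grade level:
#   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59

def count_syllables(word):
    # Crude heuristic: each run of vowels counts as one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

simple = "The bone broke. It hurt a lot. The cast helped."
technical = ("Comminuted intraarticular distal radius fractures frequently "
             "necessitate operative fixation with volar locking plates.")
print(fk_grade(simple) < fk_grade(technical))  # -> True
```

Long sentences and polysyllabic medical vocabulary both drive the grade up, which is why sites written with clinical terminology tend to score above the reading level recommended for patient materials.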
Assessment of craniometric traits in South Indian dry skulls for sex determination.
Ramamoorthy, Balakrishnan; Pai, Mangala M; Prabhu, Latha V; Muralimanju, B V; Rai, Rajalakshmi
2016-01-01
The skeleton plays an important role in sex determination in forensic anthropology. The skull is considered second only to the pelvic bone for sex determination because of its better retention of morphological features. Different populations have varying skeletal characteristics, making population-specific analysis for sex determination essential. Hence, the objective of this investigation was to maximize the accuracy of sex determination using cranial parameters of adult skulls in a South Indian population and to provide baseline data for sex determination in South India. Seventy preserved adult human skulls were classified, based on morphological traits, into 43 male and 27 female skulls. A total of 26 craniometric parameters were studied. The data were analyzed using SPSS discriminant function analysis. Stepwise, multivariate, and univariate discriminant function analyses gave accuracies of 77.1%, 85.7%, and 72.9%, respectively. Multivariate direct discriminant function analysis classified skulls into male and female with the highest level of accuracy. Using stepwise discriminant function analysis, the most dimorphic variable for determining the sex of the skull was biauricular breadth, followed by weight. When the most dimorphic variables were subjected to univariate discriminant analysis, high levels of accuracy were obtained. The high classification accuracies obtained in this study indicate a high level of sexual dimorphism in the crania and yield specific discriminant equations for sex determination in South Indian people. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
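A univariate discriminant of the kind reported above can be sketched very simply: with roughly equal-variance groups, the cut-off is the midpoint of the two group means, and a skull is assigned to the nearer mean. The measurement values below are invented for illustration, not the study's data:

```python
# Midpoint-threshold univariate discriminant for a single craniometric
# variable (illustrative stand-in for the study's SPSS analysis).

def fit_cutoff(male_values, female_values):
    m = sum(male_values) / len(male_values)
    f = sum(female_values) / len(female_values)
    return (m + f) / 2.0, m > f        # cutoff, and whether males lie above it

def classify(value, cutoff, male_above):
    is_above = value > cutoff
    return "male" if is_above == male_above else "female"

males = [122, 125, 119, 127, 124]      # e.g. biauricular breadth, mm (invented)
females = [112, 110, 115, 111, 113]
cutoff, male_above = fit_cutoff(males, females)
labels = [classify(v, cutoff, male_above) for v in males + females]
correct = sum(lab == true for lab, true in zip(labels, ["male"] * 5 + ["female"] * 5))
print(correct / 10)  # -> 1.0 on this cleanly separable toy sample
```

Real cranial measurements overlap between the sexes, which is why the study's univariate accuracies sit near 73% rather than 100%; the multivariate analysis combines several such variables to push accuracy higher.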
Holcomb, H H; Ritzl, E K; Medoff, D R; Nevitt, J; Gordon, B; Tamminga, C A
1995-06-29
Psychophysical and cognitive studies carried out in schizophrenic patients show high within-group performance variance and sizable differences between patients and normal volunteers. Experimental manipulation of a target's signal-to-noise characteristics can, however, make a given task more or less difficult for a given subject. Such signal-to-noise manipulations can substantially reduce performance differences between individuals. Frequency and presentation level (volume) changes of an auditory tone can make a sound more or less difficult to recognize. This study determined how the discrimination accuracy of medicated schizophrenic patients and normal volunteers changed when the frequency difference between two tones (high frequency vs. low frequency) and the presentation levels of tones were systematically degraded. The investigators hypothesized that each group would become impaired in its discrimination accuracy when tone signals were degraded by making the frequencies more similar and the presentation levels lower. Schizophrenic patients were slower and less accurate than normal volunteers on tests using four tone levels and two frequency differences; the schizophrenic patient group showed a significant decrement in accuracy when the signal-to-noise characteristics of the target tones were degraded. The benefits of controlling stimulus discrimination difficulty in functional imaging paradigms are discussed.
Evaluating the Quality, Accuracy, and Readability of Online Resources Pertaining to Hallux Valgus.
Tartaglione, Jason P; Rosenbaum, Andrew J; Abousayed, Mostafa; Hushmendy, Shazaan F; DiPreta, John A
2016-02-01
The Internet is one of the most widely utilized resources for health-related information. Evaluation of the medical literature suggests that the quality and accuracy of these resources are poor and written at inappropriately high reading levels. The purpose of our study was to evaluate the quality, accuracy, and readability of online resources pertaining to hallux valgus. Two search terms ("hallux valgus" and "bunion") were entered into Google, Yahoo, and Bing. With the use of scoring criteria specific to hallux valgus, the quality and accuracy of online information related to hallux valgus was evaluated by 3 reviewers. The Flesch-Kincaid score was used to determine readability. Statistical analysis was performed with t tests and significance was determined by P values <.05. Sixty-two unique websites were evaluated. Quality was significantly higher with use of the search term "bunion" as compared to "hallux valgus" (P = .045). Quality and accuracy were significantly higher in resources authored by physicians as compared to nonphysicians (quality, P = .04; accuracy, P < .001) and websites without commercial bias (quality, P = .038; accuracy, P = .011). However, the reading level was significantly more advanced for websites authored by physicians (P = .035). Websites written above an eighth-grade reading level were significantly more accurate than those written at or below an eighth-grade reading level (P = .032). The overall quality of online information related to hallux valgus is poor and written at inappropriate reading levels. Furthermore, the search term used, authorship, and presence of commercial bias influence the value of these materials. It is important for orthopaedic surgeons to become familiar with patient education materials, so that appropriate recommendations can be made regarding valuable resources. Level IV. © 2015 The Author(s).
Barth, Amy E.; Barnes, Marcia; Francis, David J.; Vaughn, Sharon; York, Mary
2015-01-01
Separate mixed model analyses of variance (ANOVA) were conducted to examine the effect of textual distance on the accuracy and speed of text consistency judgments among adequate and struggling comprehenders across grades 6–12 (n = 1203). Multiple regressions examined whether accuracy in text consistency judgments uniquely accounted for variance in comprehension. Results suggest that there is considerable growth across the middle and high school years, particularly for adequate comprehenders in those text integration processes that maintain local coherence. Accuracy in text consistency judgments accounted for significant unique variance for passage-level, but not sentence-level comprehension, particularly for adequate comprehenders. PMID:26166946
10 CFR 63.10 - Completeness and accuracy of information.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A.... Notification must be provided to the Director of Nuclear Material Safety and Safeguards, U.S. Nuclear...
10 CFR 60.10 - Completeness and accuracy of information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN... provided to the Administrator of the appropriate Regional Office within two working days of identifying the...
10 CFR 60.10 - Completeness and accuracy of information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN... provided to the Administrator of the appropriate Regional Office within two working days of identifying the...
10 CFR 60.10 - Completeness and accuracy of information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN... provided to the Administrator of the appropriate Regional Office within two working days of identifying the...
10 CFR 63.10 - Completeness and accuracy of information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A.... Notification must be provided to the Director of Nuclear Material Safety and Safeguards, U.S. Nuclear...
10 CFR 63.10 - Completeness and accuracy of information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A.... Notification must be provided to the Director of Nuclear Material Safety and Safeguards, U.S. Nuclear...
10 CFR 63.10 - Completeness and accuracy of information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A.... Notification must be provided to the Director of Nuclear Material Safety and Safeguards, U.S. Nuclear...
10 CFR 60.10 - Completeness and accuracy of information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 60.10 Section 60.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN... provided to the Administrator of the appropriate Regional Office within two working days of identifying the...
10 CFR 63.10 - Completeness and accuracy of information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 63.10 Section 63.10 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN A.... Notification must be provided to the Director of Nuclear Material Safety and Safeguards, U.S. Nuclear...
Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David
2017-11-01
Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and modalities of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated that males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues than on auditory or haptic cues for pattern recognition when the exertion level is high. We found that navigation response time was shorter under low and medium exertion levels than under high-intensity exertion. Navigation accuracy was lower under high-level exertion than under medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted of the effect of physical exertion and information presentation modality on pattern recognition and navigation. In occupations requiring information presentation to workers who are simultaneously performing a physical task, the visual modality appears most effective under high-level exertion, while haptic cueing degrades performance.
Zhang, Dake; Stecker, Pamela; Huckabee, Sloan; Miller, Rhonda
2016-09-01
Research has suggested that different strategies used when solving fraction problems are highly correlated with students' problem-solving accuracy. This study (a) utilized latent profile modeling to classify students into three different strategic developmental levels in solving fraction comparison problems and (b) accordingly provided differentiated strategic training for students starting from two different strategic developmental levels. In Study 1 we assessed 49 middle school students' performance on fraction comparison problems and categorized students into three clusters of strategic developmental clusters: a cross-multiplication cluster with the highest accuracy, a representation strategy cluster with medium accuracy, and a whole-number strategy cluster with the lowest accuracy. Based on the strategic developmental levels identified in Study 1, in Study 2 we selected three students from the whole-number strategy cluster and another three students from the representation strategy cluster and implemented a differentiated strategic training intervention within a multiple-baseline design. Results showed that both groups of students transitioned from less advanced to more advanced strategies and improved their problem-solving accuracy during the posttest, the maintenance test, and the generalization test. © Hammill Institute on Disabilities 2014.
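The cross-multiplication strategy used by the highest-accuracy cluster has a direct arithmetic form: for positive denominators, a/b < c/d exactly when a·d < c·b. This avoids both common-denominator conversion and the error-prone whole-number reasoning of the lowest-accuracy cluster:

```python
# Cross-multiplication comparison of two fractions a/b and c/d (b, d > 0).
# Exact with integer arithmetic: no rounding, no common denominators.

def compare_fractions(a, b, c, d):
    """Return '<', '>', or '=' for a/b versus c/d."""
    left, right = a * d, c * b
    if left < right:
        return "<"
    if left > right:
        return ">"
    return "="

print(compare_fractions(3, 4, 5, 7))  # 3/4 vs 5/7: 21 vs 20 -> ">"
print(compare_fractions(1, 3, 2, 6))  # 1/3 vs 2/6: 6 vs 6 -> "="
```

A whole-number strategy would compare numerators or denominators in isolation (e.g. judging 5/7 larger than 3/4 because 5 > 3), which is exactly the misconception the intervention targets.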
Gesch, Dean
2007-01-01
The ready availability of high-resolution, high-accuracy elevation data proved valuable for development of topography-based products to determine rough estimates of the inundation of New Orleans, La., from Hurricane Katrina. Because of its high level of spatial detail and vertical accuracy of elevation measurements, light detection and ranging (lidar) remote sensing is an excellent mapping technology for use in low-relief, hurricane-prone coastal areas.
McRobert, Allistair Paul; Causer, Joe; Vassiliadis, John; Watterson, Leonie; Kwan, James; Williams, Mark A
2013-06-01
It is well documented that adaptations in cognitive processes with increasing skill levels support decision making in multiple domains. We examined skill-based differences in cognitive processes in emergency medicine physicians, and whether performance was significantly influenced by the removal of contextual information related to a patient's medical history. Skilled (n=9) and less skilled (n=9) emergency medicine physicians responded to high-fidelity simulated scenarios under high- and low-context information conditions. Skilled physicians demonstrated higher diagnostic accuracy irrespective of condition, and were less affected by the removal of context-specific information compared with less skilled physicians. The skilled physicians generated more options, and selected better quality options during diagnostic reasoning compared with less skilled counterparts. These cognitive processes were active irrespective of the level of context-specific information presented, although high-context information enhanced understanding of the patients' symptoms resulting in higher diagnostic accuracy. Our findings have implications for scenario design and the manipulation of contextual information during simulation training.
Blob-level active-passive data fusion for Benthic classification
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Kalluri, Hemanth; Mathur, Abhinav; Ramnath, Vinod; Kim, Minsu; Aitken, Jennifer; Tuell, Grady
2012-06-01
We extend data fusion from the pixel level to the more semantically meaningful blob level, using the mean-shift algorithm to form labeled blobs having high similarity in the feature domain and connectivity in the spatial domain. We have also developed Bhattacharyya Distance (BD) and rule-based classifiers, and have implemented these higher-level data fusion algorithms into the CZMIL Data Processing System. Applying these new algorithms to recent SHOALS and CASI data at Plymouth Harbor, Massachusetts, we achieved improved benthic classification accuracies over those produced with either single-sensor or pixel-level fusion strategies. These results appear to validate the hypothesis that classification accuracy may be generally improved by adopting higher spatial and semantic levels of fusion.
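The Bhattacharyya-distance classification step can be sketched in a few lines. The following is a minimal illustration with hypothetical one-dimensional Gaussian class statistics, not the CZMIL implementation; class names and feature values are invented:

```python
import numpy as np

def bhattacharyya_distance(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two 1-D Gaussian feature distributions."""
    term1 = 0.25 * np.log(0.25 * (var1 / var2 + var2 / var1 + 2.0))
    term2 = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    return term1 + term2

def classify_blob(blob_mu, blob_var, class_stats):
    """Assign a blob to the class with the smallest Bhattacharyya distance."""
    return min(class_stats,
               key=lambda c: bhattacharyya_distance(blob_mu, blob_var, *class_stats[c]))

# Hypothetical per-class (mean, variance) of one fused active-passive feature
classes = {"sand": (0.8, 0.02), "seagrass": (0.3, 0.05)}
print(classify_blob(0.75, 0.03, classes))  # prints "sand"
```

In a real pipeline the blob statistics would come from the mean-shift segmentation, and the distance would be computed over a multivariate fused feature vector rather than a single scalar.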
Digital versus conventional implant impressions for edentulous patients: accuracy outcomes.
Papaspyridakos, Panos; Gallucci, German O; Chen, Chun-Jung; Hanssen, Stijn; Naert, Ignace; Vandenberghe, Bart
2016-04-01
To compare the accuracy of digital and conventional impression techniques for completely edentulous patients and to determine the effect of different variables on the accuracy outcomes. A stone cast of an edentulous mandible with five implants was fabricated to serve as master cast (control) for both implant- and abutment-level impressions. Digital impressions (n = 10) were taken with an intraoral optical scanner (TRIOS, 3shape, Denmark) after connecting polymer scan bodies. For the conventional polyether impressions of the master cast, a splinted and a non-splinted technique were used for implant-level and abutment-level impressions (4 cast groups, n = 10 each). Master casts and conventional impression casts were digitized with an extraoral high-resolution scanner (IScan D103i, Imetric, Courgenay, Switzerland) to obtain digital volumes. Standard tessellation language (STL) datasets from the five groups of digital and conventional impressions were superimposed with the STL dataset from the master cast to assess the 3D (global) deviations. To compare the master cast with digital and conventional impressions at the implant level, analysis of variance (ANOVA) and Scheffe's post hoc test was used, while Wilcoxon's rank-sum test was used for testing the difference between abutment-level conventional impressions. Significant 3D deviations (P < 0.001) were found between Group II (non-splinted, implant level) and control. No significant differences were found between Groups I (splinted, implant level), III (digital, implant level), IV (splinted, abutment level), and V (non-splinted, abutment level) compared with the control. Implant angulation up to 15° did not affect the 3D accuracy of implant impressions (P > 0.001). Digital implant impressions are as accurate as conventional implant impressions. 
The splinted, implant-level impression technique is more accurate than the non-splinted one for completely edentulous patients, whereas there was no difference in the accuracy at the abutment level. The implant angulation up to 15° did not affect the accuracy of implant impressions. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Papaspyridakos, Panos; Hirayama, Hiroshi; Chen, Chun-Jung; Ho, Chung-Han; Chronopoulos, Vasilios; Weber, Hans-Peter
2016-09-01
The aim of this study was to assess the effect of connection type and impression technique on the accuracy of fit of implant-supported fixed complete-arch dental prostheses (IFCDPs). An edentulous mandibular cast with five implants was fabricated to serve as master cast (control) for both implant- and abutment-level baselines. A titanium one-piece framework for an IFCDP was milled at abutment level and used for accuracy of fit measurements. Polyether impressions were made using a splinted and non-splinted technique at the implant and abutment level leading to four test groups, n = 10 each. Hence, four groups of test casts were generated. The impression accuracy was evaluated indirectly by assessing the fit of the IFCDP framework on the generated casts of the test groups, clinically and radiographically. Additionally, the control and all test casts were digitized with a high-resolution reference scanner (IScan D103i, Imetric, Courgenay, Switzerland) and standard tessellation language datasets were generated and superimposed. Potential correlations between the clinical accuracy of fit data and the data from the digital scanning were investigated. To compare the accuracy of casts of the test groups versus the control at the implant and abutment level, Fisher's exact test was used. Of the 10 casts of test group I (implant-level splint), all 10 presented with accurate clinical fit when the framework was seated on its respective cast, while only five of 10 casts of test group II (implant-level non-splint) showed adequate fit. All casts of group III (abutment-level splint) presented with accurate fit, whereas nine of 10 of the casts of test group IV (abutment-level non-splint) were accurate. Significant 3D deviations (P < 0.05) were found between group II and the control. No statistically significant differences were found between groups I, III, and IV compared with the control. Implant connection type (implant level vs. 
abutment level) and impression technique did affect the 3D accuracy of implant impressions only with the non-splint technique (P < 0.05). For one-piece IFCDPs, the implant-level splinted impression technique showed to be more accurate than the non-splinted approach, whereas at the abutment-level, no difference in the accuracy was found. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A new ultra-high-accuracy angle generator: current status and future direction
NASA Astrophysics Data System (ADS)
Guertin, Christian F.; Geckeler, Ralf D.
2017-09-01
The lack of an extremely high-accuracy angular positioning device in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state of the art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as a part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations for full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize propagation of calibration errors. Our initial feasibility research shows that upon scaling to a full prototype and including additional calibration techniques we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better and offer the immense advantage of a highly automatable and customizable product to the commercial market.
Rogers, Katherine H; Le, Marina T; Buckels, Erin E; Kim, Mikayla; Biesanz, Jeremy C
2018-02-19
The Dark Tetrad traits (subclinical psychopathy, narcissism, Machiavellianism, and everyday sadism) have interpersonal consequences. At present, however, how these traits are associated with the accuracy and positivity of first impressions is not well understood. The present article addresses three primary questions. First, to what extent are perceiver levels of Dark Tetrad traits associated with differing levels of perceptive accuracy? Second, to what extent are target levels of Dark Tetrad traits associated with differing levels of expressive accuracy? Finally, to what extent can Dark Tetrad traits be differentiated when examining perceptions of and by others? In a round-robin design, undergraduate participants (N = 412) in small groups engaged in brief, naturalistic, unstructured dyadic interactions before providing impressions of their partner. Dark Tetrad traits were associated with being viewed and viewing others less distinctively accurately and more negatively. Interpersonal perceptions that included an individual scoring highly on one of the Dark Tetrad traits differed in important ways from interactions among individuals with more benevolent personalities. Notably, despite the similarities among the Dark Tetrad traits, each trait had unique associations with interpersonal perceptions. © 2018 Wiley Periodicals, Inc.
Navigating highly elliptical earth orbiters with simultaneous VLBI from orthogonal baseline pairs
NASA Technical Reports Server (NTRS)
Frauenholz, Raymond B.
1986-01-01
Navigation strategies for determining highly elliptical orbits with VLBI are described. The predicted performance of wideband VLBI and Delta VLBI measurements obtained by orthogonal baseline pairs is compared for a 16-hr equatorial orbit. It is observed that the one-sigma apogee position accuracy improves two orders of magnitude to the meter level when Delta VLBI measurements are added to coherent Doppler and range, and the simpler VLBI strategy provides nearly the same orbit accuracy. The effects of differential measurement noise and acquisition geometry on orbit accuracy are investigated. The data reveal that quasar position uncertainty limits the accuracy of wideband Delta VLBI measurements, and that polar motion and baseline uncertainties and offsets between station clocks affect the wideband VLBI data. It is noted that differential one-way range (DOR) has performance nearly equal to that of the more complex Delta DOR and is recommended for use on spacecraft in highly elliptical orbits.
High accuracy wavelength calibration for a scanning visible spectrometer.
Scotti, Filippo; Bell, Ronald E
2010-10-01
Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
El Shayeb, Mohamed; Topfer, Leigh-Ann; Stafinski, Tania; Pawluk, Lawrence; Menon, Devidas
2014-01-01
Background: Greater awareness of sleep-disordered breathing and rising obesity rates have fueled demand for sleep studies. Sleep testing using level 3 portable devices may expedite diagnosis and reduce the costs associated with level 1 in-laboratory polysomnography. We sought to assess the diagnostic accuracy of level 3 testing compared with level 1 testing and to identify the appropriate patient population for each test. Methods: We conducted a systematic review and meta-analysis of comparative studies of level 3 versus level 1 sleep tests in adults with suspected sleep-disordered breathing. We searched 3 research databases and grey literature sources for studies that reported on diagnostic accuracy parameters or disease management after diagnosis. Two reviewers screened the search results, selected potentially relevant studies and extracted data. We used a bivariate mixed-effects binary regression model to estimate summary diagnostic accuracy parameters. Results: We included 59 studies involving a total of 5026 evaluable patients (mostly patients suspected of having obstructive sleep apnea). Of these, 19 studies were included in the meta-analysis. The estimated area under the receiver operating characteristics curve was high, ranging between 0.85 and 0.99 across different levels of disease severity. Summary sensitivity ranged between 0.79 and 0.97, and summary specificity ranged between 0.60 and 0.93 across different apnea–hypopnea cut-offs. We saw no significant difference in the clinical management parameters between patients who underwent either test to receive their diagnosis. Interpretation: Level 3 portable devices showed good diagnostic performance compared with level 1 sleep tests in adult patients with a high pretest probability of moderate to severe obstructive sleep apnea and no unstable comorbidities. 
For patients suspected of having other types of sleep-disordered breathing or sleep disorders not related to breathing, level 1 testing remains the reference standard. PMID:24218531
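Summary sensitivity and specificity of the kind reported above are derived from 2×2 tables of index-test results against the level 1 reference standard. A minimal sketch with hypothetical counts (not data from the meta-analysis):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table (index test vs. reference)."""
    sensitivity = tp / (tp + fn)   # true positives among the diseased
    specificity = tn / (tn + fp)   # true negatives among the non-diseased
    return sensitivity, specificity

# Hypothetical counts: level 3 device vs. level 1 polysomnography at one cut-off
sens, spec = diagnostic_accuracy(tp=85, fp=10, fn=15, tn=40)
print(sens, spec)  # 0.85 0.8
```

The bivariate mixed-effects model used in the study pools such per-study pairs while modeling their correlation; the function above is only the per-study building block.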
High Density Lidar and Orthophotography in UXO Wide Area Assessment (Rev 1)
2007-08-01
The demonstration provided important insights regarding the appropriate uses and confidence levels for both lidar and orthophoto technologies.
Effects of high altitude and exercise on marksmanship.
Tharion, W J; Hoyt, R W; Marlowe, B E; Cymerman, A
1992-02-01
The effects of exercise and high altitude (3,700 m to 4,300 m) on marksmanship accuracy and sighting time were quantified in 16 experienced marksmen. Subjects dry-fired a disabled rifle equipped with a laser-based system from a free-standing position. The 2.3-cm circular target was at a distance of 5 m. Marksmanship was assessed under the following conditions: 1) at rest at sea level; 2) immediately after a 21-km run/walk ascent from 1,800 m to 4,300 m elevation; 3) at rest during days 1 to 3 at altitude; 4) at rest during days 14 to 16 at altitude; and 5) immediately after a second ascent after 17 d at altitude. Exercise reduced marksmanship accuracy (p < 0.05) but did not affect sighting time. Acute altitude exposure reduced marksmanship accuracy and decreased sighting time (p < 0.05). However, after residence at altitude, accuracy and sighting time at rest returned to sea level values. Exercise and acute altitude exposure had similar but independent detrimental effects on marksmanship.
ERIC Educational Resources Information Center
Koolen, Sophieke; Vissers, Constance Th. W. M.; Hendriks, Angelique W. C. J.; Egger, Jos I. M.; Verhoeven, Ludo
2012-01-01
This study examined the hypothesis of an atypical interaction between attention and language in ASD. A dual-task experiment with three conditions was designed, in which sentences were presented that contained errors requiring attentional focus either at (a) low level, or (b) high level, or (c) both levels of language. Speed and accuracy for error…
Vehicle logo recognition using multi-level fusion model
NASA Astrophysics Data System (ADS)
Ming, Wei; Xiao, Jianli
2018-04-01
Vehicle logo recognition plays an important role in manufacturer identification and vehicle recognition. This paper proposes a new vehicle logo recognition algorithm. It has a hierarchical framework, which consists of two fusion levels. At the first level, a feature fusion model is employed to map the original features to a higher-dimensional feature space. In this space, the vehicle logos become more recognizable. At the second level, a weighted voting strategy is proposed to improve the accuracy and robustness of the recognition results. To evaluate the performance of the proposed algorithm, extensive experiments are performed, which demonstrate that the proposed algorithm can achieve high recognition accuracy and work robustly.
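The second-level weighted voting step can be illustrated as follows. The labels and confidence weights below are hypothetical, and the paper's actual weighting scheme may differ; this is only a sketch of the general technique:

```python
from collections import defaultdict

def weighted_vote(predictions):
    """Combine classifier outputs; each entry is a (predicted_label, weight) pair.
    The label with the largest total weight wins."""
    scores = defaultdict(float)
    for label, weight in predictions:
        scores[label] += weight
    return max(scores, key=scores.get)

# Hypothetical outputs of three base classifiers with confidence weights
print(weighted_vote([("toyota", 0.9), ("honda", 0.4), ("toyota", 0.3)]))  # prints "toyota"
```

Weighting votes by classifier confidence makes the ensemble robust to any single weak classifier, which is the robustness property the abstract claims for the second fusion level.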
Leidinger, Petra; Keller, Andreas; Milchram, Lisa; Harz, Christian; Hart, Martin; Werth, Angelika; Lenhof, Hans-Peter; Weinhäusel, Andreas; Keck, Bastian; Wullich, Bernd; Ludwig, Nicole; Meese, Eckart
2015-01-01
Although an increased level of prostate-specific antigen (PSA) can be an indication of prostate cancer, other causes often lead to a high rate of false positive results. Therefore, an additional serological screening of autoantibodies in patients' sera could improve the detection of prostate cancer. We performed protein macroarray screening with sera from 49 prostate cancer patients, 70 patients with benign prostatic hyperplasia and 28 healthy controls and compared the autoimmune response in those groups. We were able to distinguish prostate cancer patients from normal controls with an accuracy of 83.2%, patients with benign prostatic hyperplasia from normal controls with an accuracy of 86.0% and prostate cancer patients from patients with benign prostatic hyperplasia with an accuracy of 70.3%. Combining the seroreactivity pattern with a PSA level higher than 4.0 ng/ml improved this classification to an accuracy of 84.1%. For selected proteins, we confirmed the differential expression using a Luminex assay on 84 samples. We provide a minimally invasive serological method to reduce false positive results in the detection of prostate cancer and, as an adjunct to PSA screening, to distinguish men with prostate cancer from men with benign prostatic hyperplasia.
Papadopoulos, Emmanuel I; Petraki, Constantina; Gregorakis, Alkiviadis; Chra, Eleni; Fragoulis, Emmanuel G; Scorilas, Andreas
2015-06-01
Renal cell carcinoma (RCC) is the most frequent type of kidney cancer. RCC patients frequently present with arterial hypertension due to various causes, including intrarenal dopamine deficiency. L-DOPA decarboxylase (DDC) is the gene encoding the enzyme that catalyzes the biosynthesis of dopamine in humans. Several studies have shown that the expression levels of DDC are significantly deregulated in cancer. Thus, we herein sought to analyze the mRNA levels of DDC and evaluate their clinical significance in RCC. DDC levels were analyzed in 58 surgically resected RCC tumors and 44 adjacent non-cancerous renal tissue specimens via real-time PCR. Relative levels of DDC were estimated by applying the 2^(-ΔΔCT) method, while their diagnostic accuracy and correlation with the clinicopathological features of RCC tumors were assessed by comprehensive statistical analysis. DDC mRNA levels were found to be dramatically downregulated (p<0.001) in RCC tumors, exhibiting remarkable diagnostic accuracy as assessed by ROC curve analysis (AUC: 0.910; p<0.001) and logistic regression (OR: 0.678; p=0.001). Likewise, DDC was found to be differentially expressed between clear cell RCC and the group of non-clear cell subtypes (p=0.001) consisting of papillary and chromophobe RCC specimens. Furthermore, a statistically significant inverse correlation was also observed when the mRNA levels of DDC were analyzed in relation to tumor grade (p=0.049). Our data showed that DDC constitutes a highly promising molecular marker for RCC, exhibiting remarkable diagnostic accuracy and potential to discriminate between clear cell and non-clear cell histological subtypes of RCC. Copyright © 2015. Published by Elsevier Inc.
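The 2^(-ΔΔCT) relative-quantification calculation works as follows. This is a sketch with hypothetical Ct values; the gene and tissue labels are illustrative only, not values from the study:

```python
def relative_expression(ct_target_tumor, ct_ref_tumor, ct_target_normal, ct_ref_normal):
    """Relative mRNA level by the 2^(-delta-delta-Ct) method."""
    delta_ct_tumor = ct_target_tumor - ct_ref_tumor      # delta-Ct in tumor tissue
    delta_ct_normal = ct_target_normal - ct_ref_normal   # delta-Ct in adjacent tissue
    return 2.0 ** -(delta_ct_tumor - delta_ct_normal)    # 2^(-delta-delta-Ct)

# Hypothetical Ct values for a target vs. reference gene; higher Ct = less transcript
fold_change = relative_expression(30.0, 20.0, 26.0, 20.0)
print(fold_change)  # 0.0625, i.e. 16-fold downregulation in the tumor
```

A fold change below 1 indicates downregulation in the tumor relative to adjacent tissue, which is the direction of the DDC result reported above.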
NASA Astrophysics Data System (ADS)
Zhao, Weichen; Sun, Zhuo; Kong, Song
2016-10-01
Wireless devices can be identified by a fingerprint extracted from the transmitted signal, which is useful in wireless communication security and other fields. This paper presents a method that extracts a fingerprint based on the phase noise of the signal and multi-level wavelet decomposition. The phase of the signal is extracted first and then decomposed by multi-level wavelet decomposition. A statistic of each wavelet coefficient vector is used to construct the fingerprint. In addition, the relationship between wavelet decomposition level and recognition accuracy is simulated, and a recommended decomposition level is identified. Compared with previous methods, our method is simpler, and the recognition accuracy remains high when the signal-to-noise ratio (SNR) is low.
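The fingerprinting idea can be sketched with a plain Haar decomposition. This is a simplified stand-in: the paper's wavelet family and chosen statistic are not specified here, so Haar and the standard deviation are assumptions for illustration:

```python
import numpy as np

def haar_dwt_details(signal, levels):
    """Multi-level Haar wavelet decomposition; returns detail coefficients per level."""
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))  # detail (high-pass) coefficients
        approx = (even + odd) / np.sqrt(2.0)         # approximation for next level
    return details

def fingerprint(phase_signal, levels=3):
    """One statistic (here the std) per detail vector forms the device fingerprint."""
    return [float(np.std(d)) for d in haar_dwt_details(phase_signal, levels)]

phase = np.sin(np.linspace(0.0, 8.0 * np.pi, 64))  # stand-in for an extracted phase trace
print(fingerprint(phase))  # one number per decomposition level
```

Each level isolates phase-noise energy in a different frequency band, so the vector of per-level statistics discriminates between transmitters whose oscillators have different noise profiles.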
Simulation of fiber optic liquid level sensor demodulation system
NASA Astrophysics Data System (ADS)
Yi, Cong-qin; Luo, Yun; Zhang, Zheng-ping
Measuring liquid level with high accuracy is an urgent requirement. This paper focuses on the demodulation system of a fiber-optic liquid level sensor based on a Fabry-Perot cavity, and designs and simulates the demodulation system using single-chip simulation software.
High accuracy position response calibration method for a micro-channel plate ion detector
NASA Astrophysics Data System (ADS)
Hong, R.; Leredde, A.; Bagdasarova, Y.; Fléchard, X.; García, A.; Müller, P.; Knecht, A.; Liénard, E.; Kossin, M.; Sternberg, M. G.; Swanson, H. E.; Zumwalt, D. W.
2016-11-01
We have developed a position response calibration method for a micro-channel plate (MCP) detector with a delay-line anode position readout scheme. Using an in situ calibration mask, an accuracy of 8 μm and a resolution of 85 μm (FWHM) have been achieved for MeV-scale α particles and ions with energies of ∼10 keV. At this level of accuracy, the difference between the MCP position responses to high-energy α particles and low-energy ions is significant. The improved performance of the MCP detector can find applications in many fields of AMO and nuclear physics. In our case, it helps reduce systematic uncertainties in a high-precision nuclear β-decay experiment.
Gesch, Dean B.
2009-01-01
The importance of sea-level rise in shaping coastal landscapes is well recognized within the earth science community, but as with many natural hazards, communicating the risks associated with sea-level rise remains a challenge. Topography is a key parameter that influences many of the processes involved in coastal change, and thus, up-to-date, high-resolution, high-accuracy elevation data are required to model the coastal environment. Maps of areas subject to potential inundation have great utility to planners and managers concerned with the effects of sea-level rise. However, most of the maps produced to date are simplistic representations derived from older, coarse elevation data. In the last several years, vast amounts of high quality elevation data derived from lidar have become available. Because of their high vertical accuracy and spatial resolution, these lidar data are an excellent source of up-to-date information from which to improve identification and delineation of vulnerable lands. Four elevation datasets of varying resolution and accuracy were processed to demonstrate that the improved quality of lidar data leads to more precise delineation of coastal lands vulnerable to inundation. A key component of the comparison was to calculate and account for the vertical uncertainty of the elevation datasets. This comparison shows that lidar allows for a much more detailed delineation of the potential inundation zone when compared to other types of elevation models. It also shows how the certainty of the delineation of lands vulnerable to a given sea-level rise scenario is much improved when derived from higher resolution lidar data.
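Accounting for vertical uncertainty when delineating the potential inundation zone can be sketched as a thresholding operation on the elevation grid. The DEM values, RMSE, and confidence factor below are invented for illustration; the study's actual datasets and uncertainty treatment are more involved:

```python
import numpy as np

def inundation_zone(dem, sea_level_rise, rmse, confidence_z=1.96):
    """Flag cells vulnerable to a sea-level-rise scenario (in meters), padding
    the threshold by the DEM's vertical RMSE at a given confidence level."""
    threshold = sea_level_rise + confidence_z * rmse
    return dem <= threshold  # boolean mask of potentially inundated cells

# Hypothetical elevations (m) on a tiny grid, with a 1.0 m rise scenario
dem = np.array([[0.5, 1.2],
                [2.8, 0.9]])
print(inundation_zone(dem, sea_level_rise=1.0, rmse=0.15))
```

The point the abstract makes falls directly out of this sketch: a smaller RMSE (as with lidar) shrinks the uncertainty padding, so the delineated zone tracks the true scenario elevation much more closely.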
Kurasawa, Shintaro; Koyama, Shouhei; Ishizawa, Hiroaki; Fujimoto, Keisaku; Chino, Shun
2017-11-23
This paper describes and verifies a non-invasive blood glucose measurement method using a fiber Bragg grating (FBG) sensor system. The FBG sensor is installed on the radial artery, and the strain (pulse wave) that is propagated from the heartbeat is measured. The measured pulse wave signal was used as a collection of feature vectors for multivariate analysis aiming to determine the blood glucose level. The time axis of the pulse wave signal was normalized by two signal processing methods: the shortest-time-cut process and the 1-s-normalization process. The measurement accuracy of the calculated blood glucose level was compared between these two signal processing methods. It was impossible to calculate a blood glucose level exceeding 200 mg/dL with the calibration curve constructed by the shortest-time-cut process. With the 1-s-normalization process, the measurement accuracy of the blood glucose level was improved, and a blood glucose level exceeding 200 mg/dL could be calculated. By examining the loading vector of each calibration curve that calculated the blood glucose level with high measurement accuracy, we found that the gradient at the peak of the pulse wave in the acceleration plethysmogram greatly affected the result.
High-order time-marching reinitialization for regional level-set functions
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-02-01
In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function, together with a high-order two-step reinitialization method that combines the closest-point finding procedure and the HJ-WENO scheme. The convergence failure of the closest-point finding procedure in three dimensions is addressed by employing a proposed multiple-junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.
Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model
NASA Astrophysics Data System (ADS)
Ahlgren, K.; Li, X.
2017-12-01
Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. 
The accuracy of each geoid model provides an additional metric to assess the performance of each bias estimation method. The geoid model accuracies are assessed using the two GSVS lines and GPS-leveling data across the United States.
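At its simplest, a per-survey bias estimate of the kind discussed is the mean offset of a survey's observations from independent reference values; a toy sketch (the mGal values below are invented, and the real method weights by wavelet level rather than averaging directly):

```python
import numpy as np

def survey_bias(observed_mgal, reference_mgal):
    """Estimate a constant bias for one gravity survey as the mean residual
    against independent (satellite/airborne-derived) reference values."""
    residuals = np.asarray(observed_mgal) - np.asarray(reference_mgal)
    return float(np.mean(residuals))

obs = [981234.2, 981230.1, 981228.9]   # hypothetical survey observations (mGal)
ref = [981233.0, 981229.2, 981227.5]   # hypothetical wavelet-combined reference values
print(round(survey_bias(obs, ref), 2))  # 1.17
```

Subtracting the estimated bias from every observation in the survey is the correction step; the GSVS lines then serve as an independent, unbiased check on whether the estimate was right.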
Empathic Embarrassment Accuracy in Autism Spectrum Disorder.
Adler, Noga; Dvash, Jonathan; Shamay-Tsoory, Simone G
2015-06-01
Empathic accuracy refers to the ability of perceivers to accurately share the emotions of protagonists. Using a novel task assessing embarrassment, the current study sought to compare levels of empathic embarrassment accuracy among individuals with autism spectrum disorders (ASD) with those of matched controls. To assess empathic embarrassment accuracy, we compared the level of embarrassment experienced by protagonists to the embarrassment felt by participants while watching the protagonists. The results show that while the embarrassment ratings of participants and protagonists were highly matched among controls, individuals with ASD failed to exhibit this matching effect. Furthermore, individuals with ASD rated their embarrassment higher than controls when viewing themselves and protagonists on film, but not while performing the task itself. These findings suggest that individuals with ASD tend to have higher ratings of empathic embarrassment, perhaps due to difficulties in emotion regulation that may account for their impaired empathic accuracy and aberrant social behavior. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
High-density marker imputation accuracy in sixteen French cattle breeds.
Hozé, Chris; Fouilloux, Marie-Noëlle; Venot, Eric; Guillaume, François; Dassonneville, Romain; Fritz, Sébastien; Ducrocq, Vincent; Phocas, Florence; Boichard, Didier; Croiseau, Pascal
2013-09-03
High-density marker imputation accuracy in sixteen French cattle breeds
2013-01-01
Background Genotyping with the medium-density Bovine SNP50 BeadChip® (50K) is now standard in cattle. The high-density BovineHD BeadChip®, which contains 777 609 single nucleotide polymorphisms (SNPs), was developed in 2010. Increasing marker density increases the level of linkage disequilibrium between quantitative trait loci (QTL) and SNPs and the accuracy of QTL localization and genomic selection. However, re-genotyping all animals with the high-density chip is not economically feasible. An alternative strategy is to genotype part of the animals with the high-density chip and to impute high-density genotypes for animals already genotyped with the 50K chip. Thus, it is necessary to investigate the error rate when imputing from the 50K to the high-density chip. Methods Five thousand one hundred and fifty-three animals from 16 breeds (89 to 788 per breed) were genotyped with the high-density chip. Imputation error rates from the 50K to the high-density chip were computed for each breed with a validation set that included the 20% youngest animals. Marker genotypes were masked for animals in the validation population in order to mimic 50K genotypes. Imputation was carried out using the Beagle 3.3.0 software. Results Mean allele imputation error rates ranged from 0.31% to 2.41% depending on the breed. In total, 1980 SNPs had high imputation error rates in several breeds, probably due to genome assembly errors, and we recommend discarding them in future studies. Differences in imputation accuracy between breeds were related to the high-density-genotyped sample size and to the genetic relationship between reference and validation populations, whereas differences in effective population size and level of linkage disequilibrium showed limited effects. Accordingly, imputation accuracy was higher in breeds with large populations and in dairy breeds than in beef breeds.
More than 99% of the alleles were correctly imputed if more than 300 animals were genotyped at high-density. No improvement was observed when multi-breed imputation was performed. Conclusion In all breeds, imputation accuracy was higher than 97%, which indicates that imputation to the high-density chip was accurate. Imputation accuracy depends mainly on the size of the reference population and the relationship between reference and target populations. PMID:24004563
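The masking-based validation above reduces to a simple computation once genotypes are coded as 0/1/2 counts of the alternate allele. The sketch below is illustrative only; the coding and the treatment of discordances are assumptions, not details taken from the abstract.

```python
import numpy as np

def allele_error_rate(true_g, imputed_g):
    """Mean allele imputation error rate for genotypes coded 0/1/2.

    Each genotype carries two alleles, so a 0-vs-2 discordance counts
    as two allele errors and a het-vs-hom discordance as one.
    """
    true_g = np.asarray(true_g)
    imputed_g = np.asarray(imputed_g)
    return np.abs(true_g - imputed_g).sum() / (2 * true_g.size)

# toy validation set: 1 animal x 5 SNPs
truth   = np.array([[0, 1, 2, 1, 0]])
imputed = np.array([[0, 1, 0, 1, 0]])   # one 2-vs-0 discordance = 2 allele errors
print(allele_error_rate(truth, imputed))  # 2 errors / 10 alleles = 0.2
```

In the study's setup, `truth` would hold the real high-density genotypes of the 20% youngest animals and `imputed` the Beagle output after masking them down to the 50K panel.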
Belgiu, Mariana; Drăguţ, Lucian
2014-10-01
Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. 
The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea that classification is dependent on segmentation is challenged by our unexpected results, casting doubt on the value of pursuing 'optimal segmentation'. Our results rather suggest that as long as under-segmentation remains at acceptable levels, imperfections in segmentation can be tolerated, and a high level of classification accuracy can still be achieved.
Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS
Yu, Hwanjo; Kim, Taehoon; Oh, Jinoh; Ko, Ilhwan; Kim, Sungchul; Han, Wook-Shin
2010-04-16
Background Finding relevant articles in PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking them according to a learned relevance function. However, the learning and ranking are usually done offline, without integration with the keyword queries, and the user has to provide a large number of training documents to reach a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. Results RefMed supports multi-level relevance feedback by using RankSVM as the learning method, and thus achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of the RankSVM and the DBMS substantially improves processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves high learning accuracy in real time without a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. Conclusions RefMed is the first multi-level relevance feedback system for PubMed that achieves high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently evaluates the function to return relevant articles in real time. PMID:20406504
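RankSVM's reduction of ranking to classification is the core of the learning step described above: multi-level feedback yields ordered document pairs, and each pair contributes a difference vector that should score positive. The sketch below uses a perceptron-style hinge update rather than RefMed's actual quadratic-program solver, and all data and names are hypothetical.

```python
import numpy as np

def fit_pairwise_ranker(X, levels, epochs=200, lr=0.1):
    """Learn w so that w.x_i > w.x_j whenever level_i > level_j.

    Multi-level feedback gives ordered pairs; each pair yields a
    difference vector d = x_i - x_j that should score positive
    (the RankSVM reduction of ranking to pairwise classification).
    """
    pairs = [(i, j) for i in range(len(levels)) for j in range(len(levels))
             if levels[i] > levels[j]]
    D = np.array([X[i] - X[j] for i, j in pairs])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = D @ w
        viol = margins < 1.0            # hinge-active pairs
        if viol.any():
            w += lr * D[viol].mean(axis=0)
    return w

# toy feedback: relevance grows with feature 0, graded on 3 levels
X = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]])
levels = [0, 1, 2]
w = fit_pairwise_ranker(X, levels)
scores = X @ w
print(scores.argsort()[::-1])   # most relevant document first
```

A real RankSVM adds a regularization term and solves the resulting quadratic program; the pairwise-difference construction shown here is the part that lets graded, multi-level feedback be consumed directly.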
Elkovitch, Natasha; Viljoen, Jodi L; Scalora, Mario J; Ullman, Daniel
2008-01-01
As courts often rely on clinicians when differentiating between sexually abusive youth at a low versus high risk of reoffense, understanding factors that contribute to accuracy in assessment of risk is imperative. The present study built on existing research by examining (1) the accuracy of clinical judgments of risk made after completing risk assessment instruments, (2) whether instrument-informed clinical judgments made with a high degree of confidence are associated with greater accuracy, and (3) the risk assessment instruments and subscales most predictive of clinical judgments. Raters assessed each youth's (n = 166) risk of reoffending after completing the SAVRY and J-SOAP-II. Raters were not able to predict detected cases of either sexual recidivism or nonsexual violent recidivism above chance, and a high degree of rater confidence was not associated with higher levels of accuracy. Total scores on the J-SOAP-II were predictive of instrument-informed clinical judgments of sexual risk, and total scores on the SAVRY of nonsexual risk.
Ruhland, Janet L.; Yin, Tom C. T.; Tollin, Daniel J.
2013-01-01
Sound localization accuracy in elevation can be affected by sound spectrum alteration. Correspondingly, any stimulus manipulation that causes a change in the peripheral representation of the spectrum may degrade localization ability in elevation. The present study examined the influence of sound duration and level on localization performance in cats with the head unrestrained. Two cats were trained using operant conditioning to indicate the apparent location of a sound via gaze shift, which was measured with a search-coil technique. Overall, neither sound level nor duration had a notable effect on localization accuracy in azimuth, except at near-threshold levels. In contrast, localization accuracy in elevation improved as sound duration increased, and sound level also had a large effect on localization in elevation. For short-duration noise, the performance peaked at intermediate levels and deteriorated at low and high levels; for long-duration noise, this “negative level effect” at high levels was not observed. Simulations based on an auditory nerve model were used to explain the above observations and to test several hypotheses. Our results indicated that neither the flatness of sound spectrum (before the sound reaches the inner ear) nor the peripheral adaptation influences spectral coding at the periphery for localization in elevation, whereas neural computation that relies on “multiple looks” of the spectral analysis is critical in explaining the effect of sound duration, but not level. The release of negative level effect observed for long-duration sound could not be explained at the periphery and, therefore, is likely a result of processing at higher centers. PMID:23657278
Simopoulos, Thomas T; Manchikanti, Laxmaiah; Gupta, Sanjeeva; Aydin, Steve M; Kim, Chong Hwan; Solanki, Daneshvari; Nampiaparampil, Devi E; Singh, Vijay; Staats, Peter S; Hirsch, Joshua A
2015-01-01
The sacroiliac joint is well known as a cause of low back and lower extremity pain. Prevalence estimates are 10% to 25% in patients with persistent axial low back pain without disc herniation, discogenic pain, or radiculitis based on multiple diagnostic studies and systematic reviews. However, at present there are no definitive management options for treating sacroiliac joint pain. To evaluate the diagnostic accuracy and therapeutic effectiveness of sacroiliac joint interventions. A systematic review of the diagnostic accuracy and therapeutic effectiveness of sacroiliac joint interventions. The available literature on diagnostic and therapeutic sacroiliac joint interventions was reviewed. The quality assessment criteria utilized were the Quality Appraisal of Reliability Studies (QAREL) checklist for diagnostic accuracy studies, Cochrane review criteria to assess sources of risk of bias, Interventional Pain Management Techniques-Quality Appraisal of Reliability and Risk of Bias Assessment (IPM-QRB) criteria for randomized therapeutic trials, and Interventional Pain Management Techniques-Quality Appraisal of Reliability and Risk of Bias Assessment for Nonrandomized Studies (IPM-QRBNR) for observational therapeutic assessments. The level of evidence was based on a best evidence synthesis with modified grading of qualitative evidence from Level I to Level V. Data sources included relevant literature published from 1966 through March 2015 that was identified through searches of PubMed and EMBASE, manual searches of the bibliographies of known primary and review articles, and all other sources. For the diagnostic accuracy assessment and for the therapeutic modalities, the primary outcome measures of pain relief and improvement in functional status were utilized. A total of 11 diagnostic accuracy studies and 14 therapeutic studies were included.
The evidence for diagnostic accuracy is Level II for dual diagnostic blocks with at least 70% pain relief as the criterion standard and Level III evidence for single diagnostic blocks with at least 75% pain relief as the criterion standard. The evidence for cooled radiofrequency neurotomy in managing sacroiliac joint pain is Level II to III. The evidence for conventional radiofrequency neurotomy, intraarticular steroid injections, and periarticular injections with steroids or botulinum toxin is limited: Level III or IV. The limitations of this systematic review include inconsistencies in diagnostic accuracy studies with a paucity of high quality, replicative, and consistent literature. The limitations for therapeutic interventions include variations in technique, variable diagnostic standards for inclusion criteria, and variable results. The evidence for the accuracy of diagnostic and therapeutic effectiveness of sacroiliac joint interventions varied from Level II to Level IV.
Lemieux, Sébastien
2006-08-25
The identification of differentially expressed genes (DEGs) from Affymetrix GeneChip arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method applies a linear model to the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental designs with or without replication. On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes within 10% of false positives. This accuracy was achieved both using the three replicates per condition available in the dataset and using only one replicate per condition. The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
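The key idea of fitting the linear model directly to probe intensities, rather than summarizing them into an expression level first, can be sketched for a single probe set as follows. The design (probe-affinity dummies plus one shared treatment coefficient) is a simplified stand-in for the actual PL-LM formulation, and the data are invented.

```python
import numpy as np

def probe_level_effect(intensities, condition):
    """Estimate a treatment effect directly from probe-level data.

    intensities: (n_probes, n_arrays) matrix for one probe set.
    condition:   0/1 label per array.
    Model: y[p, a] = probe_affinity[p] + beta * condition[a] + noise,
    fitted by ordinary least squares (a simplified PL-LM-style model).
    """
    n_probes, n_arrays = intensities.shape
    y = intensities.ravel()                      # probe-major order
    rows = []
    for p in range(n_probes):
        for a in range(n_arrays):
            x = np.zeros(n_probes + 1)
            x[p] = 1.0                           # probe-affinity dummy
            x[-1] = condition[a]                 # shared treatment effect
            rows.append(x)
    X = np.array(rows)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[-1]                              # estimated treatment effect

# toy probe set: two probes with different affinities, true effect +3
cond = np.array([0, 0, 1, 1])
probes = np.array([[2., 2., 5., 5.],
                   [5., 5., 8., 8.]])
print(round(probe_level_effect(probes, cond), 6))  # ~3.0
```

The paper's second stage, fitting a Gaussian mixture over the per-gene coefficients to flag DEGs, would run on the `probe_level_effect` estimates collected across all probe sets.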
Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification
Rajagopal, Gayathri; Palaniswamy, Ramamoorthy
2015-01-01
This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy using feature-level fusion. The features at the feature level are raw biometric data, which contain rich information compared to decision-level and matching-score-level fusion. Hence, information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here, PCA (principal component analysis) is used to diminish the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Of these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm was tested on a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database. PMID:26640813
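Feature-level fusion followed by PCA and nearest-neighbour classification can be sketched as below. The feature dimensions, subject counts, synthetic data, and the k=1 choice are all illustrative, not taken from the paper.

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD on mean-centred data; returns (mean, components)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def knn_predict(train_X, train_y, query, k=1):
    """Plain k-NN on Euclidean distance; majority vote over neighbours."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique(train_y[nearest], return_counts=True)
    return vals[np.argmax(counts)]

rng = np.random.default_rng(0)
# hypothetical palmprint (4-dim) and iris (4-dim) feature vectors, 3 subjects
palm = rng.normal(size=(6, 4)) + np.repeat([[0], [5], [10]], 2, axis=0)
iris = rng.normal(size=(6, 4)) + np.repeat([[0], [5], [10]], 2, axis=0)
labels = np.array([0, 0, 1, 1, 2, 2])

fused = np.concatenate([palm, iris], axis=1)   # feature-level fusion
mean, comps = pca_fit(fused, n_components=3)   # tame the dimensionality
proj = (fused - mean) @ comps.T

pred = knn_predict(proj, labels, proj[3])
print(pred)
```

Fusing before classification (rather than fusing match scores or decisions) is what exposes the raw joint information to the classifier, at the cost of the dimensionality that PCA is then used to reduce.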
Sauter, Andreas P; Kopp, Felix K; Münzel, Daniela; Dangelmaier, Julia; Renz, Martin; Renger, Bernhard; Braren, Rickmer; Fingerle, Alexander A; Rummeny, Ernst J; Noël, Peter B
2018-05-01
Evaluation of the influence of iterative reconstruction, tube settings and patient habitus on the accuracy of iodine quantification with dual-layer spectral CT (DL-CT). A CT abdomen phantom with different extension rings and four iodine inserts (1, 2, 5 and 10 mg/ml) was scanned on a DL-CT. The phantom was scanned with tube voltages of 120 and 140 kVp and CTDIvol of 2.5, 5, 10 and 20 mGy. Reconstructions were performed for eight levels of iterative reconstruction (i0-i7). Diagnostic dose levels are classified depending on patient size and radiation dose. Measurements of iodine concentration showed accurate and reliable results. Taking all CTDIvol levels into account, the mean absolute percentage difference (MAPD) showed less accuracy for low CTDIvol levels (2.5 mGy: 34.72%) than for high CTDIvol levels (20 mGy: 5.89%). At diagnostic dose levels, accurate quantification of iodine was possible (MAPD 3.38%). The level of iterative reconstruction did not significantly influence iodine measurements. Iodine quantification was more accurate at a tube voltage of 140 kVp. Phantom size had a considerable effect only at low dose levels; at diagnostic dose levels the effect of phantom size decreased (MAPD <5% for all phantom sizes). With DL-CT, even low iodine concentrations can be accurately quantified. Accuracies are higher when diagnostic radiation doses are employed. Copyright © 2018 Elsevier B.V. All rights reserved.
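The MAPD figure of merit used above is straightforward to compute; the readings in this sketch are hypothetical, and the study's exact averaging scheme (across inserts, repetitions, and ROIs) may differ.

```python
import numpy as np

def mapd(measured, nominal):
    """Mean absolute percentage difference between measured and
    nominal iodine concentrations (mg/ml)."""
    measured = np.asarray(measured, dtype=float)
    nominal = np.asarray(nominal, dtype=float)
    return 100.0 * np.mean(np.abs(measured - nominal) / nominal)

# hypothetical readings for the 1, 2, 5 and 10 mg/ml inserts
nominal = [1.0, 2.0, 5.0, 10.0]
measured = [1.05, 1.9, 5.1, 10.2]
print(round(mapd(measured, nominal), 2))  # 3.5 (%)
```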
Kunstman, Jonathan W.; Clerkin, Elise M.; Palmer, Kateyln; Peters, M. Taylar; Dodd, Dorian R.; Smith, April R.
2015-01-01
Background and Objectives This study tested whether relatively low levels of interoceptive accuracy (IAcc) are associated with body dysmorphic disorder (BDD) symptoms. Additionally, given research indicating that power attunes individuals to their internal states, we sought to determine if state interoceptive accuracy could be improved through an experimental manipulation of power. Method Undergraduate women (N = 101) completed a baseline measure of interoceptive accuracy and then were randomized to a power or control condition. Participants were primed with power or a neutral control topic and then completed a post-manipulation measure of state IAcc. Trait BDD symptoms were assessed with a self-report measure. Results Controlling for baseline IAcc, within the control condition, there was a significant inverse relationship between trait BDD symptoms and interoceptive accuracy. Continuing to control for baseline IAcc, within the power condition, there was not a significant relationship between trait BDD symptoms and IAcc, suggesting that power may have attenuated this relationship. At high levels of BDD symptomology, there was also a significant simple effect of experimental condition, such that participants in the power (vs. control) condition had better interoceptive accuracy. These results provide initial evidence that power may positively impact interoceptive accuracy among those with high levels of BDD symptoms. Limitations This cross-sectional study utilized a demographically homogenous sample of women that reflected a broad range of symptoms; thus, although there were a number of participants reporting elevated BDD symptoms, these findings might not generalize to other populations or clinical samples. Conclusions This study provides the first direct test of the relationship between trait BDD symptoms and IAcc, and provides preliminary evidence that among those with severe BDD symptoms, power may help connect individuals with their internal states.
Future research testing the mechanisms linking BDD symptoms with IAcc, as well as how individuals can better connect with their internal experiences is needed. PMID:26295932
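The abstract does not specify how IAcc was scored; a common choice in this literature is the heartbeat-tracking (Schandry-style) score, sketched here with hypothetical trial data as an illustration only.

```python
def interoceptive_accuracy(recorded, counted):
    """Schandry-style heartbeat-tracking score, averaged over trials:
    1 - |recorded - counted| / recorded, so 1.0 is perfect accuracy."""
    scores = [1 - abs(r - c) / r for r, c in zip(recorded, counted)]
    return sum(scores) / len(scores)

# hypothetical three trials (heartbeats recorded vs silently counted)
iacc = interoceptive_accuracy([40, 60, 80], [36, 60, 72])
print(round(iacc, 3))
```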
High-precision radiometric tracking for planetary approach and encounter in the inner solar system
NASA Technical Reports Server (NTRS)
Christensen, C. S.; Thurman, S. W.; Davidson, J. M.; Finger, M. H.; Folkner, W. M.
1989-01-01
The benefits of improved radiometric tracking data have been studied for planetary approach within the inner Solar System using the Mars Rover Sample Return trajectory as a model. It was found that the benefit of improved data to approach and encounter navigation was highly dependent on the a priori uncertainties assumed for several non-estimated parameters, including those for frame-tie, Earth orientation, troposphere delay, and station locations. With these errors at their current levels, navigational performance was found to be insensitive to enhancements in data accuracy. However, when expected improvements in these errors are modeled, performance with current-accuracy data significantly improves, with substantial further improvements possible with enhancements in data accuracy.
NASA Astrophysics Data System (ADS)
Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew
2007-04-01
One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. Previous research has developed the theoretical basis and benefits of the hybrid approach. However, a concrete experimental comparison of the hybrid framework with traditional fusion methods, demonstrating and quantifying this benefit, has been lacking. The goal of this research, therefore, is to provide a statistical comparison of the accuracy and performance of hybrid network theory with pure Bayesian and Fuzzy systems and with an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain-specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo Simulation, against situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed to quantify the benefit of hybrid inference over other fusion tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Getman, Daniel J
2008-01-01
Many attempts to observe changes in terrestrial systems over time would be significantly enhanced if it were possible to improve the accuracy of classifications of low-resolution historic satellite data. In an effort to examine improving the accuracy of historic satellite image classification by combining satellite and air photo data, two experiments were undertaken in which low-resolution multispectral data and high-resolution panchromatic data were combined and then classified using the ECHO spectral-spatial image classification algorithm and the Maximum Likelihood technique. The multispectral data consisted of 6 multispectral channels (30 m pixel resolution) from Landsat 7. These data were augmented with panchromatic data (15 m pixel resolution) from Landsat 7 in the first experiment, and with a mosaic of digital aerial photography (1 m pixel resolution) in the second. The addition of the Landsat 7 panchromatic data provided a significant improvement in the accuracy of classifications made using the ECHO algorithm. Although the inclusion of aerial photography provided an improvement in accuracy, this improvement was only statistically significant at a 40-60% level. These results suggest that once error levels associated with combining aerial photography and multispectral satellite data are reduced, this approach has the potential to significantly enhance the precision and accuracy of classifications made using historic remotely sensed data, as a way to extend the time range of efforts to track temporal changes in terrestrial systems.
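The data-combination step (bringing the 30 m multispectral bands onto the 15 m panchromatic grid so a per-pixel classifier sees all seven features) can be sketched as below. Nearest-neighbour duplication is one simple resampling choice, and the classifier itself (ECHO or Maximum Likelihood) is omitted.

```python
import numpy as np

def stack_ms_with_pan(ms, pan):
    """Resample 30 m multispectral bands to the 15 m pan grid by
    nearest-neighbour duplication, then stack pan as an extra channel
    so a per-pixel classifier sees all 7 features."""
    upsampled = np.stack([np.kron(band, np.ones((2, 2))) for band in ms])
    return np.concatenate([upsampled, pan[None, :, :]], axis=0)

ms = np.arange(6 * 2 * 2).reshape(6, 2, 2).astype(float)   # 6 bands, 2x2 @ 30 m
pan = np.ones((4, 4))                                      # 1 band, 4x4 @ 15 m
stacked = stack_ms_with_pan(ms, pan)
print(stacked.shape)  # (7, 4, 4)
```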
Iconic memory requires attention
Persuh, Marjan; Genzer, Boris; Melara, Robert D.
2012-01-01
Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features. PMID:22586389
Achieving behavioral control with millisecond resolution in a high-level programming environment.
Asaad, Wael F; Eskandar, Emad N
2008-08-30
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the 1 ms time-scale that is relevant for the alignment of behavioral and neural events.
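The basic concern above, quantifying the timing error a high-level environment introduces on a non-real-time OS, can be probed with a few lines of instrumentation. This sketch times a naive sleep-based wait against the high-resolution monotonic clock; it illustrates the measurement, not the authors' compensation techniques.

```python
import time
import statistics

def timing_error_ms(target_ms, n_trials=50):
    """Measure how far a naive sleep-based wait lands from its target,
    using the high-resolution monotonic clock."""
    errors = []
    for _ in range(n_trials):
        t0 = time.perf_counter()
        time.sleep(target_ms / 1000.0)
        elapsed_ms = (time.perf_counter() - t0) * 1000.0
        errors.append(elapsed_ms - target_ms)  # overshoot past the target
    return statistics.mean(errors), max(errors)

mean_err, worst_err = timing_error_ms(1.0)
print(f"mean overshoot {mean_err:.3f} ms, worst {worst_err:.3f} ms")
```

On a desktop OS the worst-case overshoot typically dwarfs the mean, which is exactly the jitter that must be handled before behavioral and neural events can be aligned at the 1 ms scale.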
Comprehensive and Practical Vision System for Self-Driving Vehicle Lane-Level Localization.
Du, Xinxin; Tan, Kok Kiong
2016-05-01
Vehicle lane-level localization is a fundamental technology in autonomous driving. To achieve accurate and consistent performance, a common approach is to use LIDAR technology. However, LIDAR is expensive and computationally demanding, and thus not a practical solution in many situations. This paper proposes a stereovision system, which is of low cost, yet also able to achieve high accuracy and consistency. It integrates a new lane line detection algorithm with other lane marking detectors to effectively identify the correct lane line markings. It also fits multiple road models to improve accuracy. An effective stereo 3D reconstruction method is proposed to estimate vehicle localization. The estimation consistency is further guaranteed by a new particle filter framework, which takes vehicle dynamics into account. Experimental results based on image sequences taken under different visual conditions showed that the proposed system can identify the lane line markings with 98.6% accuracy. The maximum estimation error of the vehicle distance to lane lines is 16 cm in daytime and 26 cm at night, and the maximum estimation error of its moving direction with respect to the road tangent is 0.06 rad in daytime and 0.12 rad at night. Due to its high accuracy and consistency, the proposed system can be implemented in autonomous driving vehicles as a practical solution to vehicle lane-level localization.
de Albuquerque, Priscila Maria Nascimento Martins; de Alencar, Geisa Guimarães; de Oliveira, Daniela Araújo; de Siqueira, Gisela Rocha
2018-01-01
The aim of this study was to examine and interpret the concordance, accuracy, and reliability of photogrammetric protocols available in the literature for evaluating cervical lordosis in an adult population aged 18 to 59 years. A systematic search of 6 electronic databases (MEDLINE via PubMed, LILACS, CINAHL, Scopus, ScienceDirect, and Web of Science) located studies that assessed the reliability and/or concordance and/or accuracy of photogrammetric protocols for evaluating cervical lordosis, compared with radiography. Articles published through April 2016 were selected. Two independent reviewers used critical appraisal tools (QUADAS and QAREL) to assess the quality of the selected studies. Two studies were included in the review and had high levels of reliability (intraclass correlation coefficient: 0.974-0.98). Only 1 study assessed the concordance between the methods, which was calculated using Pearson's correlation coefficient. To date, the accuracy of photogrammetry has not been investigated thoroughly. We encountered no study in the literature that investigated the accuracy of photogrammetry in diagnosing hyperlordosis of the cervical spine. However, both current studies report high levels of intra- and interrater reliability. To increase the level of evidence of photogrammetry in the evaluation of cervical lordosis, it is necessary to conduct further studies using a larger sample to increase the external validity of the findings. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Sulaiman, M.; El-Shafie, A.; Karim, O.; Basri, H.
2011-10-01
Flood forecasting models are a necessity, as they help in planning for flood events, and thus help prevent loss of lives and minimize damage. At present, artificial neural networks (ANN) have been successfully applied in river flow and water level forecasting studies. An ANN requires historical data to develop a forecasting model. However, long-term historical water level data, such as hourly data, pose two crucial problems for data training. First, the high volume of data slows the computation process. Second, training converges within a few cycles because normal water level data dominate the training set, so forecasting performance for high water level events remains poor. In this study, the zoning matching approach (ZMA) is used with an ANN to accurately monitor flood events in real time by focusing the development of the forecasting model on high water level zones. ZMA is a trial and error approach, in which several training datasets built from high water level data are tested to find the best training dataset for forecasting high water level events. The advantage of ZMA is that relevant knowledge of water level patterns in historical records is used. Importantly, the forecasting model developed based on ZMA achieves high-accuracy forecasts at 1 to 3 h ahead and satisfactory performance at 6 h. Seven performance measures are adopted in this study to describe the accuracy and reliability of the forecasting model developed.
ERIC Educational Resources Information Center
Furman, Orit; Mendelsohn, Avi; Dudai, Yadin
2012-01-01
We took snapshots of human brain activity with fMRI during retrieval of realistic episodic memory over several months. Three groups of participants were scanned during a memory test either hours, weeks, or months after viewing a documentary movie. High recognition accuracy after hours decreased after weeks and remained at similar levels after…
ERIC Educational Resources Information Center
Li, Zhi; Feng, Hui-Hsien; Saricaoglu, Aysel
2017-01-01
This classroom-based study employs a mixed-methods approach to exploring both short-term and long-term effects of Criterion feedback on ESL students' development of grammatical accuracy. The results of multilevel growth modeling indicate that Criterion feedback helps students in both intermediate-high and advanced-low levels reduce errors in eight…
ERIC Educational Resources Information Center
Morris, Darrell; Pennell, Ashley M.; Perney, Jan; Trathen, Woodrow
2018-01-01
This study compared reading rate to reading fluency (as measured by a rating scale). After listening to first graders read short passages, we assigned an overall fluency rating (low, average, or high) to each reading. We then used predictive discriminant analyses to determine which of five measures--accuracy, rate (objective); accuracy, phrasing,…
NASA Technical Reports Server (NTRS)
Pollmeier, Vincent M.; Kallemeyn, Pieter H.; Thurman, Sam W.
1993-01-01
The application of high-accuracy S/S-band (2.1 GHz uplink/2.3 GHz downlink) ranging to orbit determination with relatively short data arcs is investigated for the approach phase of each of the Galileo spacecraft's two Earth encounters (8 December 1990 and 8 December 1992). Analysis of S-band ranging data from Galileo indicated that under favorable signal levels, meter-level precision was attainable. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. Explicit modeling of ranging bias parameters for each station pass is used to largely remove systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy achieved using the precision range filtering strategy proved markedly better when compared to post-flyby reconstructions than did solutions utilizing a traditional Doppler/range filter strategy. In addition, the navigation accuracy achieved with precision ranging was comparable to that obtained using delta-Differenced One-Way Range, an interferometric measurement of spacecraft angular position relative to a natural radio source, which was also used operationally.
Thematic accuracy of the NLCD 2001 land cover for the conterminous United States
Wickham, J.D.; Stehman, S.V.; Fry, J.A.; Smith, J.H.; Homer, Collin G.
2010-01-01
The land-cover thematic accuracy of NLCD 2001 was assessed from a probability-sample of 15,000 pixels. Nationwide, NLCD 2001 overall Anderson Level II and Level I accuracies were 78.7% and 85.3%, respectively. By comparison, overall accuracies at Level II and Level I for the NLCD 1992 were 58% and 80%. Forest and cropland were two classes showing substantial improvements in accuracy in NLCD 2001 relative to NLCD 1992. NLCD 2001 forest and cropland user's accuracies were 87% and 82%, respectively, compared to 80% and 43% for NLCD 1992. Accuracy results are reported for 10 geographic regions of the United States, with regional overall accuracies ranging from 68% to 86% for Level II and from 79% to 91% at Level I. Geographic variation in class-specific accuracy was strongly associated with the phenomenon that regionally more abundant land-cover classes had higher accuracy. Accuracy estimates based on several definitions of agreement are reported to provide an indication of the potential impact of reference data error on accuracy. Drawing on our experience from two NLCD national accuracy assessments, we discuss the use of designs incorporating auxiliary data to more seamlessly quantify reference data quality as a means to further advance thematic map accuracy assessment.
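The overall, user's, and producer's accuracies reported above are all simple functions of the error (confusion) matrix. A short sketch with an invented 3-class matrix (the counts are illustrative only, not NLCD data):

```python
import numpy as np

# Hypothetical 3-class error matrix: rows = map labels, columns = reference labels.
m = np.array([[80, 10,  5],
              [ 8, 70,  6],
              [ 2,  5, 60]], dtype=float)

overall = np.trace(m) / m.sum()          # fraction of samples labeled correctly
users = np.diag(m) / m.sum(axis=1)       # per class: correct / row total (map)
producers = np.diag(m) / m.sum(axis=0)   # per class: correct / column total (reference)
```

User's accuracy is the complement of commission error and producer's accuracy the complement of omission error, which is why the two can differ sharply for the same class.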
Land cover classification of VHR airborne images for citrus grove identification
NASA Astrophysics Data System (ADS)
Amorós López, J.; Izquierdo Verdiguier, E.; Gómez Chova, L.; Muñoz Marí, J.; Rodríguez Barreiro, J. Z.; Camps Valls, G.; Calpe Maravilla, J.
Managing land resources using remote sensing techniques is becoming a common practice. However, data analysis procedures should satisfy the high accuracy levels demanded by users (public or private companies and governments) in order to be extensively used. This paper presents a multi-stage classification scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana region (Spain). Spain is the first citrus fruit producer in Europe and the fourth in the world. In particular, citrus fruits represent 67% of the agricultural production in this region, with a total production of 4.24 million tons (campaign 2006-2007). The citrus GIS inventory, created in 2001, needs to be regularly updated in order to monitor changes quickly enough, and allow appropriate policy making and citrus production forecasting. Automatic methods are proposed in this work to facilitate this update, whose processing scheme is summarized as follows. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution aerial images (0.5 m). Next, several automatic classifiers (decision trees, artificial neural networks, and support vector machines) are trained and combined to improve the final classification accuracy. Finally, the citrus GIS is automatically updated if a high enough level of confidence, based on the agreement between classifiers, is achieved. This is the case for 85% of the parcels and accuracy results exceed 94%. The remaining parcels are classified by expert photo-interpreters in order to guarantee the high accuracy demanded by policy makers.
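The agreement-based update rule described above can be sketched as follows. The classifier objects and feature vectors here are placeholders, not the authors' implementation; the point is only the decision logic of auto-updating on agreement and deferring to photo-interpreters otherwise:

```python
def decide(parcel_features, classifiers):
    """Accept a parcel label automatically only when all classifiers agree;
    otherwise flag the parcel for expert photo-interpretation."""
    votes = [c(parcel_features) for c in classifiers]
    if len(set(votes)) == 1:      # unanimous agreement -> automatic GIS update
        return votes[0], True
    return None, False            # disagreement -> expert review
```

Under this kind of rule, the 85% of parcels resolved automatically are exactly those where the ensemble is unanimous, which is why their accuracy can exceed that of any single classifier.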
Direct drive digital servo press with high parallel control
NASA Astrophysics Data System (ADS)
Murata, Chikara; Yabe, Jun; Endou, Junichi; Hasegawa, Kiyoshi
2013-12-01
The direct drive digital servo press has been developed through university-industry joint research and development since 1998. On the basis of this work, a 4-axis direct drive digital servo press was developed and brought to market in April 2002. This servo press is composed of one slide supported by four ball screws, and each axis has a linear scale measuring its position with sub-micrometer accuracy. Each axis is controlled independently by a servo motor and feedback system, which maintains high parallelism and high accuracy even under highly eccentric loads. Furthermore, 'full stroke full power' is obtained by using ball screws. Using these features, various new types of press forming and stamping have been developed and put into production. These new stamping and forming methods are introduced, along with strategies for high-value-added press forming and the future direction of press forming.
Accuracy of Handheld Blood Glucose Meters at High Altitude
de Vries, Suzanna T.; Fokkert, Marion J.; Dikkeschei, Bert D.; Rienks, Rienk; Bilo, Karin M.; Bilo, Henk J. G.
2010-01-01
Background Due to increasing numbers of people with diabetes taking part in extreme sports (e.g., high-altitude trekking), reliable handheld blood glucose meters (BGMs) are necessary. Accurate blood glucose measurement under extreme conditions is paramount for safe recreation at altitude. Prior studies reported bias in blood glucose measurements using different BGMs at high altitude. We hypothesized that glucose oxidase-based BGMs are more influenced by the lower atmospheric oxygen pressure at altitude than glucose dehydrogenase-based BGMs. Methodology/Principal Findings Glucose measurements at simulated altitude of nine BGMs (six glucose dehydrogenase and three glucose oxidase BGMs) were compared to glucose measurement on a similar BGM at sea level and to a laboratory glucose reference method. Venous blood samples of four different glucose levels were used. Moreover, two glucose oxidase and two glucose dehydrogenase based BGMs were evaluated at different altitudes on Mount Kilimanjaro. Accuracy criteria were set at a bias <15% from reference glucose (when >6.5 mmol/L) and <1 mmol/L from reference glucose (when <6.5 mmol/L). No significant difference was observed between measurements at simulated altitude and sea level for either glucose oxidase-based BGMs or glucose dehydrogenase-based BGMs as a group phenomenon. Two GDH-based BGMs did not meet the set performance criteria. Most BGMs generally overestimate the true glucose concentration at high altitude. Conclusion At simulated high altitude, none of the tested BGMs, including the glucose oxidase-based BGMs, showed an influence of low atmospheric oxygen pressure. All BGMs except two GDH-based BGMs performed within the predefined criteria. At true high altitude, one GDH-based BGM had the best precision and accuracy. PMID:21103399
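The accuracy criteria used in the study translate directly into a small check. A sketch (the thresholds are as stated in the abstract; the strictness of the inequalities is an assumption):

```python
def meets_criteria(measured, reference):
    """Return True if a BGM reading meets the study's accuracy criteria:
    bias < 15% of reference when reference > 6.5 mmol/L, else < 1 mmol/L."""
    if reference > 6.5:
        return abs(measured - reference) < 0.15 * reference
    return abs(measured - reference) < 1.0
```

For example, a reading of 10.0 mmol/L against a 9.0 mmol/L reference passes (bias 1.0 < 1.35), while 5.0 against 6.0 fails (bias 1.0 is not below the 1 mmol/L limit).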
DeMaere, Matthew Z.
2016-01-01
Background Chromosome conformation capture, coupled with high throughput DNA sequencing in protocols like Hi-C and 3C-seq, has been proposed as a viable means of generating data to resolve the genomes of microorganisms living in naturally occurring environments. Metagenomic Hi-C and 3C-seq datasets have begun to emerge, but the feasibility of resolving genomes when closely related organisms (strain-level diversity) are present in the sample has not yet been systematically characterised. Methods We developed a computational simulation pipeline for metagenomic 3C and Hi-C sequencing to evaluate the accuracy of genomic reconstructions at, above, and below an operationally defined species boundary. We simulated datasets and measured accuracy over a wide range of parameters. Five clustering algorithms were evaluated (2 hard, 3 soft) using an adaptation of the extended B-cubed validation measure. Results When all genomes in a sample are below 95% sequence identity, all of the tested clustering algorithms performed well. When sequence data contains genomes above 95% identity (our operational definition of strain-level diversity), a naive soft-clustering extension of the Louvain method achieves the highest performance. Discussion Previously, only hard-clustering algorithms have been applied to metagenomic 3C and Hi-C data, yet none of these perform well when strain-level diversity exists in a metagenomic sample. Our simple extension of the Louvain method performed the best in these scenarios, however, accuracy remained well below the levels observed for samples without strain-level diversity. Strain resolution is also highly dependent on the amount of available 3C sequence data, suggesting that depth of sequencing must be carefully considered during experimental design. Finally, there appears to be great scope to improve the accuracy of strain resolution through further algorithm development. PMID:27843713
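The extended B-cubed measure used in the paper generalizes standard B-cubed to soft clusterings. The standard hard-clustering version, per-item precision and recall averaged over all items, can be sketched as follows (this is the plain measure, not the paper's extension):

```python
def bcubed(clusters, labels):
    """Plain B-cubed precision/recall for a hard clustering.
    clusters, labels: dicts mapping item -> cluster id / true class id."""
    items = list(clusters)
    prec = rec = 0.0
    for e in items:
        same_cluster = [o for o in items if clusters[o] == clusters[e]]
        same_class = [o for o in items if labels[o] == labels[e]]
        correct = [o for o in same_cluster if labels[o] == labels[e]]
        prec += len(correct) / len(same_cluster)  # purity of e's cluster, seen from e
        rec += len(correct) / len(same_class)     # coverage of e's class, seen from e
    n = len(items)
    return prec / n, rec / n
```

Because both scores are averaged per item rather than per cluster, B-cubed penalizes merging two strains into one genome bin (precision drops) as well as splitting one genome across bins (recall drops).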
Raabe, E.A.; Stumpf, R.P.; Marth, N.J.; Shrestha, R.L.
1996-01-01
Elevation differences on the order of 10 cm within Florida's marsh system influence major variations in tidal flooding and in the associated plant communities. This low elevation gradient combined with sea level fluctuation of 5-to-10 cm over decadal and longer periods can generate significant alteration and erosion of marsh habitats along the Gulf Coast. Knowledge of precise and accurate elevations in the marsh is critical to the efficient monitoring and management of these habitats. Global positioning system (GPS) technology was employed to establish six new orthometric heights along the Gulf Coast from which kinematic surveys into the marsh interior are conducted. The vertical accuracy achieved using GPS technology was evaluated using two networks with 16 vertical and nine horizontal NGS published high accuracy positions. New positions were occupied near St. Marks National Wildlife Refuge and along the coastline of Levy County and Citrus County. Static surveys were conducted using four Ashtech dual frequency P-code receivers for 45-minute sessions and a data logging rate of 10 seconds. Network vector lengths ranged from 4 to 64 km and, including redundant baselines, totaled over 100 vectors. Analysis includes use of the GEOID93 model with a least squares network adjustment and reference to the National Geodetic Reference System (NGRS). The static surveys show high internal consistency and the desired centimeter-level accuracy is achieved for the local network. Uncertainties for the newly established vertical positions range from 0.8 cm to 1.8 cm at the 95% confidence level. These new positions provide sufficient vertical accuracy to achieve the project objectives of tying marsh surface elevations to long-term water level gauges recording sea level fluctuations along the coast.
NASA Astrophysics Data System (ADS)
Kankare, Ville; Vauhkonen, Jari; Tanhuanpää, Topi; Holopainen, Markus; Vastaranta, Mikko; Joensuu, Marianna; Krooks, Anssi; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto
2014-11-01
Detailed information about timber assortments and diameter distributions is required in forest management. Forest owners can make better decisions concerning the timing of timber sales and forest companies can utilize more detailed information to optimize their wood supply chain from forest to factory. The objective here was to compare the accuracies of high-density laser scanning techniques for the estimation of tree-level diameter distribution and timber assortments. We also introduce a method that utilizes a combination of airborne and terrestrial laser scanning in timber assortment estimation. The study was conducted in Evo, Finland. Harvester measurements were used as a reference for 144 trees within a single clear-cut stand. The results showed that accurate tree-level timber assortments and diameter distributions can be obtained, using terrestrial laser scanning (TLS) or a combination of TLS and airborne laser scanning (ALS). Saw log volumes were estimated with higher accuracy than pulpwood volumes. The saw log volumes were estimated with relative root-mean-squared errors of 17.5% and 16.8% with TLS and a combination of TLS and ALS, respectively. The respective accuracies for pulpwood were 60.1% and 59.3%. The differences in the bucking method used also caused some large errors. In addition, tree quality factors highly affected the bucking accuracy, especially with pulpwood volume.
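Relative root-mean-squared errors such as the 17.5% quoted above are typically the RMSE normalized by the mean of the reference (here, harvester-measured) values; a sketch of that computation (the normalization convention is an assumption, not stated in the abstract):

```python
import math

def relative_rmse(pred, ref):
    """RMSE of predictions expressed as a percentage of the mean reference value."""
    n = len(pred)
    rmse = math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / n)
    return 100.0 * rmse / (sum(ref) / n)
```

Normalizing by the mean makes errors comparable across assortments of very different volumes, which is why saw log (17.5%) and pulpwood (60.1%) figures can be contrasted directly.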
The Evaluation of GPS techniques for UAV-based Photogrammetry in Urban Area
NASA Astrophysics Data System (ADS)
Yeh, M. L.; Chou, Y. T.; Yang, L. S.
2016-06-01
The efficiency and high mobility of Unmanned Aerial Vehicles (UAVs) have made them essential to aerial-photography-assisted survey and mapping. This is especially true for urban land use and land cover, which change frequently and require UAVs to capture up-to-date terrain data. This study collected image data and three-dimensional ground control points in the Taichung city area using a UAV, a consumer-grade camera, and Real-Time Kinematic (RTK) positioning with centimetre-level accuracy. The study area is an ecological park with low-lying topography that serves the city as a detention basin. A digital surface model was built with Agisoft PhotoScan, along with high-resolution orthophotos. Two conditions were compared: models built with and without ground control points (GCPs), and the accuracy of each digital surface model was assessed. According to check-point deviation estimates, the model without GCPs has an average two-dimensional error of up to 40 cm and an altitude error within one metre. The GCP-free RTK-airborne approach produces centimetre-level accuracy with low risk to the UAS operators. With GCPs, the accuracy of the x, y, and z coordinates improved by 54.62%, 49.07%, and 87.74%, respectively, with altitude improving the most.
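The percentage improvements quoted for the GCP model are relative reductions in check-point error; a sketch of that calculation (the error values in the usage example are placeholders, not the study's figures):

```python
def improvement_pct(err_without_gcp, err_with_gcp):
    """Relative reduction in check-point error after adding GCPs, in percent."""
    return 100.0 * (err_without_gcp - err_with_gcp) / err_without_gcp
```

For instance, halving a 0.40 m error to 0.20 m corresponds to a 50% improvement under this definition.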
Validation of maternal reports for low birthweight and preterm birth indicators in rural Nepal.
Chang, Karen T; Mullany, Luke C; Khatry, Subarna K; LeClerq, Steven C; Munos, Melinda K; Katz, Joanne
2018-06-01
Tracking progress towards global newborn health targets depends largely on maternal reported data collected through large, nationally representative surveys. We evaluated the validity, across a range of recall period lengths (1 to 24 months post-delivery), of maternal report of birthweight, birth size and length of pregnancy. We compared maternal reports to reference standards of birthweights measured within 72 hours of delivery and gestational age generated from the reported first day of the last menstrual period (LMP) prospectively collected as part of a population-based study (n = 1502). We calculated sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) as measures of individual-level accuracy, and the inflation factor (IF) to quantify population-level bias for each indicator. We assessed whether length of recall period modified accuracy by stratifying measurements across time bins and using a modified Poisson regression with robust error variance to estimate the relative risk (RR) of correctly classifying newborns as low birthweight (LBW) or preterm, adjusting for child sex, place of delivery, maternal age, maternal education, parity, and ethnicity. The LBW indicator using maternally reported birthweight in grams had low individual-level accuracy (AUC = 0.69) and high population-level bias (IF = 0.62). LBW using maternally reported birth size and the preterm birth indicator had lower individual-level accuracy (AUC = 0.58 and 0.56, respectively) and higher population-level bias (IF = 0.28 and 0.35, respectively) up to 24 months following birth. Length of recall time did not affect accuracy of LBW indicators. For the preterm birth indicator, accuracy did not change with length of recall up to 20 months after birth and improved slightly beyond 20 months. The use of maternal reports may underestimate and bias indicators for LBW and preterm birth.
In settings with high prevalence of LBW and preterm births, these indicators generated from maternal reports may be more vulnerable to misclassification. In populations where an important proportion of births occur at home or where weight is not routinely measured, mothers perhaps place less importance on remembering size at birth. Further work is needed to explore whether these conclusions on the validity of maternal reports hold in similar rural and low-income settings.
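Sensitivity, specificity, and the inflation factor all derive from the 2x2 table of maternal report versus reference standard; a sketch (the IF definition used here, reported prevalence over reference prevalence, is an assumption about this paper's convention):

```python
def validity_metrics(tp, fp, fn, tn):
    """tp/fp/fn/tn: counts cross-classifying maternal report against the
    reference standard for an indicator such as LBW."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Population-level bias: reported prevalence over reference prevalence.
    # IF < 1 means the maternal-report indicator underestimates prevalence.
    inflation_factor = (tp + fp) / (tp + fn)
    return sensitivity, specificity, inflation_factor
```

With illustrative counts of tp=30, fp=10, fn=20, tn=140, this yields sensitivity 0.60, specificity 0.93, and IF 0.80, i.e. a 20% underestimate of prevalence.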
COMPASS time synchronization and dissemination—Toward centimetre positioning accuracy
NASA Astrophysics Data System (ADS)
Wang, ZhengBo; Zhao, Lu; Wang, ShiGuang; Zhang, JianWei; Wang, Bo; Wang, LiJun
2014-09-01
In this paper we investigate methods to achieve highly accurate time synchronization among the satellites of the COMPASS global navigation satellite system (GNSS). Owing to the special design of COMPASS, which implements several geo-stationary satellites (GEO), time synchronization can be highly accurate via microwave links between ground stations and the GEO satellites. Serving as space-borne relay stations, the GEO satellites can further disseminate time and frequency signals to other satellites such as the inclined geo-synchronous (IGSO) and mid-earth orbit (MEO) satellites within the system. It is shown that, because of the accuracy in clock synchronization, the theoretical accuracy of COMPASS positioning and navigation will surpass that of the GPS. In addition, the COMPASS system can function with its entire positioning, navigation, and time-dissemination services even without the ground link, thus making it much more robust and secure. We further show that time dissemination using the COMPASS-GEO satellites to earth-fixed stations can achieve very high accuracy, reaching 100 ps in time dissemination and 3 cm in positioning accuracy. In this paper, we also analyze two feasible synchronization plans. All special and general relativistic effects related to COMPASS clock frequency and time shifts are given. We conclude that COMPASS can reach centimeter-level positioning accuracy and discuss potential applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Run; Su, Peng; Burge, James H.
The Software Configurable Optical Test System (SCOTS) uses deflectometry to measure surface slopes of general optical shapes without the need for additional null optics. Careful alignment of test geometry and calibration of inherent system error improve the accuracy of SCOTS to a level where it competes with interferometry. We report a SCOTS surface measurement of an off-axis superpolished elliptical x-ray mirror that achieves <1 nm root-mean-square accuracy for the surface measurement with low-order terms included.
Kang, Lin; Li, Nan; Li, Ping; Zhou, Yang; Gao, Shan; Gao, Hongwei; Xin, Wenwen; Wang, Jinglin
2017-04-01
Salmonella can cause foodborne illnesses in humans and many animals worldwide. The current diagnostic gold standard used for detecting Salmonella infection is microbiological culture followed by serological confirmation tests. However, these methods are complicated and time-consuming. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) analysis offers some advantages in rapid identification, for example, simple and fast sample preparation, fast and automated measurement, and robust and reliable identification up to genus and species levels, possibly even to the strain level. In this study, we established a reference database for species identification using whole-cell MALDI-TOF MS; the database consisted of 12 obtained main spectra of the Salmonella culture collection strains belonging to seven serotypes. Eighty-two clinical isolates of Salmonella were identified using the established database, with partial 16S rDNA gene sequencing and a serological method used for comparison. We found that MALDI-TOF mass spectrometry provided high accuracy in identifying Salmonella at the species level but was of limited use for typing or subtyping Salmonella serovars. We also tried, unsuccessfully, to find serovar-specific biomarkers. Our study demonstrated that (a) MALDI-TOF MS is suitable for identifying Salmonella at the species level with high accuracy; (b) the MALDI-TOF MS method presented in this study is not currently useful for serovar assignment of Salmonella, because of its low concordance with the serological method; and (c) the method is not suitable for subtyping S. typhimurium because of its low discriminatory ability.
Wang, Dean; Jayakar, Rohit G; Leong, Natalie L; Leathers, Michael P; Williams, Riley J; Jones, Kristofer J
2017-04-01
Objective Patients commonly use the Internet to obtain their health-related information. The purpose of this study was to investigate the quality, accuracy, and readability of online patient resources for the management of articular cartilage defects. Design Three search terms ("cartilage defect," "cartilage damage," "cartilage injury") were entered into 3 Internet search engines (Google, Bing, Yahoo). The first 25 websites from each search were collected and reviewed. The quality and accuracy of online information were independently evaluated by 3 reviewers using predetermined scoring criteria. The readability was evaluated using the Flesch-Kincaid (FK) grade score. Results Fifty-three unique websites were evaluated. Quality ratings were significantly higher in websites with an FK score >11 compared to those with a score of ≤11 (P = 0.021). Only 10 websites (19%) differentiated between focal cartilage defects and diffuse osteoarthritis. Of these, 7 (70%) were elicited using the search term "cartilage defect" (P = 0.038). The average accuracy of the websites was high (11.7 out of a maximum of 12), and the average FK grade level (13.4) was several grades higher than the recommended level for readable patient education material (eighth grade level). Conclusions The quality and readability of online patient resources for articular cartilage defects favor those with a higher level of education. Additionally, the majority of these websites do not distinguish between focal chondral defects and diffuse osteoarthritis, which can fail to provide appropriate patient education and guidance for available treatment. Clinicians should help guide patients toward high-quality, accurate, and readable online patient education material.
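The FK grade score referenced above is the standard Flesch-Kincaid grade level, computed from words-per-sentence and syllables-per-word (obtaining the counts requires a tokenizer and syllable counter, omitted here):

```python
def fk_grade(words, sentences, syllables):
    """Flesch-Kincaid grade level of a text given its basic counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

A text averaging 20 words per sentence and 1.5 syllables per word scores about grade 9.9, already above the eighth-grade level recommended for patient education material.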
DJ-1 is a reliable serum biomarker for discriminating high-risk endometrial cancer.
Di Cello, Annalisa; Di Sanzo, Maddalena; Perrone, Francesca Marta; Santamaria, Gianluca; Rania, Erika; Angotti, Elvira; Venturella, Roberta; Mancuso, Serafina; Zullo, Fulvio; Cuda, Giovanni; Costanzo, Francesco
2017-06-01
New reliable approaches to stratify patients with endometrial cancer into risk categories are highly needed. We have recently demonstrated that DJ-1 is overexpressed in endometrial cancer, showing significantly higher levels both in serum and tissue of patients with high-risk endometrial cancer compared with low-risk endometrial cancer. In this experimental study, we further extended our observation, evaluating the role of DJ-1 as an accurate serum biomarker for high-risk endometrial cancer. A total of 101 endometrial cancer patients and 44 healthy subjects were prospectively recruited. DJ-1 serum levels were evaluated comparing cases and controls and, among endometrial cancer patients, between high- and low-risk patients. The results demonstrate that DJ-1 levels are significantly higher in cases versus controls and in high- versus low-risk patients. The receiver operating characteristic curve analysis shows that DJ-1 has a very good diagnostic accuracy in discriminating endometrial cancer patients versus controls and an excellent accuracy in distinguishing, among endometrial cancer patients, low- from high-risk cases. DJ-1 sensitivity and specificity are the highest when high- and low-risk patients are compared, reaching the value of 95% and 99%, respectively. Moreover, DJ-1 serum levels seem to be correlated with worsening of the endometrial cancer grade and histotype, making it a reliable tool in the preoperative decision-making process.
NASA Astrophysics Data System (ADS)
Ito, Shigenobu; Yukita, Kazuto; Goto, Yasuyuki; Ichiyanagi, Katsuhiro; Nakano, Hiroyuki
With the development of industry in recent years, dependence on electric energy has grown year by year, so a reliable electric power supply is needed. However, storing a huge amount of electric energy is very difficult, and supply must be kept in balance with demand, which changes hour by hour. Consequently, to supply high-quality, highly dependable electric power economically and efficiently, the movement of electric power demand must be forecast carefully in advance, and the supply and demand management plan made from that forecast. Load forecasting is therefore an important task in the demand management of electric power companies. Forecasting methods using fuzzy logic, neural networks, and regression models have been proposed to improve forecasting accuracy, and their accuracy is already high. But to dispatch electric power more economically, still higher accuracy is needed. In this paper, to improve on the accuracy of the former methods, a daily peak load forecasting method is proposed that uses the weather distribution of highest and lowest temperatures together with comparison values from data of nearby dates.
NASA Technical Reports Server (NTRS)
Johnson, Marty E.; Fuller, Chris R.; Jones, Michael G. (Technical Monitor)
2000-01-01
In this report both a frequency domain method for creating high-level harmonic excitation and a time domain inverse method for creating large pulses in a duct are developed. To create controllable, high-level sound, an axial array of six JBL-2485 compression drivers was used. The pressure downstream is considered as input voltages to the sources filtered by the natural dynamics of the sources and the duct. It is shown that this dynamic behavior can be compensated for by filtering the inputs such that both time delays and phase changes are taken into account. The methods developed maximize the sound output while (i) keeping within the power constraints of the sources and (ii) maintaining a suitable level of reproduction accuracy. Harmonic excitation pressure levels of over 155 dB were created experimentally over a wide frequency range (1000-4000 Hz). For pulse excitation there is a tradeoff between accuracy of reproduction and sound level achieved. However, the accurate reproduction of a pulse with a maximum pressure level over 6500 Pa was achieved experimentally. It was also shown that the throat connecting the driver to the duct makes it difficult to inject sound just below the cut-on of each acoustic mode (pre-cut-on loading effect).
Theoferometer for High Accuracy Optical Alignment and Metrology
NASA Technical Reports Server (NTRS)
Toland, Ronald; Leviton, Doug; Koterba, Seth
2004-01-01
The accurate measurement of the orientation of optical parts and systems is a pressing problem for upcoming space missions, such as stellar interferometers, requiring the knowledge and maintenance of positions to the sub-arcsecond level. Theodolites, the devices commonly used to make these measurements, cannot provide the needed level of accuracy. This paper describes the design, construction, and testing of an interferometer system to fill the widening gap between future requirements and current capabilities. A Twyman-Green interferometer mounted on a 2 degree of freedom rotation stage is able to obtain sub-arcsecond, gravity-referenced tilt measurements of a sample alignment cube. Dubbed a 'theoferometer,' this device offers greater ease-of-use, accuracy, and repeatability than conventional methods, making it a suitable 21st-century replacement for the theodolite.
A Multitemporal, Multisensor Approach to Mapping the Canadian Boreal Forest
NASA Astrophysics Data System (ADS)
Reith, Ernest
The main anthropogenic source of CO2 emissions is the combustion of fossil fuels, while the clearing and burning of forests also contribute significant amounts. Vegetation represents a major reservoir for terrestrial carbon stocks, and improving our ability to inventory vegetation will enhance our understanding of the impacts of land cover and climate change on carbon stocks and fluxes. These relationships may indicate a series of troubling biosphere-atmosphere feedback mechanisms that need to be better understood and modeled. Valuable land cover information can be provided to the global climate change modeling community using advanced remote sensing capabilities such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR). Individually and synergistically, these data were successfully used to characterize the complex land cover types of the Canadian boreal forest. The multiple endmember spectral mixture analysis process was applied to seasonal AVIRIS data to produce species-level vegetated land cover maps of two study sites in the Canadian boreal forest: Old Black Spruce (OBS) and Old Jack Pine (OJP). The best map achieved an overall accuracy of at least 66% against the available reference map, providing evidence that high-quality, species-level land cover mapping of the Canadian boreal forest is achievable at accuracy levels greater than those of previous research efforts in the region. A binary decision-tree classification applied to backscatter information from multichannel, polarimetric AIRSAR data produced maps of the boreal land cover types at both sites with moderate success, with overall accuracies of at least 59%.
A process, centered around noise whitening and principal component analysis features of the minimum noise fraction transform, was implemented to leverage synergies contained within spatially coregistered multitemporal and multisensor AVIRIS and AIRSAR data sets to successfully produce high-accuracy boreal forest land cover maps. Overall land cover map accuracies of 78% and 72% were assessed for OJP and OBS sites, respectively, for either seasonal or multitemporal data sets. High individual land cover accuracies appeared to be independent of site, season, or multisensor combination in the minimum-noise fraction-based approach.
Achieving behavioral control with millisecond resolution in a high-level programming environment
Asaad, Wael F.; Eskandar, Emad N.
2008-01-01
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the one millisecond time-scale that is relevant for the alignment of behavioral and neural events. PMID:18606188
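The timing concern the authors address can be made concrete with a small probe of software-timer error in a high-level environment. This sketch is illustrative only and is not the authors' toolkit; it measures how far a busy-wait timer overshoots a requested millisecond deadline.

```python
# Illustrative probe (not the authors' software): measure per-tick overshoot
# of a millisecond-scale software timer running atop a non-real-time OS.
import time

def measure_timer_error(interval_s=0.001, n_ticks=100):
    """Run n_ticks deadlines spaced interval_s apart; return overshoot in ms.

    A busy-wait is used instead of time.sleep() because sleep granularity on
    general-purpose operating systems is typically coarser than 1 ms.
    """
    errors_ms = []
    deadline = time.perf_counter()
    for _ in range(n_ticks):
        deadline += interval_s
        # spin until the deadline passes (trades CPU for timing resolution)
        while time.perf_counter() < deadline:
            pass
        errors_ms.append((time.perf_counter() - deadline) * 1000.0)
    return errors_ms
```

A distribution of overshoots concentrated well below 1 ms is the kind of evidence the authors use to argue that high-level environments can align behavioral and neural events at the millisecond scale.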
NASA Astrophysics Data System (ADS)
Zhang, L.; Cong, Y.; Wu, C.; Bai, C.; Wu, C.
2017-08-01
The recording of Architectural heritage information is the foundation of research, conservation, management, and the display of architectural heritage. In other words, the recording of architectural heritage information supports heritage research, conservation, management and architectural heritage display. What information do we record and collect and what technology do we use for information recording? How do we determine the level of accuracy required when recording architectural information? What method do we use for information recording? These questions should be addressed in relation to the nature of the particular heritage site and the specific conditions for the conservation work. In recent years, with the rapid development of information acquisition technology such as Close Range Photogrammetry, 3D Laser Scanning as well as high speed and high precision Aerial Photogrammetry, many Chinese universities, research institutes and heritage management bureaux have purchased considerable equipment for information recording. However, the lack of understanding of both the nature of architectural heritage and the purpose for which the information is being collected has led to several problems. For example: some institutions when recording architectural heritage information aim solely at high accuracy. Some consider that advanced measuring methods must automatically replace traditional measuring methods. Information collection becomes the purpose, rather than the means, of architectural heritage conservation. Addressing these issues, this paper briefly reviews the history of architectural heritage information recording at the Summer Palace (Yihe Yuan, first built in 1750), Beijing. 
Using the recording practices at the Summer Palace during the past ten years as examples, we illustrate our achievements and lessons in recording architectural heritage information with regard to the following aspects: (buildings') ideal status desired, (buildings') current status, structural distortion analysis, display, statue restoration and thematic research. Three points will be highlighted in our discussion: 1. Understanding of the heritage is more important than the particular technology used: Architectural heritage information collection and recording are based on an understanding of the value and nature of the architectural heritage. Understanding is the purpose, whereas information collection and recording are the means. 2. Demand determines technology: Collecting and recording architectural heritage information is to serve the needs of heritage research, conservation, management and display. These different needs determine the different technologies that we use. 3. Set the level of accuracy appropriately: For information recording, high accuracy is not the key criterion; rather an appropriate level of accuracy is key. There is considerable deviation between the nominal accuracy of any instrument and the accuracy of any particular measurement.
Aircraft Update Programmes. The Economical Alternative
2000-04-01
will drive the desired level of integration but cost will determine the achieved level. Paper #15 by Christian Dedieu-Eric Loffler (SAGEM SA) presented...requirements. The SAGEM SA upgrade concept allows one to match specifications ranging from basic performance enhancements, such as high accuracy navigation for
Thematic Accuracy of the NLCD 2001 Land Cover for the Conterminous United States
The land-cover thematic accuracy of NLCD 2001 was assessed from a probability sample of 15,000 pixels. Nationwide, NLCD 2001 overall Anderson Level II and Level I accuracies were 78.7% and 85.3%, respectively. By comparison, overall accuracies at Level II and Level I for the NLCD ...
Introducing Graduate Students to High-Resolution Mass Spectrometry (HRMS) Using a Hands-On Approach
ERIC Educational Resources Information Center
Stock, Naomi L.
2017-01-01
High-resolution mass spectrometry (HRMS) features both high resolution and high mass accuracy and is a powerful tool for the analysis and quantitation of compounds, determination of elemental compositions, and identification of unknowns. A hands-on laboratory experiment for upper-level undergraduate and graduate students to investigate HRMS is…
Adequacy of Using a Three-Item Questionnaire to Determine Zygosity in Chinese Young Twins.
Ho, Connie Suk-Han; Zheng, Mo; Chow, Bonnie Wing-Yin; Wong, Simpson W L; Lim, Cadmon K P; Waye, Mary M Y
2017-03-01
The present study examined the adequacy of a three-item parent questionnaire in determining the zygosity of young Chinese twins and whether there was any association between parent response accuracy and demographic variables. The sample consisted of 334 pairs of same-sex Chinese twins aged from 3 to 11 years. Three scoring methods, namely the summed score, logistic regression, and decision tree, were employed to evaluate parent response accuracy of twin zygosity against single nucleotide polymorphism (SNP) information. The results showed that all three methods achieved a high level of accuracy, ranging from 91% to 93%, which was comparable to the accuracy rates in previous Chinese twin studies. Correlation results also showed that the higher the parents' education level or the family income, the more likely parents were to report correctly whether their twins are identical or fraternal. The present findings confirmed the validity of using a three-item parent questionnaire to determine twin zygosity in a Chinese school-aged twin sample.
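Of the three scoring methods compared, the summed score is the simplest. A hypothetical coding of it follows; the item scale (0-2 per item) and the cutoff are invented for illustration and are not taken from the study.

```python
# Hypothetical summed-score rule for a three-item zygosity questionnaire.
# Item coding (0 = not alike ... 2 = very alike) and the cutoff of 4 are
# illustrative assumptions, not the study's calibrated values.
def classify_zygosity(item_scores, cutoff=4):
    """item_scores: three integers, each 0 (not alike) to 2 (very alike)."""
    if len(item_scores) != 3:
        raise ValueError("expected exactly three questionnaire items")
    if any(s not in (0, 1, 2) for s in item_scores):
        raise ValueError("each item must be coded 0, 1, or 2")
    total = sum(item_scores)
    return "monozygotic" if total >= cutoff else "dizygotic"
```

In the study itself the cutoff (or the regression/tree equivalent) was validated against SNP-determined zygosity, which is what the 91-93% accuracy figures refer to.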
Leegon, Jeffrey; Aronsky, Dominik
2006-01-01
The healthcare environment is constantly changing. Probabilistic clinical decision support systems need to recognize and incorporate the changing patterns and adjust the decision model to maintain high levels of accuracy. Using data from >75,000 ED patients during a 19-month study period, we examined the impact of various static and dynamic training strategies on a decision support system designed to predict hospital admission status for ED patients. Training durations ranged from 1 to 12 weeks. During the study period, major institutional changes occurred that affected the system's performance level. The average area under the receiver operating characteristic curve was higher and more stable when longer training periods were used. The system showed higher accuracy when retrained and updated with more recent data, compared with a static training period. To adjust for temporal trends, the accuracy of decision support systems can benefit from longer training periods and from retraining with more recent data.
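The static-versus-dynamic training contrast can be sketched with a toy predictor. The sliding window and the running-mean "model" below are stand-ins for the study's actual probabilistic decision model; after an abrupt institutional change shifts the admission rate, only the retrained predictor tracks the new level.

```python
# Toy contrast of static vs. dynamic (sliding-window) training. The "model"
# is just a running mean of weekly admission rates; window length and data
# are illustrative, not the study's.
def static_vs_dynamic(rates, window=4):
    """rates: weekly admission rates. Returns (static_preds, dynamic_preds)
    for every week after the initial training window."""
    static_mean = sum(rates[:window]) / window  # trained once, never updated
    static_preds, dynamic_preds = [], []
    for t in range(window, len(rates)):
        static_preds.append(static_mean)
        # dynamic model: retrain each week on the most recent window
        recent = rates[t - window:t]
        dynamic_preds.append(sum(recent) / window)
    return static_preds, dynamic_preds
```

With a simulated regime shift (e.g., weekly rates jumping from 0.2 to 0.4), the static predictor keeps emitting the stale pre-shift estimate while the dynamic one converges to the new rate within one window, mirroring the paper's finding that retraining with recent data stabilizes accuracy.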
Effect of high altitude on blood glucose meter performance.
Fink, Kenneth S; Christensen, Dale B; Ellsworth, Allan
2002-01-01
Participation in high-altitude wilderness activities may expose persons to extreme environmental conditions, and for those with diabetes mellitus, euglycemia is important to ensure safe travel. We conducted a field assessment of the precision and accuracy of seven commonly used blood glucose meters while mountaineering on Mount Rainier, located in Washington State (elevation 14,410 ft). At various elevations each climber-subject used the randomly assigned device to measure the glucose level of capillary blood and three different concentrations of standardized control solutions, and a venous sample was also collected for later glucose analysis. Ordinary least squares regression was used to assess the effect of elevation and of other environmental potential covariates on the precision and accuracy of blood glucose meters. Elevation affects glucometer precision (p = 0.08), but becomes less significant (p = 0.21) when adjusted for temperature and relative humidity. The overall effect of elevation was to underestimate glucose levels by approximately 1-2% (unadjusted) for each 1,000 ft gain in elevation. Blood glucose meter accuracy was affected by elevation (p = 0.03), temperature (p < 0.01), and relative humidity (p = 0.04) after adjustment for the other variables. The interaction between elevation and relative humidity had a meaningful but not statistically significant effect on accuracy (p = 0.07). Thus, elevation, temperature, and relative humidity affect blood glucose meter performance, and elevated glucose levels are more greatly underestimated at higher elevations. Further research will help to identify which blood glucose meters are best suited for specific environments.
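The elevation effect reported above comes from an ordinary-least-squares fit. A self-contained sketch of the closed-form simple-regression formulas follows; the data in the usage example are fabricated for illustration and are not the study's measurements.

```python
# Closed-form simple linear regression (OLS), of the kind used to regress
# glucose-meter error on elevation. Pure Python; no external libraries.
def ols_slope_intercept(xs, ys):
    """Fit y = slope*x + intercept by least squares; return (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)                       # variance term
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))  # covariance term
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x
```

For example, fitting fabricated percent-error readings against elevation in thousands of feet would recover a slope near the -1 to -2% per 1,000 ft underestimation the abstract describes; the study's actual analysis additionally adjusted for temperature and relative humidity.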
Protein classification based on text document classification techniques.
Cheng, Betty Yee Man; Carbonell, Jaime G; Klein-Seetharaman, Judith
2005-03-01
The need for accurate, automated protein classification methods continues to increase as advances in biotechnology uncover new proteins. G-protein coupled receptors (GPCRs) are a particularly difficult superfamily of proteins to classify due to extreme diversity among its members. Previous comparisons of BLAST, k-nearest neighbor (k-NN), hidden Markov model (HMM) and support vector machine (SVM) using alignment-based features have suggested that classifiers at the complexity of SVM are needed to attain high accuracy. Here, analogous to document classification, we applied Decision Tree and Naive Bayes classifiers with chi-square feature selection on counts of n-grams (i.e. short peptide sequences of length n) to this classification task. Using the GPCR dataset and evaluation protocol from the previous study, the Naive Bayes classifier attained an accuracy of 93.0% and 92.4% in level I and level II subfamily classification, respectively, while SVM has a reported accuracy of 88.4% and 86.3%. This is a 39.7% and 44.5% reduction in residual error for level I and level II subfamily classification, respectively. The Decision Tree, while inferior to SVM, outperforms HMM in both level I and level II subfamily classification. For those GPCR families whose profiles are stored in the Protein FAMilies database of alignments and HMMs (PFAM), our method performs comparably to a search against those profiles. Finally, our method can be generalized to other protein families by applying it to the superfamily of nuclear receptors with 94.5%, 97.8% and 93.6% accuracy in family, level I and level II subfamily classification, respectively. Copyright 2005 Wiley-Liss, Inc.
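A toy version of the core technique, multinomial Naive Bayes over n-gram counts with Laplace smoothing, is sketched below. Chi-square feature selection is omitted, and the sequences and class labels are fabricated; this illustrates the document-classification analogy, not the paper's exact pipeline.

```python
# Toy multinomial Naive Bayes over peptide n-gram counts (Laplace smoothing).
# Sequences and classes are fabricated; chi-square feature selection omitted.
from collections import Counter, defaultdict
import math

def ngrams(seq, n=2):
    """Overlapping substrings of length n, e.g. 'AAAG' -> ['AA','AA','AG']."""
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

class NgramNaiveBayes:
    def fit(self, sequences, labels, n=2):
        self.n = n
        self.class_counts = Counter(labels)          # class priors (as counts)
        self.ngram_counts = defaultdict(Counter)     # per-class n-gram counts
        vocab = set()
        for seq, label in zip(sequences, labels):
            grams = ngrams(seq, n)
            self.ngram_counts[label].update(grams)
            vocab.update(grams)
        self.vocab_size = len(vocab)
        return self

    def predict(self, seq):
        def log_posterior(label):
            counts = self.ngram_counts[label]
            total = sum(counts.values())
            lp = math.log(self.class_counts[label])  # log prior (unnormalized)
            for g in ngrams(seq, self.n):
                # Laplace-smoothed multinomial likelihood
                lp += math.log((counts[g] + 1) / (total + self.vocab_size))
            return lp
        return max(self.class_counts, key=log_posterior)
```

The paper's contribution is that even this simple classifier, given n-gram count features, matched or beat SVMs on GPCR subfamily classification; the sketch above only demonstrates the mechanics.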
Plotnikov, Nikolay V
2014-08-12
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
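The free-energy perturbation and linear-response averaging referred to above can be written explicitly. These are the standard textbook forms (with $\beta = 1/k_B T$ and $\Delta U = E_2 - E_1$ the gap between target and reference potentials), not equations quoted from the paper:

```latex
% Zwanzig free-energy perturbation identity (reference ensemble 1 -> target 2)
\Delta G_{1\to 2} \;=\; -\beta^{-1}\,
  \ln \left\langle e^{-\beta \Delta U} \right\rangle_{1},
\qquad \Delta U = E_2 - E_1 .

% Linear response approximation: average the energy gap over both
% endpoint ensembles instead of evaluating the exponential average
\Delta G_{1\to 2} \;\approx\; \tfrac{1}{2}
  \left( \left\langle \Delta U \right\rangle_{1}
       + \left\langle \Delta U \right\rangle_{2} \right).
```

The multistep variant used for the positioning step evaluates such linear-response estimates over a sequence of intermediate potentials, which is what makes it simple to implement while reproducing thermodynamic-integration results.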
Iannaccone, Mario; Gili, Sebastiano; De Filippo, Ovidio; D'Amico, Salvatore; Gagliardi, Marco; Bertaina, Maurizio; Mazzilli, Silvia; Rettegno, Sara; Bongiovanni, Federica; Gatti, Paolo; Ugo, Fabrizio; Boccuzzi, Giacomo G; Colangelo, Salvatore; Prato, Silvia; Moretti, Claudio; D'Amico, Maurizio; Noussan, Patrizia; Garbo, Roberto; Hildick-Smith, David; Gaita, Fiorenzo; D'Ascenzo, Fabrizio
2018-01-01
Non-invasive ischaemia tests and biomarkers are widely adopted to rule out acute coronary syndrome in the emergency department. Their diagnostic accuracy has yet to be precisely defined. Medline, Cochrane Library CENTRAL, EMBASE and Biomed Central were systematically screened (start date 1 September 2016, end date 1 December 2016). Prospective studies (observational or randomised controlled trials) comparing functional/imaging or biochemical tests for patients presenting with chest pain to the emergency department were included. Overall, 77 studies were included, for a total of 49,541 patients (mean age 59.9 years). Fast and six-hour highly sensitive troponin T protocols did not show significant differences in their ability to detect acute coronary syndromes, reporting a sensitivity and specificity of 0.89 (95% confidence interval 0.79-0.94) and 0.84 (0.74-0.90) vs 0.89 (0.78-0.94) and 0.83 (0.70-0.92), respectively. The addition of copeptin to troponin increased sensitivity and reduced specificity, without improving diagnostic accuracy. The diagnostic value of non-invasive tests for patients without troponin increase was then assessed. Coronary computed tomography showed the highest level of diagnostic accuracy (sensitivity 0.93 (0.81-0.98) and specificity 0.90 (0.93-0.94)), along with myocardial perfusion scintigraphy (sensitivity 0.85 (0.77-0.91) and specificity 0.92 (0.83-0.96)). Stress echography was inferior to coronary computed tomography but non-inferior to myocardial perfusion scintigraphy, while exercise testing showed the lowest level of diagnostic accuracy. Fast and six-hour highly sensitive troponin T protocols provide an overall similar level of diagnostic accuracy in detecting acute coronary syndrome. Among the non-invasive ischaemia tests for patients without troponin increase, coronary computed tomography and myocardial perfusion scintigraphy showed the highest sensitivity and specificity.
Liew, Jeffrey; Chen, Qi; Hughes, Jan N.
2009-01-01
The joint contributions of child effortful control (using inhibitory control and task accuracy as behavioral indices) and positive teacher-student relationships at first grade on reading and mathematics achievement at second grade were examined in 761 children who were predominantly from low-income and ethnic minority backgrounds and assessed to be academically at-risk at entry to first grade. Analyses accounted for clustering effects, covariates, baselines of effortful control measures, and prior levels of achievement. Even with such conservative statistical controls, interactive effects were found for task accuracy and positive teacher-student relationships on future achievement. Results suggest that task accuracy served as a protective factor so that children with high task accuracy performed well academically despite not having positive teacher-student relationships. Further, positive teacher-student relationships served as a compensatory factor so that children with low task accuracy performed just as well as those with high task accuracy if they were paired with a positive and supportive teacher. Importantly, results indicate that the influence of positive teacher-student relationships on future achievement was most pronounced for students with low effortful control on tasks that require fine motor skills, accuracy, and attention-related skills. Study results have implications for narrowing achievement disparities for academically at-risk children. PMID:20161421
Parametric diagnosis of the adaptive gas path in the automatic control system of the aircraft engine
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2017-01-01
This paper presents an adaptive multimode mathematical model of the gas-turbine aircraft engine (GTE) embedded in the automatic control system (ACS). The mathematical model is based on the throttle performances and is characterized by high accuracy of engine parameter identification in stationary and dynamic modes. The proposed on-board engine model is a state-space, linearized, low-level simulation. Engine health is identified through the influence coefficient matrix, which is determined by the GTE high-level mathematical model based on measurements of gas-dynamic parameters. In the automatic control algorithm, the sum of squared deviations between the parameters of the mathematical model and the real GTE is minimized. The proposed mathematical model is used effectively for detecting gas path defects in on-line GTE health monitoring. The accuracy of the on-board mathematical model embedded in the ACS determines the quality of adaptive control and the reliability of the engine. To improve identification accuracy and ensure robust solutions, the Monte Carlo numerical method was used. A parametric diagnostic algorithm based on the LPτ sequence was developed and tested. Analysis of the results suggests that the developed algorithms achieve higher identification accuracy and reliability than similar models used in practice.
A Comparison of Actual and Perceived Sexual Risk among Older Adults
Syme, Maggie L.; Cohn, Tracy J.; Barnack-Tavlaris, Jessica
2017-01-01
Sexual risk among older adults (OAs) is prevalent, though little is known about the accuracy of sexual risk perceptions. Thus, the aim was to determine the accuracy of sexual risk perceptions among OAs by examining concordance between self-reported sexual risk behaviors and perceived risk. Data on OAs aged 50 to 92 were collected via Amazon Mechanical Turk. Frequency of sexual risk behaviors (past 6 months) were reported along with perceived risk (i.e., STI susceptibility). Accuracy categories (accurate, underestimated, overestimated) were established based on dis/concordance between risk levels (low, moderate, high) and perceived risk (not susceptible, somewhat susceptible, very susceptible). Approximately half of the sample reported engaging in vaginal (49%) and/or oral sex (43%) without a condom in the past 6 months. However, approximately two-thirds of the sample indicated they were “not susceptible” to STIs. No relationship was found between risk behaviors and risk perceptions, and approximately half (48.1%) of OAs in the sample underestimated their risk. Accuracy was found to decrease as sexual risk level increased, with 93.1% of high risk OAs underestimating their risk. Several sexual risk behaviors are prevalent among OAs, particularly men. However, perception of risk is often inaccurate and warrants attention. PMID:26813853
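The three-way accuracy categories defined above reduce to an ordinal comparison between behavioral risk level and perceived susceptibility. A hypothetical coding of that concordance rule:

```python
# Hypothetical coding of the study's accuracy categories: compare a 3-level
# behavioral risk rating with a 3-level perceived-susceptibility rating.
RISK_LEVELS = {"low": 0, "moderate": 1, "high": 2}
PERCEIVED = {
    "not susceptible": 0,
    "somewhat susceptible": 1,
    "very susceptible": 2,
}

def perception_accuracy(risk_level, perceived):
    """Return 'accurate', 'underestimated', or 'overestimated'."""
    actual = RISK_LEVELS[risk_level]
    felt = PERCEIVED[perceived]
    if felt == actual:
        return "accurate"
    return "underestimated" if felt < actual else "overestimated"
```

Under this rule, a high-risk respondent reporting "not susceptible" is counted as underestimating, which is the pattern the study found in 93.1% of its high-risk older adults.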
NASA Astrophysics Data System (ADS)
Wang, Dong
2016-03-01
Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent unexpected gear failure, because gear cracks lead to gear tooth breakage. Signal processing based methods mainly require expertise to interpret gear fault signatures, which ordinary users usually cannot easily provide. To identify different gear crack levels automatically, intelligent gear crack identification methods should be developed. Previous case studies experimentally proved that K-nearest neighbors based methods exhibit high prediction accuracies for identifying 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance the prediction accuracies of existing K-nearest neighbors based methods and to extend identification from 3 to 5 different gear crack levels, redundant statistical features are constructed using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction is conducted to obtain new significant statistical features. Finally, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments demonstrates that the developed method provides higher prediction accuracies than the existing K-nearest neighbors based methods for recognizing different gear crack levels under different motor speeds and loads.
Based on the new significant statistical features, several other popular statistical models, including linear discriminant analysis, quadratic discriminant analysis, classification and regression trees, and the naive Bayes classifier, are compared with the developed method. The results show that the developed method achieves the highest prediction accuracies among these statistical models. Additionally, the selection of the number of new significant features and the parameter selection of K-nearest neighbors are thoroughly investigated.
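The pipeline described above — high-dimensional statistical features, dimensionality reduction, then a K-nearest neighbors vote — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the db44 wavelet features are replaced by random stand-ins, and PCA-via-SVD is an assumed choice of reduction method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 620-dimensional redundant statistical
# features (5 crack levels, 40 samples each); the real features come
# from db44 wavelet packet coefficients, which are not reproduced here.
n_classes, n_per, n_dim = 5, 40, 620
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per, n_dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

# Dimensionality reduction via PCA (SVD on centered data), keeping the
# leading components as the "new significant statistical features".
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T          # 620 -> 10 features

def knn_predict(train_X, train_y, query, k=5):
    """Plain K-nearest-neighbors vote with Euclidean distance."""
    d = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Leave-one-out evaluation on the synthetic set.
preds = np.array([knn_predict(np.delete(Z, i, axis=0),
                              np.delete(y, i), Z[i])
                  for i in range(len(y))])
accuracy = (preds == y).mean()
```

On well-separated synthetic classes like these, the leave-one-out accuracy is near 1.0; the point is the structure of the pipeline, not the number.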
Okasha, Hussein Hassan; Ashry, Mahmoud; Imam, Hala M. K.; Ezzat, Reem; Naguib, Mohamed; Farag, Ali H.; Gemeie, Emad H.; Khattab, Hani M.
2015-01-01
Background and Objective: The addition of fine-needle aspiration (FNA) to different imaging modalities has raised the accuracy for diagnosis of cystic pancreatic lesions. We aim to differentiate benign from neoplastic pancreatic cysts by evaluating cyst fluid carcinoembryonic antigen (CEA), carbohydrate antigen (CA19-9), and amylase levels and cytopathological examination, including mucin stain. Patients and Methods: This prospective study included 77 patients with pancreatic cystic lesions. Ultrasound-FNA (US-FNA) or endoscopic ultrasound-FNA (EUS-FNA) was done according to the accessibility of the lesion. The aspirated specimens were subjected to cytopathological examination (including mucin staining), tumor markers (CEA, CA19-9), and amylase level. Results: A cyst fluid CEA value of 279 or more showed high statistical significance in differentiating mucinous from nonmucinous lesions with sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of 73%, 60%, 50%, 80%, and 65%, respectively. Cyst amylase could differentiate between neoplastic and nonneoplastic cysts at a cut-off level of 1043 with sensitivity of 58%, specificity of 75%, PPV of 73%, NPV of 60%, and accuracy of 66%. CA19-9 could not differentiate between neoplastic and nonneoplastic cysts. Mucin examination showed a sensitivity of 85%, specificity of 95%, PPV of 92%, NPV of 91%, and accuracy of 91% in differentiating mucinous from nonmucinous lesions. Cytopathological examination showed a sensitivity of 81%, specificity of 94%, PPV of 94%, NPV of 83%, and accuracy of 88%. Conclusion: US or EUS-FNA with analysis of cyst CEA level, CA19-9, amylase, mucin stain, and cytopathological examination increases the diagnostic accuracy of cystic pancreatic lesions. PMID:26020048
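All of the sensitivity, specificity, PPV, NPV, and accuracy figures quoted above derive from a 2×2 confusion table. A minimal sketch of the standard definitions, using hypothetical counts (the abstract reports only the derived percentages, not the underlying table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test-performance definitions, returned as rounded %."""
    sens = tp / (tp + fn)              # true positive rate
    spec = tn / (tn + fp)              # true negative rate
    ppv  = tp / (tp + fp)              # positive predictive value
    npv  = tn / (tn + fn)              # negative predictive value
    acc  = (tp + tn) / (tp + fp + fn + tn)
    return {k: round(100 * v) for k, v in
            (("sensitivity", sens), ("specificity", spec),
             ("PPV", ppv), ("NPV", npv), ("accuracy", acc))}

# Hypothetical counts for a cohort of 32 cysts.
m = diagnostic_metrics(tp=11, fp=1, fn=2, tn=18)
```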
Wei, Q; Hu, Y
2009-01-01
The major hurdle for segmenting lung lobes in computed tomographic (CT) images is to identify fissure regions, which encase lobar fissures. Accurate identification of these regions is difficult due to the variable shape and appearance of the fissures, along with the low contrast and high noise associated with CT images. This paper studies the effectiveness of two texture analysis methods - the gray level co-occurrence matrix (GLCM) and the gray level run length matrix (GLRLM) - in identifying fissure regions from isotropic CT image stacks. To classify GLCM and GLRLM texture features, we applied a feed-forward back-propagation neural network and achieved the best classification accuracy utilizing 16 quantized levels for computing the GLCM and GLRLM texture features and 64 neurons in the input/hidden layers of the neural network. Tested on isotropic CT image stacks of 24 patients with pathologic lungs, we obtained accuracies of 86% and 87% for identifying fissure regions using the GLCM and GLRLM methods, respectively. These accuracies compare favorably with surgeons'/radiologists' accuracy of 80% for identifying fissure regions in clinical settings. This shows promising potential for segmenting lung lobes using the GLCM and GLRLM methods.
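A gray level co-occurrence matrix of the kind used above can be computed directly. The sketch below quantizes an image to 16 gray levels (the setting the authors found best), builds a GLCM for one pixel offset, and derives a few classic Haralick-style features; the particular features and offset are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def glcm_features(img, levels=16, offset=(0, 1)):
    """Quantize `img` to `levels` gray levels, build a normalized gray
    level co-occurrence matrix for one (row, col) offset, and return a
    few classic texture features (contrast, energy, homogeneity)."""
    q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels)
    q = np.clip(q, 0, levels - 1).astype(int)
    dr, dc = offset
    glcm = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows - dr):           # count co-occurring level pairs
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    glcm /= glcm.sum()                   # normalize to joint probabilities
    i, j = np.indices((levels, levels))
    return {
        "contrast": float(((i - j) ** 2 * glcm).sum()),
        "energy": float((glcm ** 2).sum()),
        "homogeneity": float((glcm / (1.0 + np.abs(i - j))).sum()),
    }

# Sanity check: a flat image co-occurs only with itself, so contrast is
# zero and energy is 1.
f = glcm_features(np.full((8, 8), 100))
```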
Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Tumeo, Antonino; Secchi, Simone
Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only 25 to 200 times slower than real time.
Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Z.; Hong, J.; Zhang, J.
2013-12-15
The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument to analyze rotational accuracy of high-precision bearings of machine tools under working conditions was developed. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle was applied to spin the test bearing. Operating conditions could be simulated effectively because of the large axial loading capacity. An air-cylinder, controlled by a proportional pressure regulator, was applied to drive an air-bearing that applied non-contact, precisely controlled axial forces. The test bearing was subjected only to axial loading and rotational constraint; the five remaining degrees of freedom were left completely unconstrained and uninfluenced by the instrument's structure. Dual capacity displacement sensors with 10 nm resolution were applied to measure the error motion of the spindle using a double-probe error separation method. This enabled the separation of the spindle's error motion from the measurement results of the test bearing, which were measured using two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure was used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurement results at various axial loadings and speeds showed that the standard deviations of the measurements' repeatability and accuracy were less than 1% and 2%. Future studies will analyze the relationship between geometrical errors and NRRO, such as ball diameter differences and geometrical errors in the grooves of the rings.
Gallium 67 citrate scanning and serum angiotensin converting enzyme levels in sarcoidosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, R.G.; Bekerman, C.; Sicilian, L.
1982-09-01
Gallium 67 citrate scans and serum angiotensin converting enzyme (ACE) levels were obtained in 54 patients with sarcoidosis and analyzed in relation to clinical manifestations. ⁶⁷Ga scans were abnormal in 97% of patients with clinically active disease (n = 30) and in 71% of patients with inactive disease (n = 24). Serum ACE levels were abnormally high (2 standard deviations above the control mean) in 73% of patients with clinically active disease and in 54% of patients with inactive disease. Serum ACE levels correlated significantly with ⁶⁷Ga uptake score (r = .436; p < .005). The frequency of abnormal ⁶⁷Ga scans and elevated serum ACE levels suggests that inflammatory activity with little or no clinical expression is common in sarcoidosis. Abnormal ⁶⁷Ga scans were highly sensitive (97%) but had poor specificity (29%) for clinical disease activity. The accuracy of negative prediction of clinical activity by normal scans (87%) was better than the accuracy of positive prediction of clinical activity by abnormal scans (63%). ⁶⁷Ga scans can be used to support the clinical identification of inactive sarcoidosis.
Computerized lung cancer malignancy level analysis using 3D texture features
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang; Zhang, Jianying; Qian, Wei
2016-03-01
Based on the likelihood of malignancy, the nodules in the Lung Image Database Consortium (LIDC) database are classified into five different levels. In this study, we tested the possibility of using three-dimensional (3D) texture features to identify the malignancy level of each nodule. Five groups of features were implemented and tested on 172 nodules with confident malignancy levels from four radiologists. These five feature groups are: grey level co-occurrence matrix (GLCM) features, local binary pattern (LBP) features, scale-invariant feature transform (SIFT) features, steerable features, and wavelet features. Because of the high dimensionality of our proposed features, multidimensional scaling (MDS) was used for dimension reduction. RUSBoost was applied to the extracted features for classification, due to its advantages in handling imbalanced datasets. Each group of features and the final combined features were used to classify nodules highly suspicious for cancer (level 5) and moderately suspicious (level 4). The results showed that the area under the curve (AUC) and accuracy were 0.7659 and 0.8365 when using the finalized features. These features were also tested on differentiating benign and malignant cases, and the reported AUC and accuracy were 0.8901 and 0.9353.
Wagstaff, Graham F; MacVeigh, Jo; Boston, Richard; Scott, Lisa; Brunas-Wagstaff, Jo; Cole, Jon
2003-01-01
The authors conducted 2 studies to assess the effects of levels of violence, the presence of a weapon, and the age of the witness on the accuracy of eyewitness testimony in real-life crime situations. Descriptions of offenders were taken from eyewitnesses' statements obtained by the police and were compared with the actual details of the same offenders obtained on arrest. The results showed that eyewitnesses tended to recall the offenders' hairstyle and hair color most accurately. None of the effects for the level of violence, the presence of a weapon, or age approached statistical significance, with the exception that, in the 1st study, accuracy in describing hair color was better when associated with high levels of violence and in cases of rape. It is argued that caution must be exercised in generalizing from laboratory studies of eyewitness testimony to actual crime situations.
Adaptation and Fallibility in Experts' Judgments of Novice Performers
ERIC Educational Resources Information Center
Larson, Jeffrey S.; Billeter, Darron M.
2017-01-01
Competition judges are often selected for their expertise, under the belief that a high level of performance expertise should enable accurate judgments of the competitors. Contrary to this assumption, we find evidence that expertise can reduce judgment accuracy. Adaptation level theory proposes that discriminatory capacity decreases with greater…
High-accurate optical fiber liquid level sensor
NASA Astrophysics Data System (ADS)
Sun, Dexing; Chen, Shouliu; Pan, Chao; Jin, Henghuan
1991-08-01
A highly accurate optical fiber liquid level sensor is presented. A single-chip microcomputer is used to process and control the signal. This kind of sensor is characterized by self-security and is explosion-proof, so it can be applied in any liquid level detecting area, especially in the oil and chemical industries. The theory and experiments concerning how to improve the measurement accuracy are described. The relative error over a 10 m measurement range is within 0.01%.
Emergency positioning system accuracy with infrared LEDs in high-security facilities
NASA Astrophysics Data System (ADS)
Knoch, Sierra N.; Nelson, Charles; Walker, Owens
2017-05-01
Instantaneous personnel location presents a challenge in Department of Defense applications where high levels of security restrict real-time tracking of crew members. During emergency situations, command and control requires immediate accountability of all personnel. Current radio frequency (RF) based indoor positioning systems can be unsuitable due to RF leakage and electromagnetic interference with sensitively calibrated machinery on variable platforms like ships, submarines and high-security facilities. Infrared light provides a possible solution to this problem. This paper proposes and evaluates an indoor line-of-sight positioning system comprising IR LEDs and high-sensitivity CMOS camera receivers. In this system the movement of the LEDs is captured by the camera, uploaded and analyzed; the highest point of power is located and plotted to create a blueprint of crewmember location. The results evaluate accuracy as a function of both wavelength and environmental conditions. Further research will evaluate the accuracy of the LED transmitter and CMOS camera receiver system. Transmissions in both the 780 and 850 nm IR bands are analyzed.
NASA Technical Reports Server (NTRS)
Kranz, David William
2010-01-01
The goal of this research project was to compare and contrast the selected materials used in step measurements during pre-fits of thermal protection system tiles and to compare and contrast the accuracy of measurements made using these selected materials. The reasoning for conducting this test was to obtain a clearer understanding of which of these materials may yield the highest accuracy rate of exacting measurements in comparison to the completed tile bond. These results in turn will be presented to United Space Alliance and Boeing North America for their own analysis and determination. Aerospace structures operate under extreme thermal environments. Hot external aerothermal environments in high Mach number flights lead to high structural temperatures. The differences between tile heights from one to another are very critical during these high Mach reentries. The Space Shuttle Thermal Protection System is a very delicate and highly calculated system. The thermal tiles on the ship are measured to within an accuracy of 0.001 of an inch. The accuracy of these tile measurements is critical to a successful reentry of an orbiter. This is why it is necessary to find the most accurate method for measuring the height of each tile in comparison to each of the other tiles. The test results indicated that there were indeed differences among the selected materials used in step measurements during pre-fits of Thermal Protection System tiles, and that Bees' Wax yielded a higher rate of accuracy when compared to the baseline test. In addition, testing for the effect of experience level on accuracy yielded no evidence of a difference. Lastly, the use of the Trammel tool over the Shim Pack yielded variable differences in those tests.
Development and calibration of an accurate 6-degree-of-freedom measurement system with total station
NASA Astrophysics Data System (ADS)
Gao, Yang; Lin, Jiarui; Yang, Linghui; Zhu, Jigui
2016-12-01
To meet the demands of high accuracy, long range and portable use in large-scale metrology for pose measurement, this paper develops a 6-degree-of-freedom (6-DOF) measurement system based on a total station, utilizing its advantages of long range and relatively high accuracy. The cooperative target sensor, which is mainly composed of a pinhole prism, an industrial lens, a camera and a biaxial inclinometer, is designed to be portable in use. Subsequently, a precise mathematical model is proposed from the input variables observed by the total station, imaging system and inclinometer to the six output pose variables. The model must be calibrated at two levels: the intrinsic parameters of the imaging system, and the rotation matrix between the coordinate systems of the camera and the inclinometer. Corresponding approaches are presented for each. For the first level, we introduce a precise two-axis rotary table as a calibration reference. For the second level, we propose a calibration method based on varying the pose of a rigid body carrying the target sensor and a reference prism. Finally, through simulations and various experiments, the feasibility of the measurement model and calibration methods is validated, and the measurement accuracy of the system is evaluated.
Park, Sangsoo; Spirduso, Waneen; Eakin, Tim; Abraham, Lawrence
2018-01-01
The authors investigated how varying the required low-level forces and the direction of force change affect accuracy and variability of force production in a cyclic isometric pinch force tracking task. Eighteen healthy right-handed adult volunteers performed the tracking task over 3 different force ranges. Root mean square error and coefficient of variation were higher at lower force levels and during minimum reversals compared with maximum reversals. Overall, the thumb showed greater root mean square error and coefficient of variation scores than did the index finger during maximum reversals, but not during minimum reversals. The observed impaired performance during minimum reversals might originate from history-dependent mechanisms of force production and highly coupled 2-digit performance.
The performance of flash glucose monitoring in critically ill patients with diabetes.
Ancona, Paolo; Eastwood, Glenn M; Lucchetta, Luca; Ekinci, Elif I; Bellomo, Rinaldo; Mårtensson, Johan
2017-06-01
Frequent glucose monitoring may improve glycaemic control in critically ill patients with diabetes. We aimed to assess the accuracy of a novel subcutaneous flash glucose monitor (FreeStyle Libre [Abbott Diabetes Care]) in these patients. We applied the FreeStyle Libre sensor to the upper arm of eight patients with diabetes in the intensive care unit and obtained hourly flash glucose measurements. Duplicate recordings were obtained to assess test-retest reliability. The reference glucose level was measured in arterial or capillary blood. We determined numerical accuracy using Bland-Altman methods, the mean absolute relative difference (MARD) and whether the International Organization for Standardization (ISO) and Clinical and Laboratory Standards Institute Point of Care Testing (CLSI POCT) criteria were met. Clarke error grid (CEG) and surveillance error grid (SEG) analyses were used to determine clinical accuracy. We compared 484 duplicate flash glucose measurements and observed a Pearson correlation coefficient of 0.97 and a coefficient of repeatability of 1.6 mmol/L. We studied 185 flash readings paired with arterial glucose levels, and 89 paired with capillary glucose levels. Using the arterial glucose level as the reference, we found a mean bias of 1.4 mmol/L (limits of agreement, -1.7 to 4.5 mmol/L). The MARD was 14% (95% CI, 12%-16%) and the proportion of measurements meeting ISO and CLSI POCT criteria was 64.3% and 56.8%, respectively. The proportions of values within a low-risk zone on CEG and SEG analyses were 97.8% and 99.5%, respectively. Using capillary glucose levels as the reference, we found that numerical and clinical accuracy were lower. The subcutaneous FreeStyle Libre blood glucose measurement system showed high test-retest reliability and acceptable accuracy when compared with arterial blood glucose measurement in critically ill patients with diabetes.
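The numerical-accuracy statistics named above (Bland-Altman bias and limits of agreement, and MARD) are straightforward to compute from paired readings. A minimal sketch with hypothetical paired values, not the study data:

```python
import numpy as np

def numerical_accuracy(sensor, reference):
    """Bland-Altman mean bias and 95% limits of agreement, plus the
    mean absolute relative difference (MARD, in %)."""
    sensor = np.asarray(sensor, float)
    reference = np.asarray(reference, float)
    diff = sensor - reference
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    loa = (bias - spread, bias + spread)
    mard = 100 * np.mean(np.abs(diff) / reference)
    return bias, loa, mard

# Hypothetical paired readings (mmol/L): flash sensor vs. arterial.
flash    = [8.1, 9.4, 6.8, 11.0, 7.5]
arterial = [7.0, 8.2, 6.1,  9.6, 6.9]
bias, loa, mard = numerical_accuracy(flash, arterial)
```

With these made-up values the sensor reads systematically high (positive bias), the same direction as reported in the study.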
Measuring water level in rivers and lakes from lightweight Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Bandini, Filippo; Jakobsen, Jakob; Olesen, Daniel; Reyna-Gutierrez, Jose Antonio; Bauer-Gottwein, Peter
2017-05-01
The assessment of hydrologic dynamics in rivers, lakes, reservoirs and wetlands requires measurements of water level, its temporal and spatial derivatives, and the extent and dynamics of open water surfaces. Motivated by the declining number of ground-based measurement stations, research efforts have been devoted to the retrieval of these hydraulic properties from spaceborne platforms in the past few decades. However, due to coarse spatial and temporal resolutions, spaceborne missions have several limitations when assessing the water level of terrestrial surface water bodies and determining complex water dynamics. Unmanned Aerial Vehicles (UAVs) can fill the gap between spaceborne and ground-based observations, and provide high spatial resolution and dense temporal coverage data, in quick turn-around time, using flexible payload design. This study focused on categorizing and testing sensors, which comply with the weight constraint of small UAVs (around 1.5 kg), capable of measuring the range to water surface. Subtracting the measured range from the vertical position retrieved by the onboard Global Navigation Satellite System (GNSS) receiver, we can determine the water level (orthometric height). Three different ranging payloads, which consisted of a radar, a sonar and an in-house developed camera-based laser distance sensor (CLDS), have been evaluated in terms of accuracy, precision, maximum ranging distance and beam divergence. After numerous flights, the relative accuracy of the overall system was estimated. A ranging accuracy better than 0.5% of the range and a maximum ranging distance of 60 m were achieved with the radar. The CLDS showed the lowest beam divergence, which is required to avoid contamination of the signal from interfering surroundings for narrow fields of view. With the GNSS system delivering a relative vertical accuracy better than 3-5 cm, water level can be retrieved with an overall accuracy better than 5-7 cm.
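The core water-level computation described above is a subtraction: the measured range to the water surface is removed from the UAV's vertical position. A minimal sketch; the geoid-undulation term is an assumption about how an orthometric height would be obtained from the GNSS ellipsoidal height, and all numbers are hypothetical.

```python
def water_level(gnss_ellipsoid_height, geoid_undulation, range_to_water):
    """Orthometric water level: subtract the ranging sensor's measured
    distance to the water surface from the UAV's orthometric height
    (ellipsoidal height minus geoid undulation). Units: metres."""
    return (gnss_ellipsoid_height - geoid_undulation) - range_to_water

# Hypothetical flight: UAV at 52.30 m ellipsoidal height, 38.20 m geoid
# undulation, radar range of 12.45 m to the water surface.
level = water_level(52.30, 38.20, 12.45)
```

The reported overall accuracy budget (5-7 cm) is then the combination of the ranging error (better than 0.5% of range for the radar) and the GNSS vertical error (3-5 cm).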
ExoMol molecular line lists XIX: high-accuracy computed hot line lists for H₂¹⁸O and H₂¹⁷O
NASA Astrophysics Data System (ADS)
Polyansky, Oleg L.; Kyuberis, Aleksandra A.; Lodi, Lorenzo; Tennyson, Jonathan; Yurchenko, Sergei N.; Ovsyannikov, Roman I.; Zobov, Nikolai F.
2017-04-01
Hot line lists for two isotopologues of water, H₂¹⁸O and H₂¹⁷O, are presented. The calculations employ newly constructed potential energy surfaces (PES), which take advantage of a novel method for using the large set of experimental energy levels for H₂¹⁶O to give high-quality predictions for H₂¹⁸O and H₂¹⁷O. This procedure greatly extends the energy range for which a PES can be accurately determined, allowing an accurate prediction of higher lying energy levels than are currently known from direct laboratory measurements. This PES is combined with a high-accuracy, ab initio dipole moment surface of water in the computation of all energy levels, transition frequencies and associated Einstein A coefficients for states with rotational excitation up to J = 50 and energies up to 30 000 cm⁻¹. The resulting HotWat78 line lists complement the well-used BT2 H₂¹⁶O line list. Full line lists are made available online as Supporting Information and at www.exomol.com.
Automatic anatomy recognition via multiobject oriented active shape models.
Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A
2010-12-01
This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) a multiobject generalization of the oriented active shape model (OASM) and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three-level dynamic programming algorithm: the first level operates at the pixel level and finds optimal oriented boundary segments between successive landmarks; the second operates at the landmark level and finds optimal locations for the landmarks; and the third operates at the object level and finds the optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and scale components) for the multiobject model that yields the smallest total boundary cost over all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of the pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number of objects and the objects' distribution and size in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%.
Typically, a recognition accuracy of ≥90% yielded a TPVF ≥95% and an FPVF ≤0.5%. Over the three data sets and over all tested objects, in 97% of the cases, the optimal solutions found by the proposed method constituted the true global optimum. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. A more spread out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimum solutions.
Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner.
Lu, David V; Brown, Randall H; Arumugam, Manimozhiyan; Brent, Michael R
2009-07-01
The most accurate way to determine the intron-exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than heuristics. We present Pairagon, a pair hidden Markov model-based cDNA-to-genome alignment program, as the most accurate aligner for sequences with high- and low-identity levels. We conducted a series of experiments testing alignment accuracy with varying sequence identity. We first created 'perfect' simulated cDNA sequences by splicing the sequences of exons in the reference genome sequences of fly and human. The complete reference genome sequences were then mutated to various degrees using a realistic mutation simulator and the perfect cDNAs were aligned to them using Pairagon and 12 other aligners. To validate these results with natural sequences, we performed cross-species alignment using orthologous transcripts from human, mouse and rat. We found that aligner accuracy is heavily dependent on sequence identity. For sequences with 100% identity, Pairagon achieved accuracy levels of >99.6%, with one quarter of the errors of any other aligner. Furthermore, for human/mouse alignments, which are only 85% identical, Pairagon achieved 87% accuracy, higher than any other aligner. Pairagon source and executables are freely available at http://mblab.wustl.edu/software/pairagon/
NASA Astrophysics Data System (ADS)
Cooper, H.; Zhang, C.; Sirianni, M.
2016-12-01
South Florida relies upon the health of the Everglades, the largest subtropical wetland in North America, as a vital source of water. Since the late 1800s, this imperiled ecosystem has been highly engineered to meet human needs of flood control and water use. The Comprehensive Everglades Restoration Plan (CERP) was initiated in 2000 to restore original water flows to the Everglades and improve overall ecosystem health, while also aiming to achieve balance with human water usage. Due to subtle changes in the Everglades terrain, elevation data with better vertical accuracy are needed to model groundwater and surface water levels that are integral to monitoring the effects of restoration under impacts such as sea-level rise. The current best available elevation datasets for the coastal Everglades include High Accuracy Elevation Data (HAED) and Florida Department of Emergency Management (FDEM) Light Detection and Ranging (LiDAR). However, the horizontal resolution of the HAED data is too coarse (~400 m) for fine-scale mapping, and the LiDAR data does not contain an accuracy assessment for coastal Everglades vegetation communities. The purpose of this study is to develop a framework for generating Digital Elevation Models with better vertical accuracy and horizontal resolution in the Flamingo District of Everglades National Park. In the framework, field work is conducted to collect RTK GPS and total station elevation measurements for mangrove swamp, coastal prairies, and freshwater marsh, and the proposed accuracy assessment and elevation modeling methodology is integrated with a Geographical Information System (GIS). It is anticipated that this study will provide more accurate models of the soil substrate elevation that can be used by restoration planners to better predict the future state of the Everglades ecosystem.
Devaney, John; Barrett, Brian; Barrett, Frank; Redmond, John; O'Halloran, John
2015-01-01
Quantification of spatial and temporal changes in forest cover is an essential component of forest monitoring programs. Due to its cloud free capability, Synthetic Aperture Radar (SAR) is an ideal source of information on forest dynamics in countries with near-constant cloud-cover. However, few studies have investigated the use of SAR for forest cover estimation in landscapes with highly sparse and fragmented forest cover. In this study, the potential use of L-band SAR for forest cover estimation in two regions (Longford and Sligo) in Ireland is investigated and compared to forest cover estimates derived from three national (Forestry2010, Prime2, National Forest Inventory), one pan-European (Forest Map 2006) and one global forest cover (Global Forest Change) product. Two machine-learning approaches (Random Forests and Extremely Randomised Trees) are evaluated. Both Random Forests and Extremely Randomised Trees classification accuracies were high (98.1-98.5%), with differences between the two classifiers being minimal (<0.5%). Increasing levels of post classification filtering led to a decrease in estimated forest area and an increase in overall accuracy of SAR-derived forest cover maps. All forest cover products were evaluated using an independent validation dataset. For the Longford region, the highest overall accuracy was recorded with the Forestry2010 dataset (97.42%) whereas in Sligo, highest overall accuracy was obtained for the Prime2 dataset (97.43%), although accuracies of SAR-derived forest maps were comparable. Our findings indicate that spaceborne radar could aid inventories in regions with low levels of forest cover in fragmented landscapes. The reduced accuracies observed for the global and pan-continental forest cover maps in comparison to national and SAR-derived forest maps indicate that caution should be exercised when applying these datasets for national reporting.
PMID:26262681
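The post-classification filtering effect noted above (estimated forest area shrinks while overall accuracy rises) can be illustrated with a simple 3x3 majority filter on a binary forest/non-forest raster. This is a generic sketch of the technique, not the exact filter used in the study:

```python
def majority_filter(grid, rounds=1):
    """Apply a 3x3 majority filter to a binary (0/1) raster.

    Isolated pixels (likely speckle in SAR classifications) are flipped
    to the locally dominant class; ties keep the original value.
    """
    h, w = len(grid), len(grid[0])
    for _ in range(rounds):
        out = [row[:] for row in grid]
        for i in range(h):
            for j in range(w):
                # Collect the 3x3 neighborhood, clipped at raster edges.
                vals = [grid[y][x]
                        for y in range(max(0, i - 1), min(h, i + 2))
                        for x in range(max(0, j - 1), min(w, j + 2))]
                ones = sum(vals)
                zeros = len(vals) - ones
                if ones > zeros:
                    out[i][j] = 1
                elif zeros > ones:
                    out[i][j] = 0
        grid = out
    return grid

# A single isolated "forest" pixel is removed, reducing mapped forest area:
print(majority_filter([[0, 0, 0], [0, 1, 0], [0, 0, 0]]))
```

Repeated rounds (`rounds > 1`) correspond to increasing levels of filtering, which removes progressively larger isolated patches.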
Fuller, Douglas O; Parenti, Michael S; Gad, Adel M; Beier, John C
2012-01-01
Irrigation along the Nile River has resulted in dramatic changes in the biophysical environment of Upper Egypt. In this study we used a combination of MODIS 250 m NDVI data and Landsat imagery to identify areas that changed from 2001-2008 as a result of irrigation and water-level fluctuations in the Nile River and nearby water bodies. We applied two different methods of time series analysis, principal component analysis (PCA) and harmonic decomposition (HD), to the MODIS 250 m NDVI images to derive simple three-class land cover maps, and then assessed their accuracy using a set of reference polygons derived from 30 m Landsat 5 and 7 imagery. We analyzed our MODIS 250 m maps against a new MODIS global land cover product (MOD12Q1 collection 5) to assess whether regionally specific mapping approaches are superior to a standard global product. Results showed that the accuracy of the PCA-based product was greater than the accuracy of either the HD or MOD12Q1 products for the years 2001, 2003, and 2008. However, the accuracy of the PCA product was only slightly better than that of MOD12Q1 for 2001 and 2003. Overall, the results suggest that our PCA-based approach produces high user's and producer's accuracies, although the MOD12Q1 product also showed consistently high accuracy. Overlay of the 2001-2008 PCA-based maps showed a net increase of 12 129 ha of irrigated vegetation, with the largest increase found from 2006-2008 around the Districts of Edfu and Kom Ombo. This result was unexpected in light of ambitious government plans to develop 336 000 ha of irrigated agriculture around the Toshka Lakes.
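Harmonic decomposition of an annual NDVI cycle reduces, in its simplest form, to extracting the mean and the first Fourier harmonic (amplitude and phase of the yearly cycle) from an evenly sampled series. A minimal sketch under that assumption (illustrative only, not the exact HD implementation used in the study):

```python
import cmath
import math

def first_harmonic(series):
    """Return (mean, amplitude, phase) of the first harmonic of an
    evenly sampled periodic series, via a single DFT coefficient."""
    n = len(series)
    mean = sum(series) / n
    # DFT coefficient at frequency 1 cycle per series length.
    c1 = sum(v * cmath.exp(-2j * math.pi * k / n) for k, v in enumerate(series))
    amplitude = 2 * abs(c1) / n
    phase = cmath.phase(c1)
    return mean, amplitude, phase

# A synthetic monthly NDVI series: baseline 0.5, seasonal swing of 0.2.
ndvi = [0.5 + 0.2 * math.cos(2 * math.pi * k / 12) for k in range(12)]
mean, amp, phase = first_harmonic(ndvi)
print(round(mean, 3), round(amp, 3))  # 0.5 0.2
```

Per-pixel mean, amplitude, and phase images like these are what distinguish irrigated vegetation (strong managed cycles) from desert and water in a three-class map.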
Diagnostic Accuracy of the Slump Test for Identifying Neuropathic Pain in the Lower Limb.
Urban, Lawrence M; MacNeil, Brian J
2015-08-01
Diagnostic accuracy study with nonconsecutive enrollment. To assess the diagnostic accuracy of the slump test for neuropathic pain (NeP) in those with low to moderate levels of chronic low back pain (LBP), and to determine whether accuracy of the slump test improves by adding anatomical or qualitative pain descriptors. Neuropathic pain has been linked with poor outcomes, likely due to inadequate diagnosis, which precludes treatment specific for NeP. Current diagnostic approaches are time consuming or lack accuracy. A convenience sample of 21 individuals with LBP, with or without radiating leg pain, was recruited. A standardized neurosensory examination was used to determine the reference diagnosis for NeP. Afterward, the slump test was administered to all participants. Reports of pain location and quality produced during the slump test were recorded. The neurosensory examination designated 11 of the 21 participants with LBP/sciatica as having NeP. The slump test displayed high sensitivity (0.91), moderate specificity (0.70), a positive likelihood ratio of 3.03, and a negative likelihood ratio of 0.13. Adding the criterion of pain below the knee significantly increased specificity to 1.00 (positive likelihood ratio = 11.9). Pain-quality descriptors did not improve diagnostic accuracy. The slump test was highly sensitive in identifying NeP within the study sample. Adding a pain-location criterion improved specificity. Combining the diagnostic outcomes was very effective in identifying all those without NeP and half of those with NeP. Limitations arising from the small and narrow spectrum of participants with LBP/sciatica sampled within the study prevent application of the findings to a wider population. Diagnosis, level 4-.
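The reported likelihood ratios follow directly from the standard 2x2 definitions. With 11 of 21 participants positive on the reference neurosensory examination, a table of TP=10, FN=1, FP=3, TN=7 reproduces the published figures; these counts are a reconstruction consistent with the abstract, not taken from the paper itself:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec),   # how much a positive test raises the odds
        "LR-": (1 - sens) / spec,   # how much a negative test lowers the odds
    }

m = diagnostic_metrics(tp=10, fn=1, fp=3, tn=7)
print(round(m["sensitivity"], 2), round(m["specificity"], 2),
      round(m["LR+"], 2), round(m["LR-"], 2))  # 0.91 0.7 3.03 0.13
```

Adding the below-knee pain criterion raises specificity to 1.00, which is why the positive likelihood ratio jumps (the abstract reports 11.9, computed against the adjusted sensitivity).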
ERIC Educational Resources Information Center
Korat, Ofra
2011-01-01
The relationship between mothers' and teachers' estimations of 60 children's literacy level and their actual performance were investigated in two different socio-economic status (SES) groups: low (LSES) and high (HSES). The children's reading (fluency, accuracy and comprehension) and spelling levels were measured. The mothers evaluated their own…
Sadeghi, Zahra; Testolin, Alberto
2017-08-01
In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
Development of a three-dimensional high-order strand-grids approach
NASA Astrophysics Data System (ADS)
Tong, Oisin
Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third order accuracy for low and high-Reynolds number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more wall time than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow.
This approach is combined with a curvature based strand shortening strategy in order to qualitatively improve strand grid mesh quality.
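Verification claims such as "better than third order accuracy" are conventionally checked by computing the observed order of accuracy from errors on two systematically refined grids. A generic sketch of that calculation (standard grid-refinement analysis, not the author's code):

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of accuracy p from errors on two grids.

    Assumes error ~ C * h**p, so for grids with spacing ratio r:
        p = ln(e_coarse / e_fine) / ln(r)
    """
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# A third-order scheme should cut the error ~8x when h is halved:
print(observed_order(8.0e-4, 1.0e-4))
```

Orders measured this way above the formal value of 3 are what the abstract refers to for highly viscous flows.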
Fine Particulate Matter Predictions Using High Resolution Aerosol Optical Depth (AOD) Retrievals
NASA Technical Reports Server (NTRS)
Chudnovsky, Alexandra A.; Koutrakis, Petros; Kloog, Itai; Melly, Steven; Nordio, Francesco; Lyapustin, Alexei; Wang, Jujie; Schwartz, Joel
2014-01-01
To date, spatial-temporal patterns of particulate matter (PM) within urban areas have primarily been examined using models. On the other hand, satellites extend spatial coverage but their spatial resolution is too coarse. In order to address this issue, here we report on spatial variability in PM levels derived from the high-resolution 1 km AOD product of the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm developed for the MODIS satellite. We apply day-specific calibrations of AOD data to predict PM2.5 concentrations within the New England area of the United States. To improve the accuracy of our model, land use and meteorological variables were incorporated. We used inverse probability weighting (IPW) to account for nonrandom missingness of AOD and nested regions within days to capture spatial variation. With this approach we can control for the inherent day-to-day variability in the AOD-PM2.5 relationship, which depends on time-varying parameters such as particle optical properties, vertical and diurnal concentration profiles and ground surface reflectance among others. Out-of-sample "ten-fold" cross-validation was used to quantify the accuracy of model predictions. Our results show that the model-predicted PM2.5 mass concentrations are highly correlated with the actual observations, with an out-of-sample R² of 0.89. Furthermore, our study shows that the model captures the pollution levels along highways and many urban locations thereby extending our ability to investigate the spatial patterns of urban air quality, such as examining exposures in areas with high traffic. Our results also show high accuracy within the cities of Boston and New Haven thereby indicating that MAIAC data can be used to examine intra-urban exposure contrasts in PM2.5 levels.
Uncertainty of OpenStreetMap data for the road network in Cyprus
NASA Astrophysics Data System (ADS)
Demetriou, Demetris
2016-08-01
Volunteered geographic information (VGI) refers to the geographic data compiled and created by individuals which are rendered on the Internet through specific web-based tools for diverse areas of interest. One of the most well-known VGI projects is the OpenStreetMap (OSM) that provides worldwide free geospatial data representing a variety of features. A critical issue for all VGI initiatives is the quality of the information offered. Thus, this report looks into the uncertainty of the OSM dataset for the main road network in Cyprus. The evaluation is based on three basic quality standards, namely positional accuracy, completeness and attribute accuracy. The work has been carried out by employing the Model Builder of ArcGIS which facilitated the comparison between the OSM data and the authoritative data provided by the Public Works Department (PWD). Findings showed that the positional accuracy increases with the hierarchical level of a road, it varies per administrative District and around 70% of the roads have a positional accuracy within 6m compared to the reference dataset. Completeness in terms of road length difference is around 25% for three out of four road categories examined and road name completeness is 100% and around 40% for higher and lower level roads, respectively. Attribute accuracy focusing on road name is very high for all levels of roads. These outputs indicate that OSM data are good enough if they fit for the purpose of use. Furthermore, the study revealed some weaknesses of the methods used for calculating the positional accuracy, suggesting the need for methodological improvements.
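The positional-accuracy figure above (roughly 70% of roads within 6 m of the reference dataset) is a tolerance-threshold summary over per-segment offsets. A minimal sketch of that summary (illustrative offsets, not the study's data):

```python
def pct_within(offsets_m, tolerance_m=6.0):
    """Percentage of matched road segments whose positional offset from
    the reference dataset falls within the given tolerance (metres)."""
    within = sum(1 for d in offsets_m if d <= tolerance_m)
    return 100.0 * within / len(offsets_m)

# Hypothetical per-segment offsets (metres) between OSM and PWD centrelines:
offsets = [1.2, 2.5, 5.8, 7.1, 10.4]
print(pct_within(offsets))  # 60.0
```

Computing this per road category or per administrative District is how the hierarchical and regional accuracy differences reported above would be tabulated.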
NASA Astrophysics Data System (ADS)
Tamimi, E.; Ebadi, H.; Kiani, A.
2017-09-01
Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using additional features can improve accuracy. However, adding features also increases the probability of including mutually dependent features, which reduces accuracy. In addition, some parameters should be determined in Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine classification parameters and select independent features according to image type. An optimization algorithm is an efficient method to solve this problem. On the other hand, pixel-based classification faces several challenges, such as salt-and-pepper noise in the results and high computational time for high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying the continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high level of automation, independence from image scene and type, reduced post-processing for building edge reconstruction, and improved accuracy. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. Also, the Kappa coefficient of the proposed method was 6% higher than that of RF classification. Processing time of the proposed method was relatively low because the unit of image analysis was the image object. These results show the superiority of the proposed method in terms of time and accuracy.
Design of micro bending deformer for optical fiber weight sensor
NASA Astrophysics Data System (ADS)
Ula, R. K.; Hanto, D.; Waluyo, T. B.; Adinanta, H.; Widiyatmoko, B.
2017-04-01
Road damage due to excessive load is one of the causes of accidents on the road. A device to measure the weight of passing vehicles needs to be embedded in the road structure; thus, a weight sensor for passing vehicles is required. In this study, we designed a weight sensor for a static load based on the power loss due to micro bending of an optical fiber mounted on a board. The main components are a 1310 nm LED as the light source, a multimode optical fiber as the transmission medium, and a power meter for measuring power loss. This work focuses on obtaining a suitable deformer design for the weight sensor. Experimental results show that the single-sided 1.5 mm deformer design achieves an accuracy of 4.32%, while the double-sided 1.5 mm design achieves 98.77%. Increasing the deformer length to 2.5 mm gives 71.18% accuracy for the single-sided design and 76.94% for the double-sided design. The double-sided 1.5 mm micro bending design has high sensitivity and is capable of measuring loads up to 100 kg. The sensor has been tested by measuring the weight of a motorcycle, and it can be upgraded for measuring heavy vehicles.
Wang, Xijuan; An, Peng; Zeng, Jiling; Liu, Xiaoyan; Wang, Bo; Fang, Xuexian; Wang, Fudi; Ren, Guoping; Min, Junxia
2017-03-14
Ferritin is highly expressed in many cancer types. Although a few studies have reported an association between high serum ferritin levels and an increased risk of prostate cancer, the results are inconsistent. Therefore, we performed a large case-control study consisting of 2002 prostate cancer patients and 951 control patients with benign prostatic hyperplasia (BPH). We found that high ferritin levels were positively associated with increased serum prostate-specific antigen (PSA) levels and prostate cancer risk; each 100 ng/ml increase in serum ferritin increased the odds ratio (OR) by 1.20 (95% CI: 1.13-1.36). In the prostate cancer group, increased serum ferritin levels were significantly correlated with higher Gleason scores (p < 0.001). Notably, serum PSA values had even higher predictive accuracy among prostate cancer patients with serum ferritin levels > 400 ng/ml (Gleason score + total PSA correlation: r = 0.38; Gleason score + free PSA correlation: r = 0.49). Moreover, using immunohistochemistry, we found that prostate tissue ferritin levels were significantly higher (p < 0.001) in prostate cancer patients (n = 129) compared to BPH controls (n = 31). Prostate tissue ferritin levels were also highly correlated with serum ferritin when patients were classified by cancer severity (r = 0.81). Importantly, we found no correlation between serum ferritin levels and the inflammation marker C-reactive protein (CRP) in prostate cancer patients. In conclusion, serum ferritin is significantly associated with prostate cancer and may serve as a non-invasive biomarker to complement the PSA test in the diagnosis and prognostic evaluation of prostate cancer.
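An odds ratio quoted "per 100 ng/ml increase" compounds multiplicatively under the logistic model typically used in such case-control analyses. A small sketch of that arithmetic (the 1.20 figure is from the abstract; the 300 ng/ml example is illustrative):

```python
def odds_ratio_for_increase(or_per_increment, n_increments):
    """Under a logistic model, the OR for k increments of exposure is the
    per-increment OR raised to the k-th power:
        OR(k) = exp(beta * k) = OR(1) ** k
    """
    return or_per_increment ** n_increments

# OR for a 300 ng/ml rise in serum ferritin, given OR 1.20 per 100 ng/ml:
print(round(odds_ratio_for_increase(1.20, 3), 3))  # 1.728
```

This is why apparently modest per-increment ORs can imply substantially elevated risk across the wide ferritin range (e.g. above 400 ng/ml) discussed in the abstract.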
NASA Technical Reports Server (NTRS)
Poulton, C. E.; Faulkner, D. P.; Johnson, J. R.; Mouat, D. A.; Schrumpf, B. J.
1971-01-01
A high altitude photomosaic resource map of Site 29 was produced which provided an opportunity to test photo interpretation accuracy of natural vegetation resource features when mapped at a small (1:133,400) scale. Helicopter reconnaissance over 144 previously selected test points revealed a highly adequate level of photo interpretation accuracy. In general, the reasons for errors could be accounted for. The same photomosaic resource map enabled construction of interpretive land use overlays. Based on features of the landscape, including natural vegetation types, judgements for land use suitability were made and have been presented for two types of potential land use. These two, agriculture and urbanization, represent potential land use conflicts.
Processing of high-precision ceramic balls with a spiral V-groove plate
NASA Astrophysics Data System (ADS)
Feng, Ming; Wu, Yongbo; Yuan, Julong; Ping, Zhao
2017-03-01
As the demand for high-performance bearings gradually increases, ceramic balls with excellent properties, such as high accuracy, high reliability, and high chemical durability, are extensively used for high-performance bearings. In this study, a spiral V-groove plate method is employed in processing high-precision ceramic balls. After the kinematic analysis of the ball-spin angle and enveloped lapping trajectories, an experimental rig is constructed and experiments are conducted to confirm the feasibility of this method. Kinematic analysis results indicate that the method not only allows for the control of the ball-spin angle but also uniformly distributes the enveloped lapping trajectories over the entire ball surface. Experimental results demonstrate that the novel spiral V-groove plate method performs better than the conventional concentric V-groove plate method in terms of roundness, surface roughness, diameter difference, and diameter decrease rate. Ceramic balls with a G3-level accuracy are achieved, and their typical roundness, minimum surface roughness, and diameter difference are 0.05, 0.0045, and 0.105 μm, respectively. These findings confirm that the proposed method can be applied to high-accuracy and high-consistency ceramic ball processing.
NASA Technical Reports Server (NTRS)
Hodges, Richard E.; Sands, O. Scott; Huang, John; Bassily, Samir
2006-01-01
Improved surface accuracy for deployable reflectors has brought with it the possibility of Ka-band reflector antennas with extents on the order of 1000 wavelengths. Such antennas are being considered for high-rate data delivery from planetary distances. To maintain losses at reasonable levels requires a sufficiently capable Attitude Determination and Control System (ADCS) onboard the spacecraft. This paper provides an assessment of currently available ADCS strategies and performance levels. In addition to other issues, specific factors considered include: (1) use of "beaconless" or open loop tracking versus use of a beacon on the Earth side of the link, and (2) selection of fine pointing strategy (body-fixed/spacecraft pointing, reflector pointing or various forms of electronic beam steering). Capabilities of recent spacecraft are discussed.
Template-guided vs. non-guided drilling in site preparation of dental implants.
Scherer, Uta; Stoetzer, Marcus; Ruecker, Martin; Gellrich, Nils-Claudius; von See, Constantin
2015-07-01
Clinical success of oral implants is related to primary stability and osseointegration. These parameters are associated with delicate surgical techniques. We herein studied whether template-guided drilling has a significant influence on drillhole diameter and accuracy in an in vitro model. Fresh cadaveric porcine mandibles were used for drilling experiments in four experimental groups. Each group consisted of three operators, comparing guide templates for drilling with the free-handed procedure. Operators without surgical knowledge were grouped together, contrasting with highly experienced oral surgeons in other groups. A total of 180 drilling actions were performed, and diameters were recorded at multiple depth levels with a precision measuring instrument. The template-guided drilling procedure improved accuracy very significantly in comparison with free-handed drilling (p ≤ 0.001). Inaccuracy of free-handed drilling became more pronounced with increasing measurement depth. The uniformity of template-guided drillholes was significantly greater than that of unguided drilling performed by highly experienced oral surgeons (p ≤ 0.001). The template-guided drilling procedure leads to significantly enhanced accuracy; significant results compared to free-handed drilling were achieved irrespective of the clinical experience level of the operator. Template-guided drilling procedures lead to a more predictable clinical diameter, which shows that any set of instruments has to be carefully chosen to match the specific implant system. The current in vitro study indicates an improvement in implant bed preparation but needs to be confirmed in clinical studies.
Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.
Li, Linyi; Xu, Tingbao; Chen, Yun
2017-01-01
In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. And a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the others according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.
PMID:28761440
Karnowski, Thomas P [Knoxville, TN; Tobin, Jr., Kenneth W.; Muthusamy Govindasamy, Vijaya Priya [Knoxville, TN; Chaum, Edward [Memphis, TN
2012-07-10
A method for assigning a confidence metric for automated determination of optic disc location that includes analyzing a retinal image and determining at least two sets of coordinates locating an optic disc in the retinal image. The sets of coordinates can be determined using first and second image analysis techniques that are different from one another. An accuracy parameter can be calculated and compared to a primary risk cut-off value. A high confidence level can be assigned to the retinal image if the accuracy parameter is less than the primary risk cut-off value and a low confidence level can be assigned to the retinal image if the accuracy parameter is greater than the primary risk cut-off value. The primary risk cut-off value being selected to represent an acceptable risk of misdiagnosis of a disease having retinal manifestations by the automated technique.
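The confidence-assignment logic described above compares optic-disc coordinates obtained from two independent image analysis techniques against a risk cut-off. A minimal sketch, assuming the accuracy parameter is the Euclidean distance between the two detections (the patent-style abstract does not specify the exact metric):

```python
import math

def confidence_level(coords_a, coords_b, risk_cutoff):
    """Assign 'high' confidence when two independently detected optic-disc
    locations agree to within the cut-off; otherwise 'low', flagging the
    image for manual review."""
    accuracy_param = math.dist(coords_a, coords_b)  # assumed Euclidean
    return "high" if accuracy_param < risk_cutoff else "low"

# Two detectors agree to within 5 px, under a hypothetical 10 px cut-off:
print(confidence_level((120, 85), (123, 89), risk_cutoff=10))  # high
```

The cut-off would be chosen, per the abstract, so that a "high" assignment corresponds to an acceptable risk of automated misdiagnosis.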
Wellenberg, R H H; Boomsma, M F; van Osch, J A C; Vlassenbroek, A; Milles, J; Edens, M A; Streekstra, G J; Slump, C H; Maas, M
2017-05-01
To compare quantitative measures of image quality, in terms of CT number accuracy, noise, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs), at different dose levels with filtered-back-projection (FBP), iterative reconstruction (IR), and model-based iterative reconstruction (MBIR) alone and in combination with orthopedic metal artifact reduction (O-MAR) in a total hip arthroplasty (THA) phantom. Scans were acquired from high to low dose (CTDIvol: 40.0, 32.0, 24.0, 16.0, 8.0, and 4.0 mGy) at 120 and 140 kVp. Images were reconstructed using FBP, IR (iDose4 level 2, 4, and 6) and MBIR (IMR, level 1, 2, and 3) with and without O-MAR. CT number accuracy in Hounsfield Units (HU), noise (standard deviation), SNRs, and CNRs were analyzed. The IMR technique showed lower noise levels (p < 0.01), higher SNRs (p < 0.001) and CNRs (p < 0.001) compared with FBP and iDose4 in all acquisitions from high to low dose, with constant CT numbers. O-MAR reduced noise (p < 0.01) and improved SNRs (p < 0.01) and CNRs (p < 0.001) while improving CT number accuracy only at low dose. At the low dose of 4.0 mGy, IMR level 1, 2, and 3 showed 83%, 89%, and 95% lower noise values, a factor of 6.0, 9.2, and 17.9 higher SNRs, and 5.7, 8.8, and 18.2 times higher CNRs compared with FBP, respectively. Based on quantitative analysis of CT number accuracy, noise values, SNRs, and CNRs, we conclude that the combined use of IMR and O-MAR enables a reduction in radiation dose of 83% compared with FBP and iDose4 in the CT imaging of a THA phantom.
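The SNR and CNR figures compared above have standard phantom-study definitions: mean HU over noise for SNR, and HU difference between a region of interest and background over background noise for CNR. A minimal sketch of those definitions with illustrative HU samples (not the study's measurements):

```python
import statistics

def snr(roi_hu):
    """Signal-to-noise ratio: mean HU of the ROI over its standard deviation."""
    return statistics.fmean(roi_hu) / statistics.stdev(roi_hu)

def cnr(roi_hu, background_hu):
    """Contrast-to-noise ratio: HU contrast between ROI and background,
    normalized by the background noise."""
    contrast = statistics.fmean(roi_hu) - statistics.fmean(background_hu)
    return contrast / statistics.stdev(background_hu)

# Hypothetical HU samples from a contrast insert and a water background:
roi = [100, 102, 98, 100]
background = [0, 2, -2, 0]
print(round(snr(roi), 1), round(cnr(roi, background), 1))
```

Under these definitions, IMR's lower noise (smaller denominator) directly produces the higher SNRs and CNRs reported, even when CT numbers (the means) stay constant.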
Interactive lesion segmentation with shape priors from offline and online learning.
Shepherd, Tony; Prince, Simon J D; Alexander, Daniel C
2012-09-01
In medical image segmentation, tumors and other lesions demand the highest levels of accuracy but still call for the highest levels of manual delineation. One factor holding back automatic segmentation is the exemption of pathological regions from shape modelling techniques that rely on high-level shape information not offered by lesions. This paper introduces two new statistical shape models (SSMs) that combine radial shape parameterization with machine learning techniques from the field of nonlinear time series analysis. We then develop two dynamic contour models (DCMs) using the new SSMs as shape priors for tumor and lesion segmentation. From training data, the SSMs learn the lower level shape information of boundary fluctuations, which we prove to be nevertheless highly discriminant. One of the new DCMs also uses online learning to refine the shape prior for the lesion of interest based on user interactions. Classification experiments reveal superior sensitivity and specificity of the new shape priors over those previously used to constrain DCMs. User trials with the new interactive algorithms show that the shape priors are directly responsible for improvements in accuracy and reductions in user demand.
Morotti, A; Romero, J M; Jessel, M J; Brouwers, H B; Gupta, R; Schwab, K; Vashkevich, A; Ayres, A; Anderson, C D; Gurol, M E; Viswanathan, A; Greenberg, S M; Rosand, J; Goldstein, J N
2016-05-19
Reduction of CT tube current is an effective strategy to minimize radiation load. However, tube current is also a major determinant of image quality. We investigated the impact of CTA tube current on spot sign detection and diagnostic performance for intracerebral hemorrhage expansion. We retrospectively analyzed a prospectively collected cohort of consecutive patients with primary intracerebral hemorrhage from January 2001 to April 2015 who underwent CTA. The study population was divided into 2 groups according to the median CTA tube current level: low current (<350 mA) and high current (≥350 mA). CTA first-pass readings for spot sign presence were independently analyzed by 2 readers. Baseline and follow-up hematoma volumes were assessed by semiautomated computer-assisted volumetric analysis. Sensitivity, specificity, positive and negative predictive values, and accuracy of spot sign in predicting hematoma expansion were calculated. This study included 709 patients (288 and 421 in the low- and high-current groups, respectively). A higher proportion of low-current scans identified at least 1 spot sign (20.8% versus 14.7%, P = .034), but hematoma expansion frequency was similar in the 2 groups (18.4% versus 16.2%, P = .434). Sensitivity and positive and negative predictive values were not significantly different between the 2 groups. Conversely, high-current scans showed superior specificity (91% versus 84%, P = .015) and overall accuracy (84% versus 77%, P = .038). CTA obtained at high levels of tube current showed better diagnostic accuracy for prediction of hematoma expansion by using spot sign. These findings may have implications for future studies using the CTA spot sign to predict hematoma expansion for clinical trials. © 2016 American Society of Neuroradiology.
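The sensitivity, specificity, predictive values, and overall accuracy reported above all derive from a 2x2 confusion table of spot sign presence against observed hematoma expansion. A minimal sketch using the standard definitions (the counts in the test are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic-test metrics from true/false positive
    and negative counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```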
A very low noise, high accuracy, programmable voltage source for low frequency noise measurements.
Scandurra, Graziella; Giusi, Gino; Ciofi, Carmine
2014-04-01
In this paper an approach for designing a programmable, very low noise, high accuracy voltage source for biasing devices under test in low frequency noise measurements is proposed. The core of the system is a supercapacitor-based two-pole low-pass filter used for filtering out the noise produced by a standard DA converter down to 100 mHz with an attenuation in excess of 40 dB. The high leakage current of the supercapacitors, however, introduces large DC errors that must be compensated in order to obtain high accuracy as well as very low output noise. To this end, a circuit topology has been developed that considerably reduces the effect of the supercapacitor leakage current on the DC response of the system while maintaining a very low level of output noise. With a proper design, an output noise as low as the equivalent input voltage noise of the OP27 operational amplifier, used as the output buffer of the system, can be obtained with DC accuracies better than 0.05% up to the maximum output of 8 V. The expected performance of the proposed voltage source has been confirmed both by SPICE simulations and by measurements on actual prototypes. Turn-on and stabilization times for the system are of the order of a few hundred seconds. These times are fully compatible with noise measurements down to 100 mHz, since measurement times of the order of several tens of minutes are required in any case to reduce the statistical error in the measured spectra to an acceptable level.
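The quoted attenuation in excess of 40 dB at 100 mHz is consistent with two real poles roughly a decade below that frequency. A sketch of the magnitude response, assuming two identical cascaded first-order sections (which may differ from the authors' exact topology):

```python
import math

def attenuation_db(f, fc):
    """Attenuation in dB of two cascaded identical first-order low-pass
    sections with corner frequency fc: |H(f)| = 1 / (1 + (f/fc)**2)."""
    return 20 * math.log10(1 + (f / fc) ** 2)
```

With fc = 10 mHz, the attenuation at f = 100 mHz comes out just above 40 dB, matching the figure quoted in the abstract.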
Video modeling to train staff to implement discrete-trial instruction.
Catania, Cynthia N; Almeida, Daniel; Liu-Constant, Brian; DiGennaro Reed, Florence D
2009-01-01
Three new direct-service staff participated in a program that used a video model to train target skills needed to conduct a discrete-trial session. Percentage accuracy in completing a discrete-trial teaching session was evaluated using a multiple baseline design across participants. During baseline, performances ranged from a mean of 12% to 63% accuracy. During video modeling, there was an immediate increase in accuracy to a mean of 98%, 85%, and 94% for each participant. Performance during maintenance and generalization probes remained at high levels. Results suggest that video modeling can be an effective technique to train staff to conduct discrete-trial sessions.
Statistical Capability Study of a Helical Grinding Machine Producing Screw Rotors
NASA Astrophysics Data System (ADS)
Holmes, C. S.; Headley, M.; Hart, P. W.
2017-08-01
Screw compressors depend for their efficiency and reliability on the accuracy of the rotors, and therefore on the machinery used in their production. The machinery has evolved over more than half a century in response to customer demands for production accuracy, efficiency, and flexibility, and is now at a high level on all three criteria. Production equipment and processes must be capable of maintaining accuracy over a production run, and this must be assessed statistically under strictly controlled conditions. This paper gives numerical data from such a study of an innovative machine tool and shows that it is possible to meet the demanding statistical capability requirements.
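Statistical capability of a production run of the kind described above is conventionally summarized with the Cp and Cpk indices, which relate the process spread and centering to the specification limits. A minimal sketch using the standard definitions (spec limits and samples are illustrative, not from the paper):

```python
import statistics

def capability(samples, lsl, usl):
    """Process capability indices from measured samples and the lower/upper
    specification limits. Cp measures spread; Cpk also penalizes off-center."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk
```

A perfectly centered process gives Cp == Cpk; any drift of the mean toward a limit lowers Cpk, which is why capability must be assessed over a full run rather than from a single part.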
Roles of an Upper-Body Compression Garment on Athletic Performances.
Hooper, David R; Dulkis, Lexie L; Secola, Paul J; Holtzum, Gabriel; Harper, Sean P; Kalkowski, Ryan J; Comstock, Brett A; Szivak, Tunde K; Flanagan, Shawn D; Looney, David P; DuPont, William H; Maresh, Carl M; Volek, Jeff S; Culley, Kevin P; Kraemer, William J
2015-09-01
Compression garments (CGs) have been previously shown to enhance proprioception; however, this benefit had not previously been shown to transfer to improved performance in sports skills. The purpose of this study was to assess whether enhanced proprioception and comfort can be manifested in improved sports performance of high-level athletes. Eleven Division I collegiate pitchers (age: 21.0 ± 2.9 years; height: 181.0 ± 4.6 cm; weight: 89.0 ± 13.0 kg; body fat: 12.0 ± 4.1%) and 10 Division I collegiate golfers (age: 20.0 ± 1.3 years; height: 178.1 ± 3.9 cm; weight: 76.4 ± 8.3 kg; body fat: 11.8 ± 2.6%) participated in the study. A counterbalanced within-group design was used. Subjects performed the respective baseball or golf protocol wearing either typical noncompressive (NC) or the experimental CG. Golfers participated in an assessment of driving distance and accuracy, as well as approach shot, chipping, and putting accuracy. Pitchers were assessed for fastball accuracy and velocity. In pitchers, there was a significant (p ≤ 0.05) improvement in fastball accuracy (NC: 0.30 ± 0.04 vs. CG: 0.21 ± 0.07 cm). There were no differences in pitching velocity. In golfers, there were significant (p ≤ 0.05) improvements in driving accuracy (NC: 86.7 ± 30.6 vs. CG: 68.9 ± 18.5 feet), as well as approach shot accuracy (NC: 26.6 ± 11.9 vs. CG: 22.1 ± 8.2 feet) and chipping accuracy (NC: 2.9 ± 0.6 vs. CG: 2.3 ± 0.6 inch). There was also a significant (p ≤ 0.05) increase in comfort for the golfers (NC: 3.7 ± 0.8 vs. CG: 4.5 ± 1.0). These results demonstrate that comfort and performance can be improved with the use of CGs in high-level athletes, most likely mediated by improved proprioceptive cues during upper-body movements.
Four years of Landsat-7 on-orbit geometric calibration and performance
Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.
2004-01-01
Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation procedures) and calibration (procedures to derive improved estimates of instrument parameters) methods are employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance. These include geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level - by monitoring performance to determine when calibration is necessary, generating new calibration parameters, and verifying that new parameters achieve desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.
Murphy, S F; Lenihan, L; Orefuwa, F; Colohan, G; Hynes, I; Collins, C G
2017-05-01
The discharge letter is a key component of the communication pathway between the hospital and primary care. Accuracy and timeliness of delivery are crucial to ensure continuity of patient care. Electronic discharge summaries (EDS) and prescriptions have been shown to improve quality of discharge information for general practitioners (GPs). The aim of this study was to evaluate the effect of a new EDS on GP satisfaction levels and accuracy of discharge diagnosis. A GP survey was carried out whereby semi-structured interviews were conducted with 13 GPs from three primary care centres who receive a high volume of discharge letters from the hospital. A chart review was carried out on 90 charts to compare accuracy of ICD-10 coding of Non-Consultant Hospital Doctors (NCHDs) with that of trained Hospital In-Patient Enquiry (HIPE) coders. GP satisfaction levels were over 90 % with most aspects of the EDS, including amount of information (97 %), accuracy (95 %), GP information and follow-up (97 %) and medications (91 %). 70 % of GPs received the EDS within 2 weeks. ICD-10 coding of discharge diagnosis by NCHDs had an accuracy of 33 %, compared with 95.6 % when done by trained coders (p < 0.00001). The introduction of the EDS and prescription has led to improved quality and timeliness of communication with primary care. It has led to a very high satisfaction rating among GPs. ICD-10 coding was found to be grossly inaccurate when carried out by NCHDs and it is more appropriate for this task to be carried out by trained coders.
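The reported coding-accuracy gap (33 % vs. 95.6 %, p < 0.00001) can be checked with a two-proportion z-test. A hedged sketch; the counts in the usage are illustrative reconstructions from the percentages, not the study's raw data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two independent proportions,
    using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Roughly 30/90 correct for NCHDs vs. 86/90 for trained coders (assumed counts):
# |z| well above 6, i.e. p far below 0.00001, consistent with the abstract.
z = two_proportion_z(30, 90, 86, 90)
```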
Marheineke, Nadine; Scherer, Uta; Rücker, Martin; von See, Constantin; Rahlf, Björn; Gellrich, Nils-Claudius; Stoetzer, Marcus
2018-06-01
Dental implant failure and insufficient osseointegration are proven results of mechanical and thermal damage during the surgery process. We herein performed a comparative study of a less invasive single-step drilling preparation protocol and a conventional multiple drilling sequence. Accuracy of drilling holes was precisely analyzed and the influence of different levels of expertise of the handlers and additional use of drill template guidance was evaluated. Six experimental groups, deployed in an osseous study model, were representing template-guided and freehanded drilling actions in a stepwise drilling procedure in comparison to a single-drill protocol. Each experimental condition was studied by the drilling actions of respectively three persons without surgical knowledge as well as three highly experienced oral surgeons. Drilling actions were performed and diameters were recorded with a precision measuring instrument. Less experienced operators were able to significantly increase the drilling accuracy using a guiding template, especially when multi-step preparations are performed. Improved accuracy without template guidance was observed when experienced operators were executing single-step versus multi-step technique. Single-step drilling protocols have shown to produce more accurate results than multi-step procedures. The outcome of any protocol can be further improved by use of guiding templates. Operator experience can be a contributing factor. Single-step preparations are less invasive and are promoting osseointegration. Even highly experienced surgeons are achieving higher levels of accuracy by combining this technique with template guidance. Hereby template guidance enables a reduction of hands-on time and side effects during surgery and lead to a more predictable clinical diameter.
Teipel, Stefan J; Keller, Felix; Thyrian, Jochen R; Strohmaier, Urs; Altiner, Attila; Hoffmann, Wolfgang; Kilimann, Ingo
2017-01-01
Once a patient or a knowledgeable informant has noticed decline in memory or other cognitive functions, initiation of early dementia assessment is recommended. Hippocampus and cholinergic basal forebrain (BF) volumetry supports the detection of prodromal and early stages of Alzheimer's disease (AD) dementia in highly selected patient populations. To compare effect size and diagnostic accuracy of hippocampus and BF volumetry between patients recruited in highly specialized versus primary care and to assess the effect of white matter lesions as a proxy for cerebrovascular comorbidity on diagnostic accuracy. We determined hippocampus and BF volumes and white matter lesion load from MRI scans of 71 participants included in a primary care intervention trial (clinicaltrials.gov identifier: NCT01401582) and matched 71 participants stemming from a memory clinic. Samples included healthy controls and people with mild cognitive impairment (MCI), AD dementia, mixed dementia, and non-AD related dementias. Volumetric measures reached similar effect sizes and cross-validated levels of accuracy in the primary care and the memory clinic samples for the discrimination of AD and mixed dementia cases from healthy controls. In the primary care MCI cases, volumetric measures reached only random guessing levels of accuracy. White matter lesions had only a modest effect on effect size and diagnostic accuracy. Hippocampus and BF volumetry may usefully be employed for the identification of AD and mixed dementia, but the detection of MCI does not benefit from the use of these volumetric markers in a primary care setting.
Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan
2013-01-01
Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and players' achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine if this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared with the non-expert players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender interactions were found. Fatigue effects were also equivalent regardless of players' achievement goal indicators.
Future research is required to explore the effects of fatigue on performance in tennis using ecologically valid designs that mimic more closely the demands of match play. Key Points: Groundstroke accuracy under moderate-intensity fatigue is equivalent to performance at rest. Groundstroke accuracy declines significantly in both expert (40.3% decline) and non-expert (49.6% decline) tennis players following high-intensity fatigue. Expert players are more consistent, hit more accurate shots, and hit fewer out shots across all fatigue intensities. The effects of fatigue on groundstroke accuracy are the same regardless of gender and players' achievement goal indicators. PMID:24149809
Desiccant Cooling Poised for Entry into Mainstream Markets
Highly Accurate and Precise Infrared Transition Frequencies of the H_3^+ Cation
NASA Astrophysics Data System (ADS)
Perry, Adam J.; Markus, Charles R.; Hodges, James N.; Kocheril, G. Stephen; McCall, Benjamin J.
2016-06-01
Calculation of ab initio potential energy surfaces for molecules to high accuracy is only manageable for a handful of molecular systems. Among them is the simplest polyatomic molecule, the H_3^+ cation. In order to achieve a high degree of accuracy (<1 wn) corrections must be made to the traditional Born-Oppenheimer approximation that take into account not only adiabatic and non-adiabatic couplings, but quantum electrodynamic corrections as well. For the lowest rovibrational levels the agreement between theory and experiment is approaching 0.001 wn, whereas the agreement is on the order of 0.01 - 0.1 wn for higher levels which are closely rivaling the uncertainties on the experimental data. As method development for calculating these various corrections progresses it becomes necessary for the uncertainties on the experimental data to be improved in order to properly benchmark the calculations. Previously we have measured 20 rovibrational transitions of H_3^+ with MHz-level precision, all of which have arisen from low lying rotational levels. Here we present new measurements of rovibrational transitions arising from higher rotational and vibrational levels. These transitions not only allow for probing higher energies on the potential energy surface, but through the use of combination differences, will ultimately lead to prediction of the "forbidden" rotational transitions with MHz-level accuracy. L.G. Diniz, J.R. Mohallem, A. Alijah, M. Pavanello, L. Adamowicz, O.L. Polyansky, J. Tennyson Phys. Rev. A (2013), 88, 032506 O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R.I. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky, A.G. Császár Phil. Trans. R. Soc. A (2012), 370, 5014 J.N. Hodges, A.J. Perry, P.A. Jenkins II, B.M. Siller, B.J. McCall J. Chem. Phys. (2013), 139, 164201 A.J. Perry, J.N. Hodges, C.R. Markus, G.S. Kocheril, B.J. McCall J. Molec. Spectrosc. (2015), 317, 71-73.
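The combination-difference idea mentioned above is simple arithmetic: two transitions that share an upper level differ by exactly the spacing of their two lower levels, and the measurement uncertainties combine in quadrature. A sketch with illustrative (not measured) numbers:

```python
import math

def combination_difference(f_a, u_a, f_b, u_b):
    """Lower-level spacing implied by two transitions sharing an upper level,
    returned with the combined 1-sigma uncertainty (quadrature sum).
    Frequencies and uncertainties must share units (e.g. cm^-1 or MHz)."""
    return abs(f_a - f_b), math.hypot(u_a, u_b)
```

If both transitions carry MHz-level uncertainties, the inferred "forbidden" rotational spacing inherits MHz-level accuracy, which is the point made in the abstract.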
Chiang, Michael F.; Melia, Michele; Buffenn, Angela N.; Lambert, Scott R.; Recchia, Franco M.; Simpson, Jennifer L.; Yang, Michael B.
2013-01-01
Objective To evaluate the accuracy of detecting clinically significant retinopathy of prematurity (ROP) using wide-angle digital retinal photography. Methods Literature searches of PubMed and the Cochrane Library databases were conducted last on December 7, 2010, and yielded 414 unique citations. The authors assessed these 414 citations and marked 82 that potentially met the inclusion criteria. These 82 studies were reviewed in full text; 28 studies met inclusion criteria. The authors extracted from these studies information about study design, interventions, outcomes, and study quality. After data abstraction, 18 were excluded for study deficiencies or because they were superseded by a more recent publication. The methodologist reviewed the remaining 10 studies and assigned ratings of evidence quality; 7 studies were rated level I evidence and 3 studies were rated level III evidence. Results There is level I evidence from ≥5 studies demonstrating that digital retinal photography has high accuracy for detection of clinically significant ROP. Level III studies have reported high accuracy, without any detectable complications, from real-world operational programs intended to detect clinically significant ROP through remote site interpretation of wide-angle retinal photographs. Conclusions Wide-angle digital retinal photography has the potential to complement standard ROP care. It may provide advantages through objective documentation of clinical examination findings, improved recognition of disease progression by comparing previous photographs, and the creation of image libraries for education and research. Financial Disclosure(s) Proprietary or commercial disclosure may be found after the references. PMID:22541632
The effects of stress on singing voice accuracy.
Larrouy-Maestri, Pauline; Morsomme, Dominique
2014-01-01
The quality of a music performance can be lessened or enhanced if the performer experiences stressful conditions. In addition, the quality of a sung performance requires control of the fundamental frequency of the voice, which is particularly sensitive to stress. The present study aimed to clarify the effects of stress on singing voice accuracy. Thirty-one music students were recorded in a stressful condition (ie, a music examination) and a nonstressful condition. Two groups were defined according to the challenge level of the music examination (first and second music levels). Measurements were made by self-reported state anxiety (CSAI-2R questionnaire) and by observing heart rate activity (electrocardiogram) during each performance. In addition, the vocal accuracy of the sung performances was objectively analyzed. As expected, state anxiety and heart rate were significantly higher on the day of the music examination than in the nonstressful condition for all the music students. However, the effect of stress was positive for the first-year students but negative for the second-year students, for whom the music examination was particularly challenging. In addition, highly significant correlations were found between the intensity of cognitive symptoms and the vocal accuracy criteria. This study highlights the contrasting effects of stress on singing voice accuracy but also the need to consider the challenge level and perception of the symptoms in experimental and pedagogical settings. Copyright © 2014 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Potential accuracy of translation estimation between radar and optical images
NASA Astrophysics Data System (ADS)
Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.
2015-10-01
This paper investigates the potential accuracy achievable for optical to radar image registration by area-based approach. The analysis is carried out mainly based on the Cramér-Rao Lower Bound (CRLB) on translation estimation accuracy previously proposed by the authors and called CRLBfBm. This bound is now modified to take into account radar image speckle noise properties: spatial correlation and signal-dependency. The newly derived theoretical bound is fed with noise and texture parameters estimated for the co-registered pair of optical Landsat 8 and radar SIR-C images. It is found that difficulty of optical to radar image registration stems more from speckle noise influence than from dissimilarity of the considered kinds of images. At finer scales (and higher speckle noise level), probability of finding control fragments (CF) suitable for registration is low (1% or less) but overall number of such fragments is high thanks to image size. Conversely, at the coarse scale, where speckle noise level is reduced, probability of finding CFs suitable for registration can be as high as 40%, but overall number of such CFs is lower. Thus, the study confirms and supports area-based multiresolution approach for optical to radar registration where coarse scales are used for fast registration "lock" and finer scales for reaching higher registration accuracy. The CRLBfBm is found inaccurate for the main scale due to intensive speckle noise influence. For other scales, the validity of the CRLBfBm bound is confirmed by calculating statistical efficiency of area-based registration method based on normalized correlation coefficient (NCC) measure that takes high values of about 25%.
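The NCC measure used in the efficiency comparison above is the standard normalized correlation coefficient between a control fragment and a candidate window. A minimal sketch over flattened fragments:

```python
import math

def ncc(a, b):
    """Normalized correlation coefficient between two equal-size image
    fragments (flattened to sequences), as used for area-based
    registration scoring; ranges from -1 to 1."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den
```

Because NCC normalizes out mean and contrast differences, it tolerates the radiometric dissimilarity between optical and radar fragments better than raw correlation, though speckle noise still degrades the score.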
Factors Influencing Science Content Accuracy in Elementary Inquiry Science Lessons
NASA Astrophysics Data System (ADS)
Nowicki, Barbara L.; Sullivan-Watts, Barbara; Shim, Minsuk K.; Young, Betty; Pockalny, Robert
2013-06-01
Elementary teachers face increasing demands to engage children in authentic science process and argument while simultaneously preparing them with knowledge of science facts, vocabulary, and concepts. This reform is particularly challenging due to concerns that elementary teachers lack adequate science background to teach science accurately. This study examined 81 in-classroom inquiry science lessons for preservice education majors and their cooperating teachers to determine the accuracy of the science content delivered in elementary classrooms. Our results showed that 74 % of experienced teachers and 50 % of student teachers presented science lessons with greater than 90 % accuracy. Eleven of the 81 lessons (9 preservice, 2 cooperating teachers) failed to deliver accurate science content to the class. Science content accuracy was highly correlated with the use of kit-based resources supported with professional development, a preference for teaching science, and grade level. There was no correlation between the accuracy of science content and some common measures of teacher content knowledge (i.e., number of college science courses, science grades, or scores on a general science content test). Our study concluded that when provided with high quality curricular materials and targeted professional development, elementary teachers learn needed science content and present it accurately to their students.
67Ga citrate scanning and serum angiotensin converting enzyme levels in sarcoidosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, R.G.; Bekerman, C.; Sicilian, L.
1982-09-01
67Ga citrate scans and serum angiotensin converting enzyme (ACE) levels were obtained in 54 patients with sarcoidosis and analyzed in relation to clinical manifestations. 67Ga scans were abnormal in 97% of patients with clinically active disease (n = 30) and in 71% of patients with inactive disease (n = 24). Serum ACE levels were abnormally high (2 standard deviations above the control mean) in 73% of patients with clinically active disease and in 54% of patients with inactive disease. Serum ACE levels correlated significantly with 67Ga uptake score (r = .436; p < .005). The frequency of abnormal 67Ga scans and elevated serum ACE levels suggests that inflammatory activity with little or no clinical expression is common in sarcoidosis. Abnormal 67Ga scans were highly sensitive (97%) but had poor specificity (29%) for clinical disease activity. The accuracy of negative prediction of clinical activity by normal scans (87%) was better than the accuracy of positive prediction of clinical activity by abnormal scans (63%). 67Ga scans can be used to support the clinical identification of inactive sarcoidosis.
Parallelism measurement for base plate of standard artifact with multiple tactile approaches
NASA Astrophysics Data System (ADS)
Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie
2018-01-01
As workpieces become more precise and more specialized, resulting in more sophisticated structures and tighter tolerances for the artifacts, higher requirements are placed on measurement accuracy and measurement methods. As an important means of obtaining workpiece dimensions, the coordinate measuring machine (CMM) is widely used in many industries. In the course of calibrating a self-developed CMM and studying a self-made high-precision standard artifact, the parallelism of the base plate used for fixing the standard artifact was found to be an important factor affecting measurement accuracy. To measure the parallelism of the base plate, three tactile methods were employed, using an existing high-precision CMM, gauge blocks, a dial gauge, and a marble platform, and their results were compared. The experiments show that the final accuracy of all three methods reaches the micron level and meets the measurement requirements. These three approaches suit different measurement conditions, providing a basis for rapid, high-precision measurement under different equipment conditions.
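Of the tactile approaches above, the dial-gauge method reduces to taking height readings across the plate on a reference flat and reporting their spread. A minimal sketch, assuming parallelism is evaluated as the maximum minus the minimum reading (a common convention, not necessarily the paper's exact evaluation):

```python
def parallelism(readings):
    """Parallelism error of a base plate as the spread (max - min) of
    dial-gauge height readings taken across its surface, in the same
    units as the readings (e.g. mm)."""
    return max(readings) - min(readings)
```

Micron-level results then simply require that the readings themselves resolve microns, which is why the comparison against a calibrated CMM matters.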
Accuracy of nursing diagnoses for identifying domestic violence against children.
Apostólico, Maíra Rosa; Egry, Emiko Yoshikawa; Fornari, Lucimara Fabiana; Gessner, Rafaela
2017-01-01
Objective Identify nursing diagnoses involving a hypothetical situation of domestic violence against a child and the respective degrees of accuracy. Method An exploratory, evaluative, case study was conducted using a quantitative and qualitative approach, with data collected using an online instrument from 26 nurses working in the Municipal Health Network, between June and August 2010, in Curitiba, and also during the first half of 2014 in São Paulo. Both of these cities are in Brazil. Nursing diagnoses and interventions from the International Classification of Nursing Practices in Collective Health were provided, and accuracy was verified using the Nursing Diagnosis Accuracy Scale. Results Thirty-nine nursing diagnoses were identified, 27 of which were common to both cities. Of these, 15 were scored at the null level of accuracy, 11 at high accuracy and 1 at medium accuracy. Conclusion The difficulty the nurses had in defining diagnoses may be associated with the fact that nursing care generally focuses on clinical problems, and signs expressing situations of domestic violence against children go unnoticed. The results demonstrated the difficulty of participants in selecting the appropriate nursing diagnosis for the case in question.
Operational shoreline mapping with high spatial resolution radar and geographic processing
Rangoonwala, Amina; Jones, Cathleen E; Chi, Zhaohui; Ramsey, Elijah W.
2017-01-01
A comprehensive mapping technology was developed that uses standard image processing and available GIS procedures to automate shoreline identification and mapping from 2 m synthetic aperture radar (SAR) HH amplitude data. The development used four NASA Uninhabited Aerial Vehicle SAR (UAVSAR) data collections between summer 2009 and 2012 and a fall 2012 collection of wetlands dominantly fronted by vegetated shorelines along the Mississippi River Delta, a region beset by severe storms, toxic releases, and relative sea-level rise. In comparison to shorelines interpreted from 0.3 m and 1 m orthophotography, automated GIS sampling at 10 m alongshore intervals found the SAR shoreline mapping accuracy to be ±2 m, well within the lower range of reported shoreline mapping accuracies. This high comparability was obtained even though water levels differed between the SAR and photography image pairs, and it held for all shorelines regardless of complexity. The SAR mapping technology is highly repeatable and extendable to other SAR instruments with similar operational functionality.
ERIC Educational Resources Information Center
Wang, Wenyi; Song, Lihong; Chen, Ping; Meng, Yaru; Ding, Shuliang
2015-01-01
Classification consistency and accuracy are viewed as important indicators for evaluating the reliability and validity of classification results in cognitive diagnostic assessment (CDA). Pattern-level classification consistency and accuracy indices were introduced by Cui, Gierl, and Chang. However, the indices at the attribute level have not yet…
NASA Astrophysics Data System (ADS)
Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.
2016-06-01
For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm-level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective, the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm-level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping.
Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented which show that the post-processed Centerpoint RTX solution agrees with the DGNSS solution to better than 2.9 cm RMSE Horizontal and 5.5 cm RMSE Vertical. Such accuracies are sufficient to meet the requirements for a majority of airborne mapping applications.
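The evaluation above compares a post-processed trajectory against a DGNSS network reference in terms of horizontal and vertical RMSE. As a minimal sketch of that kind of comparison, the function below computes both statistics from matched per-epoch (east, north, up) coordinates; the function names and sample values are illustrative, not part of the POSPac or Centerpoint RTX software.

```python
import math

def rmse(errors):
    """Root-mean-square of a sequence of errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def trajectory_rmse(solution, reference):
    """Return (horizontal_rmse, vertical_rmse) for matched (e, n, u) epochs.

    Horizontal error per epoch is the 2D distance in the east-north plane;
    vertical error is the signed up-component difference.
    """
    horiz = [math.hypot(s[0] - r[0], s[1] - r[1])
             for s, r in zip(solution, reference)]
    vert = [s[2] - r[2] for s, r in zip(solution, reference)]
    return rmse(horiz), rmse(vert)
```

Running this over all evaluated flights would yield summary figures of the kind quoted above (e.g., RMSE horizontal and vertical in cm).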
Are restrained eaters accurate monitors of their intoxication? Results from a field experiment.
Buchholz, Laura J; Crowther, Janis H; Olds, R Scott; Smith, Kathryn E; Ridolfi, Danielle R
2013-04-01
Brief interventions encourage college students to eat more before drinking to prevent harm (Dimeff et al., 1999), although many women decrease their caloric intake (Giles et al., 2009) and the number of eating episodes (Luce et al., 2012) prior to drinking alcohol. Participants were 37 undergraduate women (24.3% Caucasian) who were recruited from a local bar district in the Midwest. This study examined whether changes in eating after intending to drink interacted with dietary restraint to predict accuracy of one's intoxication. Results indicated that changes in eating significantly moderated the relationship between dietary restraint and accuracy of one's intoxication level. After eating more food before intending to drink, women higher in restraint were more likely to overestimate their intoxication than women lower in restraint. There were no differences between women with high levels and low levels of dietary restraint in the accuracy of their intoxication after eating less food before intending to drink. Future research would benefit from examining interoceptive awareness as a possible mechanism involved in this relationship. Published by Elsevier Ltd.
Automatic localization of cerebral cortical malformations using fractal analysis.
De Luca, A; Arrigoni, F; Romaniello, R; Triulzi, F M; Peruzzo, D; Bertoldo, A
2016-08-21
Malformations of cortical development (MCDs) encompass a variety of brain disorders affecting the normal development and organization of the brain cortex. The relatively low incidence and the extreme heterogeneity of these disorders hamper the application of classical group level approaches for the detection of lesions. Here, we present a geometrical descriptor for a voxel level analysis based on fractal geometry, then define two similarity measures to detect the lesions at single subject level. The pipeline was applied to 15 normal children and nine pediatric patients affected by MCDs following two criteria, maximum accuracy (WACC) and minimization of false positives (FPR), and proved that our lesion detection algorithm is able to detect and locate abnormalities of the brain cortex with high specificity (WACC = 85%, FPR = 96%), sensitivity (WACC = 83%, FPR = 63%) and accuracy (WACC = 85%, FPR = 90%). The combination of global and local features proves to be effective, making the algorithm suitable for the detection of both focal and diffused malformations. Compared to other existing algorithms, this method shows higher accuracy and sensitivity.
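The specificity, sensitivity and accuracy figures quoted above can all be derived from a lesion-detection confusion table. A small self-contained sketch with illustrative counts and a hypothetical function name (not the authors' pipeline):

```python
def diagnostic_scores(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion counts.

    tp/fn: lesion voxels correctly/incorrectly flagged;
    tn/fp: normal voxels correctly/incorrectly passed.
    """
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

The two operating points described above (maximum accuracy vs. minimum false positives) simply trade fp against fn in such a table.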
NASA Technical Reports Server (NTRS)
Freedman, Adam; Hensley, Scott; Chapin, Elaine; Kroger, Peter; Hussain, Mushtaq; Allred, Bruce
1999-01-01
GeoSAR is an airborne, interferometric Synthetic Aperture Radar (IFSAR) system for terrain mapping, currently under development by a consortium including NASA's Jet Propulsion Laboratory (JPL), Calgis, Inc., a California mapping sciences company, and the California Department of Conservation (CalDOC), with funding provided by the U.S. Army Corps of Engineers Topographic Engineering Center (TEC) and the U.S. Defense Advanced Research Projects Agency (DARPA). IFSAR data processing requires high-accuracy platform position and attitude knowledge. On GeoSAR, these are provided by one or two Honeywell Embedded GPS Inertial Navigation Units (EGI) and an Ashtech Z12 GPS receiver. The EGIs provide real-time high-accuracy attitude and moderate-accuracy position data, while the Ashtech data, post-processed differentially with data from a nearby ground station using Ashtech PNAV software, provide high-accuracy differential GPS positions. These data are optimally combined using a Kalman filter within the GeoSAR motion measurement software, and the resultant position and orientation information are used to process the dual-frequency (X-band and P-band) radar data to generate high-accuracy, high-resolution terrain imagery and digital elevation models (DEMs). GeoSAR requirements specify sub-meter planimetric and vertical accuracies for the resultant DEMs. To achieve this, platform positioning errors well below one meter are needed. The goal of GeoSAR is to obtain 25 cm or better 3-D positions from the GPS systems on board the aircraft. By imaging a set of known point-target corner-cube reflectors, the GeoSAR system can be calibrated. This calibration process yields the true position of the aircraft with an uncertainty of 20-50 cm and thus allows an independent assessment of the accuracy of our GPS-based positioning systems.
We will present an overview of the GeoSAR motion measurement system, focusing on the use of GPS and the blending of position data from the various systems. We will present the results of our calibration studies that relate to the accuracy of the GPS positioning. We will discuss the effects these positioning errors have on the resultant DEM products and imagery.
NASA Astrophysics Data System (ADS)
Császár, Attila G.; Furtenbacher, T.; Tennyson, Jonathan; Bernath, Peter F.; Brown, Linda R.; Campargue, Alain; Daumont, Ludovic; Gamache, Robert R.; Hodges, Joseph T.; Naumenko, Olga V.; Polyansky, Oleg L.; Rothman, Laurence S.; Vandaele, Ann Carine; Zobov, Nikolai F.
2014-06-01
The results of an IUPAC Task Group formed in 2004 on "A Database of Water Transitions from Experiment and Theory" (Project No. 2004-035-1-100) are presented. Energy levels and recommended labels involving exact and approximate quantum numbers for the main isotopologues of water in the gas phase (H₂¹⁶O, H₂¹⁸O, H₂¹⁷O, HD¹⁶O, HD¹⁸O, HD¹⁷O, D₂¹⁶O, D₂¹⁸O, and D₂¹⁷O) are determined from measured transition wavenumbers. The transition wavenumbers and energy levels are validated using the MARVEL (measured active rotational-vibrational energy levels) approach and first-principles nuclear motion computations. The extensive datasets (e.g., more than 200,000 transitions have been handled for H₂¹⁶O) include lines and levels that are required for the analysis and synthesis of spectra, thermochemical applications, the construction of theoretical models, and the removal of spectral contamination by ubiquitous water lines. These datasets can also be used to assess where measurements are lacking for each isotopologue and to provide accurate frequencies for many yet-to-be-measured transitions. The lack of high-quality frequency calibration standards in the near infrared is identified as an issue that has hindered the determination of high-accuracy energy levels at higher frequencies. The generation of spectra using the MARVEL energy levels combined with transition intensities computed from high-accuracy ab initio dipole moment surfaces is discussed.
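At its core, a MARVEL-style analysis inverts measured transition wavenumbers into a consistent set of energy levels, since each wavenumber is the difference between an upper and a lower level. A toy least-squares sketch of that inversion (the real MARVEL procedure also weights measurements by their uncertainties and validates them; the names here are illustrative):

```python
import numpy as np

def fit_energy_levels(n_levels, transitions):
    """Least-squares energy levels from measured transitions.

    transitions: list of (upper_index, lower_index, wavenumber),
    each encoding E[upper] - E[lower] = wavenumber.
    Level 0 is pinned to zero energy to fix the overall offset.
    """
    rows, obs = [], []
    for up, lo, wn in transitions:
        row = np.zeros(n_levels)
        row[up], row[lo] = 1.0, -1.0
        rows.append(row)
        obs.append(wn)
    # Pin the ground state so the linear system has a unique solution.
    pin = np.zeros(n_levels)
    pin[0] = 1.0
    rows.append(pin)
    obs.append(0.0)
    levels, *_ = np.linalg.lstsq(np.array(rows), np.array(obs), rcond=None)
    return levels
```

With redundant (overdetermined) transitions, the least-squares solution averages out inconsistencies, which is what makes combination differences a useful validation tool.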
From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3d Documentation
NASA Astrophysics Data System (ADS)
D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.
2013-07-01
The research aims to optimize a workflow for architecture documentation: starting from panoramic photos, it draws on available instruments and technologies to propose an integrated, quick and low-cost solution for Virtual Architecture. The broader research background shows how to use spherical panoramic images for architectural metric survey. The input data (oriented panoramic photos), the level of reliability and Image-based Modeling methods constitute an integrated and flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in a virtual museum. The proposed work results from the integration and implementation of different techniques (Multi-Image Spherical Photogrammetry, Structure from Motion, Image-based Modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions for simulation and virtual reconstruction. VR tools enable the integration of different technologies and the development of new solutions for virtual navigation. Image-based Modeling techniques allow 3D model reconstruction with photorealistic, high-resolution texture. The high resolution of the panoramic photos, together with algorithms for panorama orientation and photogrammetric restitution, guarantees high accuracy and high-resolution texture. Automated techniques and their subsequent integration are the subject of this research. The data, suitably processed and integrated, provide different levels of analysis and virtual reconstruction, joining photogrammetric accuracy with the photorealistic performance of the shaped surfaces. Lastly, a new solution for virtual navigation is tested: within a single environment, it offers the chance to interact with a high-resolution oriented spherical panorama and the 3D reconstructed model at once.
Multispectral image fusion for illumination-invariant palmprint recognition.
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied.
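The image-level fusion step described above combines decomposition components from two spectral bands with per-level fusion coefficients. The sketch below is a generic illustration of that idea, not the authors' FABEMD implementation: it assumes each image has already been decomposed into aligned component stacks (BIMF-like levels) and that a weight per level has been chosen.

```python
import numpy as np

def fuse_components(components_a, components_b, weights):
    """Weighted per-level fusion of two stacks of decomposition components.

    components_a, components_b: lists of same-shape 2D arrays, one per level.
    weights: fusion coefficient per level; w blends level k of image A
    against (1 - w) of image B. The fused image is the sum over levels.
    """
    fused_levels = [w * a + (1.0 - w) * b
                    for a, b, w in zip(components_a, components_b, weights)]
    return np.sum(fused_levels, axis=0)
```

In the paper the coefficients come from a weighted Fisher criterion so that classes separate well in the fused space; here they are simply given as inputs.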
A kinematic/kinetic hybrid airplane simulator model : draft.
DOT National Transportation Integrated Search
2008-01-01
A kinematics-based flight model, for normal flight regimes, currently uses precise flight data to achieve a high level of aircraft realism. However, it was desired to further increase the model's accuracy, without a substantial increase in ...
LOW-LEVEL DIRECT CURRENT AMPLIFIER
Kerns, Q.A.
1959-05-01
A d-c amplifier is described. Modulation is provided between a d-c signal and an alternating current to give an output signal proportional to the d-c signal. The circuit has high sensitivity and accuracy. (T.R.H.)
Social stress reactivity alters reward and punishment learning.
Cavanagh, James F; Frank, Michael J; Allen, John J B
2011-06-01
To examine how stress affects cognitive functioning, individual differences in trait vulnerability (punishment sensitivity) and state reactivity (negative affect) to social evaluative threat were examined during concurrent reinforcement learning. Lower trait-level punishment sensitivity predicted better reward learning and poorer punishment learning; the opposite pattern was found in more punishment sensitive individuals. Increasing state-level negative affect was directly related to punishment learning accuracy in highly punishment sensitive individuals, but these measures were inversely related in less sensitive individuals. Combined electrophysiological measurement, performance accuracy and computational estimations of learning parameters suggest that trait and state vulnerability to stress alter cortico-striatal functioning during reinforcement learning, possibly mediated via medio-frontal cortical systems.
Distributed Timing and Localization (DiGiTaL)
NASA Technical Reports Server (NTRS)
D'Amico, Simone; Hunter, Roger C.; Baker, Christopher
2017-01-01
The Distributed Timing and Localization (DiGiTaL) system provides nanosatellite formations with unprecedented, centimeter-level navigation accuracy in real time and nanosecond-level time synchronization. This is achieved through the integration of a multi-constellation Global Navigation Satellite System (GNSS) receiver, a Chip-Scale Atomic Clock (CSAC), and a dedicated Inter-Satellite Link (ISL). In comparison, traditional single-spacecraft GNSS navigation solutions are accurate only to the meter level due to the sole use of coarse pseudo-range measurements. To meet the strict requirements of future miniaturized distributed space systems, DiGiTaL uses powerful error-cancelling combinations of raw carrier-phase measurements which are exchanged between the swarming nanosatellites through a decentralized network. A reduced-dynamics estimation architecture on board each individual nanosatellite processes the resulting millimeter-level noise measurements to reconstruct the full formation state with high accuracy.
Robinson, Michael D; Moeller, Sara K; Fetterman, Adam K
2010-10-01
Responsiveness to negative feedback has been seen as functional by those who emphasize the value of reflecting on such feedback in self-regulating problematic behaviors. On the other hand, the very same responsiveness has been viewed as dysfunctional by its link to punishment sensitivity and reactivity. The present 4 studies, involving 203 undergraduate participants, sought to reconcile such discrepant views in the context of the trait of neuroticism. In cognitive tasks, individuals were given error feedback when they made mistakes. It was found that greater tendencies to slow down following error feedback were associated with higher levels of accuracy at low levels of neuroticism but lower levels of accuracy at high levels of neuroticism. Individual differences in neuroticism thus appear crucial in understanding whether behavioral alterations following negative feedback reflect proactive versus reactive mechanisms and processes. Implications for understanding the processing basis of neuroticism and adaptive self-regulation are discussed.
Kårstad, S B; Kvello, O; Wichstrøm, L; Berg-Nielsen, T S
2014-05-01
Parents' ability to correctly perceive their child's skills has implications for how the child develops. In some studies, parents have shown to overestimate their child's abilities in areas such as IQ, memory and language. Emotion Comprehension (EC) is a skill central to children's emotion regulation, initially learned from their parents. In this cross-sectional study we first tested children's EC and then asked parents to estimate the child's performance. Thus, a measure of accuracy between child performance and parents' estimates was obtained. Subsequently, we obtained information on child and parent factors that might predict parents' accuracy in estimating their child's EC. Child EC and parental accuracy of estimation was tested by studying a community sample of 882 4-year-olds who completed the Test of Emotion Comprehension (TEC). The parents were instructed to guess their children's responses on the TEC. Predictors of parental accuracy of estimation were child actual performance on the TEC, child language comprehension, observed parent-child interaction, the education level of the parent, and child mental health. Ninety-one per cent of the parents overestimated their children's EC. On average, parents estimated that their 4-year-old children would display the level of EC corresponding to a 7-year-old. Accuracy of parental estimation was predicted by child high performance on the TEC, child advanced language comprehension, and more optimal parent-child interaction. Parents' ability to estimate the level of their child's EC was characterized by a substantial overestimation. The more competent the child, and the more sensitive and structuring the parent was interacting with the child, the more accurate the parent was in the estimation of their child's EC. © 2013 John Wiley & Sons Ltd.
Classification of vegetation in an open landscape using full-waveform airborne laser scanner data
NASA Astrophysics Data System (ADS)
Alexander, Cici; Deák, Balázs; Kania, Adam; Mücke, Werner; Heilmeier, Hermann
2015-09-01
Airborne laser scanning (ALS) is increasingly being used for the mapping of vegetation, although the focus so far has been on woody vegetation, and ALS data have only rarely been used for the classification of grassland vegetation. In this study, we classified the vegetation of an open alkali landscape, characterized by two Natura 2000 habitat types: Pannonic salt steppes and salt marshes and Pannonic loess steppic grasslands. We generated 18 variables from an ALS dataset collected in the growing (leaf-on) season. Elevation is a key factor determining the patterns of vegetation types in the landscape, and hence 3 additional variables were based on a digital terrain model (DTM) generated from an ALS dataset collected in the dormant (leaf-off) season. We classified the vegetation into 24 classes based on these 21 variables, at a pixel size of 1 m. Two groups of variables with and without the DTM-based variables were used in a Random Forest classifier, to estimate the influence of elevation, on the accuracy of the classification. The resulting classes at Level 4, based on associations, were aggregated at three levels - Level 3 (11 classes), Level 2 (8 classes) and Level 1 (5 classes) - based on species pool, site conditions and structure, and the accuracies were assessed. The classes were also aggregated based on Natura 2000 habitat types to assess the accuracy of the classification, and its usefulness for the monitoring of habitat quality. The vegetation could be classified into dry grasslands, wetlands, weeds, woody species and man-made features, at Level 1, with an accuracy of 0.79 (Cohen's kappa coefficient, κ). The accuracies at Levels 2-4 and the classification based on the Natura 2000 habitat types were κ: 0.76, 0.61, 0.51 and 0.69, respectively. 
Levels 1 and 2 provide suitable information for nature conservationists and land managers, while Levels 3 and 4 are especially useful for ecologists, geologists and soil scientists as they provide high resolution data on species distribution, vegetation patterns, soil properties and on their correlations. Including the DTM-based variables increased the accuracy (κ) from 0.73 to 0.79 for Level 1. These findings show that the structural and spectral attributes of ALS echoes can be used for the classification of open landscapes, especially those where vegetation is influenced by elevation, such as coastal salt marshes, sand dunes, karst or alluvial areas; in these cases, ALS has a distinct advantage over other remotely sensed data.
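The κ values reported above are Cohen's kappa, which corrects the observed classification agreement for the agreement expected by chance given the class frequencies. A small self-contained sketch (function name illustrative, independent of the Random Forest workflow):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (list of lists of counts).

    observed = proportion of diagonal (agreeing) counts;
    expected = chance agreement from row and column marginals.
    """
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(n)) / total
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(n)
    ) / total ** 2
    return (observed - expected) / (1 - expected)
```

Comparing κ with and without a variable group (here, the DTM-based variables) is a simple way to quantify that group's contribution to classification accuracy.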
NASA Astrophysics Data System (ADS)
Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo
2018-06-01
We developed a pinhole type gamma camera, using a compact detector module of a pixelated CdTe semiconductor, which has suitable sensitivity and quantitative accuracy for low dose rate fields. In order to improve the sensitivity of the pinhole type semiconductor gamma camera, we adopted three methods: a signal processing method to set the discriminating level lower, a high sensitivity pinhole collimator and a smoothing image filter that improves the efficiency of the source identification. We tested basic performances of the developed gamma camera and carefully examined effects of the three methods. From the sensitivity test, we found that the effective sensitivity was about 21 times higher than that of the gamma camera for high dose rate fields which we had previously developed. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low dose rate field test, and errors of measured dose rates with point sources were less than 7% in a dose rate accuracy test.
Yakubova, Gulnoza; Hughes, Elizabeth M; Hornberger, Erin
2015-09-01
The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention to teach mathematics problem-solving when working on word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe across students design of single-case methodology, three high school students with ASD completed the study. All three students demonstrated greater accuracy in solving fraction word problems and maintained accuracy levels at a 1-week follow-up.
Development of the One Centimeter Accuracy Geoid Model of Latvia for GNSS Measurements
NASA Astrophysics Data System (ADS)
Balodis, J.; Silabriedis, G.; Haritonova, D.; Kaļinka, M.; Janpaule, I.; Morozova, K.; Jumāre, I.; Mitrofanovs, I.; Zvirgzds, J.; Kaminskis, J.; Liepiņš, I.
2015-11-01
There is an urgent need for a highly accurate and reliable geoid model to enable prompt determination of normal heights from GNSS coordinate determination, given the high precision requirements in geodesy, building, and high-precision road construction. Additionally, the Latvian height system is in the process of transition from BAS-77 (the Baltic Height System) to the EVRS2007 system. The accuracy of the geoid model must approach a precision of about ∼1 cm in view of the Baltic Rail and other large projects. The use of all available and verified data sources is planned, including an enlarged set of GNSS/levelling data, gravimetric measurement data and, additionally, vertical deflection measurements over the territory of Latvia. The work is proceeding stepwise; here, the issue of GNSS reference network stability is discussed. In order to achieve a ∼1 cm precision geoid, a homogeneous high-precision GNSS network is required as a basis for ellipsoidal height determination at GNSS/levelling points. Both the LatPos and EUPOS®-Riga networks have been examined in this article.
Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark
2016-01-01
Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.
NASA Technical Reports Server (NTRS)
Sandford, Stephen P.
2010-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is one of four Tier 1 missions recommended by the recent NRC Decadal Survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to provide accurate, broadly acknowledged climate records that enable validated long-term climate projections, which in turn become the foundation for informed decisions on mitigation and adaptation policies addressing the effects of climate change on society. The CLARREO mission accomplishes this critical objective through rigorous SI-traceable decadal change observations that are sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. These same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. For the first time, CLARREO will make highly accurate, global, SI-traceable decadal change observations sensitive to the most critical, but least understood, climate forcings, responses, and feedbacks. The CLARREO breakthrough is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. The required accuracy levels are determined so that climate trend signals can be detected against a background of naturally occurring variability. Climate system natural variability therefore determines what level of accuracy is overkill, and what level is critical to obtain. In this sense, the CLARREO mission requirements are considered optimal from a science value perspective. The accuracy for decadal change traceability to SI standards includes uncertainties associated with instrument calibration, satellite orbit sampling, and analysis methods.
Unlike most space missions, the CLARREO requirements are driven not by the instantaneous accuracy of the measurements, but by accuracy in the large time/space scale averages that are key to understanding decadal changes.
Lyons, Mark; Al-Nakeeb, Yahya; Nevill, Alan
2006-01-01
Despite the acknowledged importance of fatigue on performance in sport, ecologically sound studies investigating fatigue and its effects on sport-specific skills are surprisingly rare. The aim of this study was to investigate the effect of moderate and high intensity total body fatigue on passing accuracy in expert and novice basketball players. Ten novice basketball players (age: 23.30 ± 1.05 yrs) and ten expert basketball players (age: 22.50 ± 0.41 yrs) volunteered to participate in the study. Both groups performed the modified AAHPERD Basketball Passing Test under three different testing conditions: rest, moderate intensity and high intensity total body fatigue. Fatigue intensity was established using a percentage of the maximal number of squat thrusts performed by the participant in one minute. ANOVA with repeated measures revealed a significant (F(2,36) = 5.252, p = 0.01) level of fatigue by level of skill interaction. On examination of the mean scores it is clear that following high intensity total body fatigue there is a significant detriment in the passing performance of both novice and expert basketball players when compared to their resting scores. Fundamentally, however, the detrimental impact of fatigue on passing performance is not as steep in the expert players compared to the novice players. The results suggest that expert or skilled players are better able to cope with both moderate and high intensity fatigue conditions and maintain a higher level of performance when compared to novice players. The findings of this research therefore suggest the need for trainers and conditioning coaches in basketball to include moderate, but particularly high intensity exercise in their skills sessions. This specific training may enable players at all levels of the game to better cope with the demands of the game on court and maintain a higher standard of play.
Key Points Aim: to investigate the effect of moderate and high intensity total body fatigue on basketball-passing accuracy in expert and novice basketball players. Fatigue intensity was set as a percentage of the maximal number of squat thrusts performed by the participant in one minute. ANOVA with repeated measures revealed a significant level of fatigue by level of skill interaction. Despite a significant detriment in passing performance in both novice and expert players following high intensity total body fatigue, this detriment was not as steep in the expert players when compared to the novice players. PMID:24259994
Kumar, Ravindra; Kumari, Bandana; Srivastava, Abhishikha; Kumar, Manish
2014-10-29
Nuclear receptor proteins (NRP) are transcription factors that regulate many vital cellular processes in animal cells. NRPs form a super-family of phylogenetically related proteins and are divided into different sub-families on the basis of ligand characteristics and their functions. In the post-genomic era, when new proteins are being added to the database in a high-throughput mode, it becomes imperative to identify new NRPs using information from the amino acid sequence alone. In this study we report an SVM-based two-level prediction system, NRfamPred, using dipeptide composition of proteins as input. At the 1st level, NRfamPred screens whether the query protein is an NRP or non-NRP; if the query protein belongs to the NRP class, prediction moves to the 2nd level and predicts the sub-family. Using leave-one-out cross-validation, we achieved an overall accuracy of 97.88% at the 1st level and an overall accuracy of 98.11% at the 2nd level with dipeptide composition. Benchmarking on independent datasets showed that NRfamPred had accuracy comparable to other existing methods developed on the same dataset. Our method predicted the existence of 76 NRPs in the human proteome, of which 14 are novel NRPs. NRfamPred also predicted the sub-families of these 14 NRPs.
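The dipeptide-composition input described above can be sketched as a minimal, illustrative feature extractor; the function and variable names here are ours, not from the NRfamPred code:

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]  # all 400 pairs

def dipeptide_composition(seq):
    """Return the 400-dimensional dipeptide frequency vector for a protein sequence."""
    seq = seq.upper()
    counts = dict.fromkeys(DIPEPTIDES, 0)
    for i in range(len(seq) - 1):
        pair = seq[i:i + 2]
        if pair in counts:  # skip pairs containing non-standard residues
            counts[pair] += 1
    total = max(len(seq) - 1, 1)
    return [counts[dp] / total for dp in DIPEPTIDES]
```

Each protein, regardless of length, maps to a fixed 400-dimensional frequency vector, which is what makes it a convenient SVM input.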
A space system for high-accuracy global time and frequency comparison of clocks
NASA Technical Reports Server (NTRS)
Decher, R.; Allan, D. W.; Alley, C. O.; Vessot, R. F. C.; Winkler, G. M. R.
1981-01-01
A Space Shuttle experiment in which a hydrogen maser clock on board the Space Shuttle will be compared with clocks on the ground using two-way microwave and short pulse laser signals is described. The accuracy goal for the experiment is 1 nsec or better for the time transfer and 10 to the minus 14th power for the frequency comparison. A direct frequency comparison of primary standards at the 10 to the minus 14th power accuracy level is a unique feature of the proposed system. Both time and frequency transfer will be accomplished by microwave transmission, while the laser signals provide calibration of the system as well as subnanosecond time transfer.
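The two-way exchange cancels a reciprocal path delay, which is what enables nanosecond-level time transfer; a hedged sketch of the textbook two-way time-transfer arithmetic (a simplification, not the mission's actual processing):

```python
def two_way_clock_offset(t_ab_measured, t_ba_measured):
    """
    Two-way time transfer: each station measures the arrival time of the
    other's signal against its own clock.  If the path delay d is the same
    in both directions, the measured intervals are
        t_ab = d + offset   (A -> B, with B's clock ahead by `offset`)
        t_ba = d - offset   (B -> A)
    so half their difference recovers the offset and d cancels out.
    """
    return (t_ab_measured - t_ba_measured) / 2.0

# Example: a 3 ns clock offset over a 120 ms (illustrative) path delay.
d, offset = 0.120, 3e-9
recovered = two_way_clock_offset(d + offset, d - offset)  # ~3e-9 s
```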
Principles of operation, accuracy and precision of an Eye Surface Profiler.
Iskander, D Robert; Wachel, Pawel; Simpson, Patrick N D; Consejo, Alejandra; Jesus, Danilo A
2016-05-01
To introduce a newly developed instrument for measuring the topography of the anterior eye, to provide the principles of its operation, and to assess its accuracy and precision. The Eye Surface Profiler is a new technology based on Fourier transform profilometry for measuring the anterior eye surface encompassing the corneo-scleral area. Details of technical principles of operation are provided for the particular case of sequential double fringe projection. Technical limits of accuracy have been assessed for several key parameters such as the carrier frequency, image quantisation level, sensor size, carrier frequency inaccuracy, and level and type of noise. Further, results from both artificial test surfaces as well as real eyes are used to assess precision and accuracy of the device (here benchmarked against one of the popular Placido disk videokeratoscopes). Technically, the Eye Surface Profiler accuracy can reach levels below 1 μm for a range of considered key parameters. For the unit tested and using calibrated artificial surfaces, the accuracy of measurement (in terms of RMS error) was below 10 μm for a central measurement area of 8 mm diameter and below 40 μm for an extended measurement area of 16 mm. In some cases, the error reached levels of up to 200 μm at the very periphery of the measured surface (up to 20 mm). The SimK estimates of the test surfaces from the Eye Surface Profiler were in close agreement with those from a Placido disk videokeratoscope with differences no greater than ±0.1 D. For real eyes, the benchmarked accuracy was within ±0.5D for both the spherical and cylindrical SimK components. The Eye Surface Profiler can successfully measure the topography of the entire anterior eye including the cornea, limbus and sclera. It has a great potential to become an optometry clinical tool that could substitute the currently used videokeratoscopes and provide a high quality corneo-scleral topography.
© 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
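The RMS-error figures quoted in the abstract above are the usual root-mean-square deviation between measured and reference surface heights; a minimal sketch of that metric (illustrative only, not the instrument's own software):

```python
from math import sqrt

def rms_error(measured, reference):
    """Root-mean-square deviation between measured and reference surface heights.

    Both inputs are equal-length sequences of heights sampled over the same
    measurement grid, in the same units (e.g. micrometers).
    """
    n = len(measured)
    return sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference)) / n)

# e.g. three grid points with deviations of 0, 2 and 1 um from the reference:
err = rms_error([10.0, 12.0, 11.0], [10.0, 10.0, 10.0])
```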
Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision
Ender, Andreas; Mehl, Albert
2014-01-01
Reference scanners are used in dental medicine to verify many procedures. The main interest is to verify impression methods, as they serve as a base for dental restorations. The current limitation of many reference scanners is the lack of accuracy when scanning large objects like full dental arches, or the limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to highest local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Also, different model materials were verified. The results showed a high scanning accuracy of the reference scanner with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in case of full arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects like single tooth surfaces can be scanned with an even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many dental research fields. The different magnification levels combined with a high local and general accuracy can be used to assess changes from single teeth or restorations up to full arch changes. PMID:24836007
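Trueness and precision, as reported above, are conventionally computed as the mean deviation from a reference geometry and the mean pairwise deviation among repeated scans, respectively; a simplified scalar sketch (real analyses superimpose full 3D meshes before measuring deviations):

```python
from itertools import combinations
from statistics import mean

def trueness(deviations_from_reference):
    """Trueness: mean absolute deviation of scans from the reference geometry (um)."""
    return mean(abs(d) for d in deviations_from_reference)

def precision(repeated_scans):
    """Precision: mean absolute pairwise difference among repeated scans of one
    object (um).  Assumes at least two scans."""
    return mean(abs(a - b) for a, b in combinations(repeated_scans, 2))
```

Low trueness values mean the scans agree with the reference; low precision values mean the scans agree with each other, which can hold even when trueness is poor.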
Ortega, Alonso; Labrenz, Stephan; Markowitsch, Hans J; Piefke, Martina
2013-01-01
In the last decade, different statistical techniques have been introduced to improve assessment of malingering-related poor effort. In this context, we have recently shown preliminary evidence that a Bayesian latent group model may help to optimize classification accuracy using a simulation research design. In the present study, we conducted two analyses. Firstly, we evaluated how accurately this Bayesian approach can distinguish between participants answering in an honest way (honest response group) and participants feigning cognitive impairment (experimental malingering group). Secondly, we tested the accuracy of our model in the differentiation between patients who had real cognitive deficits (cognitively impaired group) and participants who belonged to the experimental malingering group. All Bayesian analyses were conducted using the raw scores of a visual recognition forced-choice task (2AFC), the Test of Memory Malingering (TOMM, Trial 2), and the Word Memory Test (WMT, primary effort subtests). The first analysis showed 100% accuracy for the Bayesian model in distinguishing participants of both groups with all effort measures. The second analysis showed outstanding overall accuracy of the Bayesian model when estimates were obtained from the 2AFC and the TOMM raw scores. Diagnostic accuracy of the Bayesian model diminished when using the WMT total raw scores. Despite this, overall diagnostic accuracy can still be considered excellent. The most plausible explanation for this decrement is the low performance in verbal recognition and fluency tasks of some patients of the cognitively impaired group. Additionally, the Bayesian model provides individual estimates, p(zi |D), of examinees' effort levels. In conclusion, both high classification accuracy levels and Bayesian individual estimates of effort may be very useful for clinicians when assessing for effort in medico-legal settings.
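The latent-group idea can be illustrated with a toy two-group Bayesian classifier for 2AFC raw scores; the group-level accuracies (0.95 and 0.55) and the 50/50 prior below are invented for illustration and are not estimates from the study:

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability of k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def posterior_malingering(correct, trials, p_honest=0.95, p_malinger=0.55, prior=0.5):
    """Toy latent-group inference: posterior probability that an examinee belongs
    to the malingering group, given their number of correct 2AFC responses.

    The two group accuracies and the prior are illustrative assumptions.
    """
    like_m = binom_pmf(correct, trials, p_malinger)
    like_h = binom_pmf(correct, trials, p_honest)
    evidence = prior * like_m + (1 - prior) * like_h
    return prior * like_m / evidence

# A low score (28/50, near chance) yields a high malingering posterior;
# a near-perfect score (49/50) yields a posterior near zero.
p_low, p_high = posterior_malingering(28, 50), posterior_malingering(49, 50)
```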
Hegde, Rahul J; Khare, Sumedh Suhas; Saraf, Tanvi A; Trivedi, Sonal; Naidu, Sonal
2015-01-01
Dental formation is superior to eruption as a method of dental age (DA) assessment. Eruption is only a brief occurrence, whereas formation may be related at different chronologic age levels, thereby providing a precise index for determining DA. The study was designed to determine the nature of inter-relationship between chronologic and DA. Age estimation depending upon tooth formation was done by Demirjian method and accuracy of Demirjian method was also evaluated. The sample for the study consisted of 197 children of Navi Mumbai. Significant positive correlation was found between chronologic age and DA that is, (r = 0.995), (P < 0.0001) for boys and (r = 0.995), (P < 0.0001) for girls. When age estimation was done by Demirjian method, mean the difference between true age (chronologic age) and assessed (DA) was 2 days for boys and 37 days for girls. Demirjian method showed high accuracy when applied to Navi Mumbai (Maharashtra - India) population. Demirjian method showed high accuracy when applied to Navi Mumbai (Maharashtra - India) population.
Reproducibility of UAV-based earth surface topography based on structure-from-motion algorithms.
NASA Astrophysics Data System (ADS)
Clapuyt, François; Vanacker, Veerle; Van Oost, Kristof
2014-05-01
A representation of the earth surface at very high spatial resolution is crucial to accurately map small geomorphic landforms with high precision. Very high resolution digital surface models (DSM) can then be used to quantify changes in earth surface topography over time, based on differencing of DSMs taken at various moments in time. However, it is compulsory to have both high accuracy for each topographic representation and consistency between measurements over time, as DSM differencing automatically leads to error propagation. This study investigates the reproducibility of reconstructions of earth surface topography based on structure-from-motion (SFM) algorithms. To this end, we equipped an eight-propeller drone with a standard reflex camera. This equipment can easily be deployed in the field, as it is a lightweight, low-cost system in comparison with classic aerial photo surveys and terrestrial or airborne LiDAR scanning. Four sets of aerial photographs were created for one test field. The sets of airphotos differ in focal length and viewing angles, i.e. nadir view and ground-level view. In addition, the importance of the accuracy of ground control points for the construction of a georeferenced point cloud was assessed using two different GPS devices with horizontal accuracies at the sub-meter and sub-decimeter level, respectively. Airphoto datasets were processed with the SFM algorithm and the resulting point clouds were georeferenced. Then, the surface representations were compared with each other to assess the reproducibility of the earth surface topography. Finally, consistency between independent datasets is discussed.
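DSM differencing propagates the error of both surveys; assuming independent errors, the combined uncertainty follows the usual quadrature rule, sketched below with a significance threshold (our simplification, not the authors' workflow; the k = 1.96 level corresponds to 95% confidence):

```python
from math import sqrt

def dod_uncertainty(sigma1, sigma2):
    """Propagated 1-sigma error of a DSM of difference, assuming independent errors."""
    return sqrt(sigma1 ** 2 + sigma2 ** 2)

def significant_change(dz, sigma1, sigma2, k=1.96):
    """True if an elevation change dz exceeds the k-sigma detection threshold."""
    return abs(dz) > k * dod_uncertainty(sigma1, sigma2)

# Two surveys georeferenced with sub-decimeter GPS (illustrative 0.05 m each):
# only changes above ~0.14 m (1.96 * 0.07 m) are detectable with confidence.
detectable = significant_change(0.3, 0.05, 0.05)
```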
Developing a weighted measure of speech sound accuracy.
Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J
2011-02-01
To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
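A differential weighting scheme of the WSSA kind can be sketched as follows; the error categories and weights here are hypothetical stand-ins, not the published WSSA weights:

```python
# Illustrative error weights: larger penalty for less accurate productions.
ERROR_WEIGHTS = {
    "correct": 0.0,       # target phone produced accurately
    "distortion": 1.0,    # close approximation of the target
    "substitution": 2.0,  # a different phone produced for the target
    "omission": 3.0,      # target phone deleted entirely
}

def weighted_accuracy(transcribed_errors):
    """Score in [0, 1]: 1 = all phones correct, 0 = all phones omitted.

    `transcribed_errors` is one error category per target phone; assumes a
    non-empty transcription.
    """
    max_penalty = ERROR_WEIGHTS["omission"] * len(transcribed_errors)
    penalty = sum(ERROR_WEIGHTS[e] for e in transcribed_errors)
    return 1.0 - penalty / max_penalty

score = weighted_accuracy(["correct", "distortion", "substitution", "correct"])
```

Unlike a binary correct/incorrect count, such a score credits near-accurate productions, which is what lets it capture gradual growth in toddlers' phonetic accuracy.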
Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar
2015-01-01
For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of single frequency, Sun's prediction equations, at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such a slight improvement in accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
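The two agreement statistics used in this comparison, Bland-Altman limits of agreement and the Mean Absolute Percentage Error, can be computed as in this minimal sketch (illustrative, with invented example values, not the study's data):

```python
from statistics import mean, stdev

def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement between two paired sets of TBW estimates."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def mape(predicted, reference):
    """Mean Absolute Percentage Error of predictions against a reference method."""
    return 100.0 * mean(abs(p - r) / r for p, r in zip(predicted, reference))

# Illustrative TBW values in liters:
bias, (lo, hi) = bland_altman_limits([40.2, 37.9, 45.1], [40.0, 38.5, 44.0])
err_pct = mape([40.2, 37.9, 45.1], [40.0, 38.5, 44.0])
```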
Pakkala, T; Kuusela, L; Ekholm, M; Wenzel, A; Haiter-Neto, F; Kortesniemi, M
2012-01-01
In clinical practice, digital radiographs taken for caries diagnostics are viewed on varying types of displays and usually in relatively high ambient lighting (room illuminance) conditions. Our purpose was to assess the effect of room illuminance and varying display types on caries diagnostic accuracy in digital dental radiographs. Previous studies have shown that the diagnostic accuracy of caries detection is significantly better in reduced lighting conditions. Our hypothesis was that higher display luminance could compensate for this in higher ambient lighting conditions. Extracted human teeth with approximal surfaces clinically ranging from sound to demineralized were radiographed and evaluated by 3 observers who detected carious lesions on 3 different types of displays in 3 different room illuminance settings ranging from low illumination, i.e. what is recommended for diagnostic viewing, to higher illumination levels corresponding to those found in an average dental office. Sectioning and microscopy of the teeth validated the presence or absence of a carious lesion. Sensitivity, specificity and accuracy were calculated for each modality and observer. Differences were estimated by analyzing the binary data assuming the added effects of observer and modality in a generalized linear model. The observers obtained higher sensitivities in lower illuminance settings than in higher illuminance settings. However, this was related to a reduction in specificity, which meant that there was no significant difference in overall accuracy. Contrary to our hypothesis, there were no significant differences between the accuracy of different display types. Therefore, different displays and room illuminance levels did not affect the overall accuracy of radiographic caries detection. Copyright © 2012 S. Karger AG, Basel.
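The sensitivity, specificity, and overall accuracy reported per modality and observer follow directly from the binary confusion counts against the microscopy gold standard; a minimal sketch (assumes both sound and carious surfaces are present in the sample):

```python
def diagnostic_metrics(predictions, truths):
    """Sensitivity, specificity and accuracy from binary caries calls.

    `predictions`: observer's call per surface (True = lesion detected).
    `truths`: microscopy validation per surface (True = lesion present).
    """
    tp = sum(p and t for p, t in zip(predictions, truths))
    tn = sum(not p and not t for p, t in zip(predictions, truths))
    fp = sum(p and not t for p, t in zip(predictions, truths))
    fn = sum(not p and t for p, t in zip(predictions, truths))
    sensitivity = tp / (tp + fn)   # lesions correctly detected
    specificity = tn / (tn + fp)   # sound surfaces correctly cleared
    accuracy = (tp + tn) / len(truths)
    return sensitivity, specificity, accuracy
```

The trade-off described in the abstract is visible in these formulas: calling more surfaces carious raises sensitivity but lowers specificity, so overall accuracy can stay flat.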
A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows
Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...
2015-03-11
High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, it took a factor of 24 less wall-time to obtain a given level of error with the fourth-order CENO scheme than to obtain the same error with the second-order scheme.
Semantic Segmentation of Forest Stands of Pure Species as a Global Optimization Problem
NASA Astrophysics Data System (ADS)
Dechesne, C.; Mallet, C.; Le Bris, A.; Gouet-Brunet, V.
2017-05-01
Forest stand delineation is a fundamental task for forest management purposes, that is still mainly manually performed through visual inspection of geospatial (very) high spatial resolution images. Stand detection has been barely addressed in the literature, which has mainly focused, in forested environments, on individual tree extraction and tree species classification. From a methodological point of view, stand detection can be considered as a semantic segmentation problem. It offers two advantages. First, one can retrieve the dominant tree species per segment. Second, one can benefit from existing low-level tree species label maps from the literature as a basis for high-level object extraction. Thus, the semantic segmentation issue becomes a regularization issue in a weakly structured environment and can be formulated in an energetical framework. This paper aims at investigating which regularization strategies of the literature are the most adapted to delineate and classify forest stands of pure species. Both airborne lidar point clouds and multispectral very high spatial resolution images are integrated for that purpose. The local methods (such as filtering and probabilistic relaxation) are not adapted for such a problem, since the increase of the classification accuracy is below 5%. The global methods, based on an energy model, tend to be more efficient, with an accuracy gain up to 15%. The segmentation results using such models have an accuracy ranging from 96% to 99%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Yufeng; Tolic, Nikola; Purvine, Samuel O.
2011-11-07
The peptidome (i.e. processed and degraded forms of proteins) of e.g. blood can potentially provide insights into disease processes, as well as a source of candidate biomarkers that are unobtainable using conventional bottom-up proteomics approaches. MS dissociation methods, including CID, HCD, and ETD, can each contribute distinct identifications using conventional peptide identification methods (Shen et al. J. Proteome Res. 2011), but such samples still pose significant analysis and informatics challenges. In this work, we explored a simple approach for better utilization of high accuracy fragment ion mass measurements provided e.g. by FT MS/MS and demonstrate significant improvements relative to conventional descriptive and probabilistic scoring methods. For example, at the same FDR level we identified 20-40% more peptides than SEQUEST and Mascot scoring methods using high accuracy fragment ion information (e.g., <10 mass errors) from CID, HCD, and ETD spectra. Species identified covered >90% of all those identified from SEQUEST, Mascot, and MS-GF scoring methods. Additionally, we found that merging the different fragment spectra provided >60% more species using the UStags method than achieved previously, and enabled >1000 peptidome components to be identified from a single human blood plasma sample with a 0.6% peptide-level FDR, providing an improved basis for investigation of potentially disease-related peptidome components.
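Matching observed fragment ions to theoretical m/z values within a narrow ppm window is the core of the high-accuracy filtering described above; a generic sketch (our illustration, not the UStags implementation):

```python
def ppm_error(observed_mz, theoretical_mz):
    """Mass measurement error in parts-per-million."""
    return 1e6 * (observed_mz - theoretical_mz) / theoretical_mz

def match_fragments(observed, theoretical, tol_ppm=10.0):
    """Count observed fragment ions matching any theoretical m/z within tol_ppm."""
    return sum(
        any(abs(ppm_error(o, t)) <= tol_ppm for t in theoretical)
        for o in observed
    )

# An ion at 500.0049 is ~9.8 ppm from 500.0, so it matches at 10 ppm tolerance;
# an ion at 600.1 matches nothing in this (illustrative) theoretical list.
n_matched = match_fragments([500.0049, 600.1], [500.0, 300.0])
```

Tightening the ppm tolerance is what discriminates correct from chance fragment matches, which is why high-accuracy FT MS/MS spectra support higher identification rates at a fixed FDR.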
Performance of the fiber-optic low-coherent ground settlement sensor: From lab to field
NASA Astrophysics Data System (ADS)
Guo, Jingjing; Tan, Yanbin; Peng, Li; Chen, Jisong; Wei, Chuanjun; Zhang, Pinglei; Zhang, Tianhang; Alrabeei, Salah; Zhang, Zhe; Sun, Changsen
2018-04-01
A fiber-optic low-coherent interferometry sensor was developed to measure ground settlement (GS) with micrometer-level accuracy. The sensor combined optical techniques with liquid-containing chambers that were hydraulically connected at the bottom by a water-filled tube. The liquid surface inside each chamber was initially at the same level. Optical interferometry was employed to read out the liquid-level changes that follow the GS at the location of each chamber, from which the GS was calculated. Laboratory tests demonstrated its potential for practical application. Here, denoising algorithms tailored to the specific environment were applied to the measurement signal to ensure the accuracy and stability of the system in field applications. After that, we extended this technique to the high-speed railway. Five days of continuous measurement proved that the designed system could monitor the GS of high-speed railway piers, achieving an accuracy of ±70 μm in the field with a reference compensation sensor. The performance of the sensor was therefore suitable for the GS monitoring problem in the high-speed railway, where the difficulty is to meet the monitoring requirement of both a large spatial span and very small, slow changes.
Cheng, Qi; Xue, Dabin; Wang, Guanyu; Ochieng, Washington Yotto
2017-01-01
The increasing number of vehicles in modern cities brings the problem of increasing crashes. One of the applications or services of Intelligent Transportation Systems (ITS) conceived to improve safety and reduce congestion is collision avoidance. This safety critical application requires sub-meter level vehicle state estimation accuracy with very high integrity, continuity and availability, to detect an impending collision and issue a warning or intervene in the case that the warning is not heeded. Because of the challenging city environment, to date there is no approved method capable of delivering this high level of performance in vehicle state estimation. In particular, the current Global Navigation Satellite System (GNSS) based collision avoidance systems have the major limitation that the real-time accuracy of dynamic state estimation deteriorates during abrupt acceleration and deceleration situations, compromising the integrity of collision avoidance. Therefore, to provide the Required Navigation Performance (RNP) for collision avoidance, this paper proposes a novel Particle Filter (PF) based model for the integration or fusion of real-time kinematic (RTK) GNSS position solutions with electronic compass and road segment data used in conjunction with an Autoregressive (AR) motion model. The real-time vehicle state estimates are used together with distance based collision avoidance algorithms to predict potential collisions. The algorithms are tested by simulation and in the field representing a low density urban environment. The results show that the proposed algorithm meets the horizontal positioning accuracy requirement for collision avoidance and is superior to positioning accuracy of GNSS only, traditional Constant Velocity (CV) and Constant Acceleration (CA) based motion models, with a significant improvement in the prediction accuracy of potential collision. PMID:29186851
Schaufele, Fred
2013-01-01
Förster resonance energy transfer (FRET) between fluorescent proteins (FPs) provides insights into the proximities and orientations of FPs as surrogates of the biochemical interactions and structures of the factors to which the FPs are genetically fused. As powerful as FRET methods are, technical issues have impeded their broad adoption in the biologic sciences. One hurdle to accurate and reproducible FRET microscopy measurement stems from variable fluorescence backgrounds both within a field and between different fields. Those variations introduce errors into the precise quantification of fluorescence levels on which the quantitative accuracy of FRET measurement is highly dependent. This measurement error is particularly problematic for screening campaigns since minimal well-to-well variation is necessary to faithfully identify wells with altered values. High content screening depends also upon maximizing the numbers of cells imaged, which is best achieved by low magnification high throughput microscopy. But, low magnification introduces flat-field correction issues that degrade the accuracy of background correction to cause poor reproducibility in FRET measurement. For live cell imaging, fluorescence of cell culture media in the fluorescence collection channels for the FPs commonly used for FRET analysis is a high source of background error. These signal-to-noise problems are compounded by the desire to express proteins at biologically meaningful levels that may only be marginally above the strong fluorescence background. Here, techniques are presented that correct for background fluctuations. Accurate calculation of FRET is realized even from images in which a non-flat background is 10-fold higher than the signal. PMID:23927839
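A rough sketch of correcting a non-flat fluorescence background: estimate the background as a low percentile within coarse blocks and subtract it. The block size, percentile, and nearest-block upsampling are illustrative choices, not the techniques presented in the paper, which fit smoother surfaces and calibrate each channel.

```python
import numpy as np

def subtract_nonflat_background(img, block=32, pct=20):
    """Estimate a slowly varying background as a low percentile in coarse
    blocks, then subtract it. Assumes signal pixels are a minority within
    each block, so the low percentile tracks the background level."""
    h, w = img.shape
    bg = np.empty_like(img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = img[i:i + block, j:j + block]
            bg[i:i + block, j:j + block] = np.percentile(tile, pct)
    return img - bg
```

Even a background ramp much larger than the signal is largely removed, leaving the bright features on a near-zero baseline.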
NASA Astrophysics Data System (ADS)
House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor
2017-03-01
PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras can capture 3D pose information at several magnitudes lower cost, size, and weight. We propose to use Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and limited range and tracking accuracy is acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparing to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align CT image with phantom using landmark registration and display the CT image overlaid on the optical image. RESULTS: Accuracy of the camera yielded a median position error of 3.3mm (95th percentile 6.7mm) and orientation error of 1.6° (95th percentile 4.3°) in a 20x16x10cm workspace, constantly maintaining proper marker orientation. The model and surface correctly aligned demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
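The accuracy figures above (median and 95th-percentile position error against a reference tracker) can be computed as follows; the function name and array layout are assumptions.

```python
import numpy as np

def pose_error_stats(estimated_xyz, reference_xyz):
    """Median and 95th-percentile Euclidean position error between a
    low-cost tracker's pose estimates and a high-accuracy reference,
    the summary statistics reported in the study."""
    err = np.linalg.norm(np.asarray(estimated_xyz, float)
                         - np.asarray(reference_xyz, float), axis=1)
    return float(np.median(err)), float(np.percentile(err, 95))
```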
NASA Astrophysics Data System (ADS)
Vuković, Josip; Kos, Tomislav
2017-10-01
The ionosphere introduces positioning error in Global Navigation Satellite Systems (GNSS). There are several approaches for minimizing the error, with various levels of accuracy and different extents of coverage area. To model the state of the ionosphere in a region containing a low number of reference GNSS stations, a locally adapted NeQuick 2 model can be used. Data ingestion updates the model with the local level of ionization, enabling it to follow the observed changes in ionization levels. The NeQuick 2 model was adapted to local reference Total Electron Content (TEC) data using a single-station approach and evaluated using calibrated TEC data derived from 41 testing GNSS stations distributed around the data ingestion point. Its performance was observed in European middle latitudes in different ionospheric conditions over the period between 2011 and 2015. The modelling accuracy was evaluated in four azimuthal quadrants, with coverage radii calculated for three error thresholds: 12, 6 and 3 TEC Units (TECU). Diurnal change in the radii was observed for groups of days within periods of low and high solar activity and different seasons of the year. The statistical analysis was conducted on those groups of days, revealing trends in each of the groups, similarities between days within groups, and the 95th percentile radii as a practically applicable measure of model performance. In almost all cases the modelling accuracy was better than 12 TECU, this threshold having the biggest radius from the data ingestion point. Modelling accuracy better than 6 TECU was achieved within a reduced radius in all observed periods, while accuracy better than 3 TECU was reached only in summer. The calculated radii and interpolated error levels were presented on maps, which was especially useful in analyzing the model performance during the strongest geomagnetic storms of the observed period, each of which had a unique development and influence on model accuracy.
Although some of the storms severely degraded the model accuracy, during most of the disturbed periods the model could be used, but with lower accuracy than in the quiet geomagnetic conditions. The comprehensive analysis of locally adapted NeQuick 2 model performance highlighted the challenges of using the single point data ingestion applied to a large region in middle latitudes and determined the achievable radii for different error thresholds in various ionospheric conditions.
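A simplified reading of the per-threshold coverage radius used above: the largest distance from the ingestion point within which every test station's model error stays below the threshold. The data layout (distance, absolute error) is hypothetical.

```python
def coverage_radius(stations, threshold_tecu):
    """stations: iterable of (distance_km, abs_error_tecu) pairs.
    Returns the largest radius within which all stations meet the
    error threshold (a sketch, not the paper's exact procedure)."""
    radius = 0.0
    for dist, err in sorted(stations):
        if err > threshold_tecu:
            break
        radius = dist
    return radius
```

Tighter thresholds naturally yield smaller radii, matching the 12/6/3 TECU pattern reported.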
Direct Detection Doppler Lidar for Spaceborne Wind Measurement
NASA Technical Reports Server (NTRS)
Korb, C. Laurence; Flesia, Cristina
1999-01-01
The theory of double edge lidar techniques for measuring the atmospheric wind using aerosol and molecular backscatter is described. Two high spectral resolution filters with opposite slopes are located about the laser frequency for the aerosol-based measurement, or in the wings of the Rayleigh-Brillouin profile for the molecular measurement. This doubles the signal change per unit Doppler shift and improves the measurement accuracy by nearly a factor of 2 relative to the single edge technique. For the aerosol-based measurement, the use of two high resolution edge filters reduces the effects of the background Rayleigh scattering by as much as an order of magnitude and substantially improves the measurement accuracy. Also, we describe a method that allows the Rayleigh and aerosol components of the signal to be independently determined. A measurement accuracy of 1.2 m/s can be obtained for a signal level of 1000 detected photons, which corresponds to signal levels in the boundary layer. For the molecular-based measurement, we describe the use of a crossover region where the sensitivities of the molecular and aerosol-based measurements are equal. This desensitizes the molecular measurement to the effects of aerosol scattering and greatly simplifies the measurement. Simulations using a conical scanning spaceborne lidar at 355 nm give an accuracy of 2-3 m/s for altitudes of 2-15 km for a 1 km vertical resolution, a satellite altitude of 400 km, and a 200 km x 200 km spatial resolution.
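The factor-of-2 gain from opposite-slope filters can be illustrated in a linearized model; the function names and the linear filter-response assumption are illustrative, not the paper's formulation.

```python
def single_edge_change(doppler_shift, filter_slope):
    # Fractional transmission change of one high-resolution edge filter for
    # a given Doppler shift (linearized about the operating point).
    return filter_slope * doppler_shift

def double_edge_change(doppler_shift, filter_slope):
    # With two filters of opposite slope, channel A rises while channel B
    # falls, so the differential signal change is twice the single-edge
    # change -- the doubling of signal change per unit Doppler shift.
    change_a = filter_slope * doppler_shift
    change_b = -filter_slope * doppler_shift
    return change_a - change_b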
ERIC Educational Resources Information Center
Thompson, G. Brian; McKay, Michael F.; Fletcher-Flinn, Claire M.; Connelly, Vincent; Kaa, Richard T.; Ewing, Jason
2008-01-01
Two studies were conducted across three countries to examine samples of beginning readers without systematic explicit phonics who had reached the same level of word reading accuracy as comparison samples with high and moderate explicit phonics. Had they employed any compensatory learning to reach that level? Four hypotheses of compensatory…
Online resources for shoulder instability: what are patients reading?
Garcia, Grant H; Taylor, Samuel A; Dy, Christopher J; Christ, Alexander; Patel, Ronak M; Dines, Joshua S
2014-10-15
Evaluations of the medical literature suggest that many online sites provide poor-quality information. The purpose of our study was to investigate the value of online resources for patient education about shoulder instability. Three search terms ("shoulder instability," "loose shoulder," and "shoulder dislocation") were entered into three Internet search engines. Three orthopaedic residents independently gauged the quality and accuracy of the information with use of a set of predetermined scoring criteria, in addition to noting whether or not four potential surgery options were mentioned. The readability of the web sites was evaluated with use of the Flesch-Kincaid score. Eighty-two unique web sites were evaluated. Quality and accuracy were significantly higher with use of the term "shoulder instability" compared with the term "loose shoulder" (quality, p < 0.001; accuracy, p = 0.001). However, the reading level was significantly more advanced for the "shoulder instability" web sites (p < 0.001). Quality was significantly higher on web sites with reading levels above the eighth grade level (p = 0.001) (88% of web sites). Only twenty-three sites (28%) mentioned surgical options for shoulder instability, and of these, only eight mentioned thermal capsulorrhaphy as a primary treatment. Online information regarding shoulder instability is often inaccurate and/or at an inappropriately high reading level. The quality of information is highly dependent on the specific search term used. Clinicians need to be aware of the information that is available online and should help direct patients to proper sites and guide Internet search terms. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
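The Flesch-Kincaid grade level used above combines sentence length and syllables per word. Below is a self-contained sketch with a heuristic vowel-group syllable counter; published readability tools use more refined syllable rules, so scores may differ slightly.

```python
import re

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59.
    Syllables are approximated as runs of vowels, min. 1 per word."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59
```

Short, monosyllabic sentences score near or below grade 0, while dense polysyllabic prose scores far above the eighth-grade threshold discussed in the study.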
Application of a territorial-based filtering algorithm in turbomachinery blade design optimization
NASA Astrophysics Data System (ADS)
Bahrami, Salman; Khelghatibana, Maryam; Tribes, Christophe; Yi Lo, Suk; von Fellenberg, Sven; Trépanier, Jean-Yves; Guibault, François
2017-02-01
A territorial-based filtering algorithm (TBFA) is proposed as an integration tool in a multi-level design optimization methodology. The design evaluation burden is split between low- and high-cost levels in order to properly balance the cost and required accuracy in different design stages, based on the characteristics and requirements of the case at hand. TBFA is in charge of connecting those levels by selecting a given number of geometrically different promising solutions from the low-cost level to be evaluated in the high-cost level. Two test case studies, a Francis runner and a transonic fan rotor, have demonstrated the robustness and functionality of TBFA in real industrial optimization problems.
Behairy, Noha H.; Dorgham, Mohsen A.
2008-01-01
The aim of this study was to detect the accuracy of routine magnetic resonance imaging (MRI) done in different centres and its agreement with arthroscopy in meniscal and ligamentous injuries of the knee. We prospectively examined 70 patients ranging in age between 22 and 59 years. History taking, plain X-ray, clinical examination, routine MRI and arthroscopy were done for all patients. Sensitivity, specificity, accuracy, positive and negative predictive values, P value and kappa agreement measures were calculated. We found a sensitivity of 47 and 100%, specificity of 95 and 75% and accuracy of 73 and 78.5%, respectively, for the medial and lateral meniscus. A sensitivity of 77.8%, specificity of 100% and accuracy of 94% was noted for the anterior cruciate ligament (ACL). We found good kappa agreements (0.43 and 0.45) for both menisci and excellent agreement (0.84) for the ACL. MRI shows high accuracy and should be used as the primary diagnostic tool for selection of candidates for arthroscopy. Level of evidence: 4. PMID:18506445
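The agreement statistics reported above all derive from a 2x2 table of MRI findings versus arthroscopy. A minimal sketch (the counts in the usage example are invented, not the study's data):

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, PPV, NPV, and Cohen's kappa
    from true/false positive/negative counts."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / n
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    # Kappa: observed agreement corrected for chance agreement.
    p_obs = acc
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_obs - p_chance) / (1 - p_chance)
    return sens, spec, acc, ppv, npv, kappa
```

By the usual convention, kappa of 0.41-0.60 is "good/moderate" agreement and above 0.80 "excellent", matching the interpretation in the abstract.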
Shepherd, T; Teras, M; Beichel, RR; Boellaard, R; Bruynooghe, M; Dicken, V; Gooding, MJ; Julyan, PJ; Lee, JA; Lefèvre, S; Mix, M; Naranjo, V; Wu, X; Zaidi, H; Zeng, Z; Minn, H
2017-01-01
The impact of positron emission tomography (PET) on radiation therapy is held back by poor methods of defining functional volumes of interest. Many new software tools are being proposed for contouring target volumes but the different approaches are not adequately compared and their accuracy is poorly evaluated due to the ill-definition of ground truth. This paper compares the largest cohort to date of established, emerging and proposed PET contouring methods, in terms of accuracy and variability. We emphasize spatial accuracy and present a new metric that addresses the lack of unique ground truth. Thirty methods are used at 13 different institutions to contour functional volumes of interest in clinical PET/CT and a custom-built PET phantom representing typical problems in image guided radiotherapy. Contouring methods are grouped according to algorithmic type, level of interactivity and how they exploit structural information in hybrid images. Experiments reveal benefits of high levels of user interaction, as well as simultaneous visualization of CT images and PET gradients to guide interactive procedures. Method-wise evaluation identifies the danger of over-automation and the value of prior knowledge built into an algorithm. PMID:22692898
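One common spatial-accuracy metric for comparing contours is the Dice overlap between binary masks, sketched below. Note the paper proposes its own metric precisely to handle the lack of unique ground truth, so this is background context rather than the authors' method.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient: 2|A intersect B| / (|A| + |B|), from 0 (disjoint)
    to 1 (identical) for two binary contour masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```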
A portable meter for measuring low frequency currents in the human body.
Niple, J C; Daigle, J P; Zaffanella, L E; Sullivan, T; Kavet, R
2004-07-01
A portable meter has been developed for measuring low frequency currents that flow in the human body. Although the present version of the meter was specifically designed to measure 50/60 Hz "contact currents," the principles involved can be used with other low frequency body currents. Contact currents flow when the human body provides a conductive path between objects in the environment with different electrical potentials. The range of currents the meter detects is approximately 0.4-800 microA. This provides measurements of currents from the threshold of human perception (approximately 500 microA(RMS)) down to single microampere levels. The meter has a unique design, which utilizes the human subject's body impedance as the sensing element. Some of the advantages of this approach are high sensitivity, the ability to measure current flow in the majority of the body, and relative insensitivity to the current path connection points. Current measurement accuracy varies with the accuracy of the body impedance (resistance) measurement and different techniques can be used to obtain a desired level of accuracy. Techniques are available to achieve an estimated +/-20% accuracy. Copyright 2004 Wiley-Liss, Inc.
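Since the meter uses the body impedance as its sensing element, its current accuracy inherits the impedance-measurement accuracy. A first-order sketch (the function names and the worst-case sum rule are illustrative assumptions):

```python
def body_current_amps(voltage_v, body_impedance_ohm):
    # Ohm's-law estimate of the current through the body path.
    return voltage_v / body_impedance_ohm

def relative_current_error(rel_voltage_error, rel_impedance_error):
    # First-order propagation for I = V / Z: the worst-case relative error
    # in current is roughly the sum of the relative errors in the voltage
    # and impedance measurements.
    return rel_voltage_error + rel_impedance_error
```

Under this rule, reaching the stated ±20% current accuracy requires the combined voltage and impedance errors to stay within 20%.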
Fujisada, H.; Bailey, G.B.; Kelly, Glen G.; Hara, S.; Abrams, M.J.
2005-01-01
The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the National Aeronautics and Space Administration's Terra spacecraft has an along-track stereoscopic capability, using a near-infrared spectral band to acquire the stereo data. ASTER has two telescopes, one nadir-viewing and one backward-viewing, with a base-to-height ratio of 0.6. The spatial resolution is 15 m in the horizontal plane. Parameters such as the line-of-sight vectors and the pointing axis were adjusted during the initial operation period to generate Level-1 data products with high-quality stereo system performance. Evaluation of the digital elevation model (DEM) data was carried out separately by the Japanese and U.S. science teams using different DEM generation software and reference databases. The vertical accuracy of the DEM data generated from the Level-1A data is 20 m with 95% confidence, without ground control point (GCP) correction, for individual scenes. Geolocation accuracy, which is important for the DEM datasets, is better than 50 m; this appears to be limited by the spacecraft position accuracy. In addition, a slight increase in accuracy is observed when GCPs are used to generate the stereo data.
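The "20 m with 95% confidence" style of vertical-accuracy statement is conventionally derived from the RMSE of DEM-minus-reference heights under a Gaussian error assumption. A sketch (the function name and sample heights are illustrative):

```python
import numpy as np

def vertical_accuracy_95(dem_heights, ref_heights):
    """Linear error at 95% confidence (LE95 = 1.96 * RMSE) of DEM heights
    against reference heights, assuming Gaussian errors."""
    err = np.asarray(dem_heights, float) - np.asarray(ref_heights, float)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return 1.96 * rmse
```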
Mapping urban forest tree species using IKONOS imagery: preliminary results.
Pu, Ruiliang
2011-01-01
A stepwise masking system using high-resolution IKONOS imagery was developed to identify and map urban forest tree species/groups in the City of Tampa, Florida, USA. The eight species/groups consist of sand live oak (Quercus geminata), laurel oak (Quercus laurifolia), live oak (Quercus virginiana), magnolia (Magnolia grandiflora), pine (species group), palm (species group), camphor (Cinnamomum camphora), and red maple (Acer rubrum). The system was implemented with a soil-adjusted vegetation index (SAVI) threshold, textural information after running a low-pass filter, and a brightness threshold on the NIR band to separate tree canopies from non-vegetated areas and other vegetation types (e.g., grass/lawn), and to separate the tree canopies into sunlit and shadowed areas. A maximum likelihood classifier was used to identify and map forest type and species. After the IKONOS imagery was preprocessed, a total of nine spectral features were generated, including four spectral bands, three hue-intensity-saturation indices, one SAVI, and one texture image. The mapped results were examined against independent ground survey data. The experimental results indicate that when classifying all eight tree species/groups with the high-resolution IKONOS image data, the identification accuracy was very low and could not satisfy a practical application level; when merging the eight species/groups into four major species/groups, the average accuracy was still low (average accuracy = 73%, overall accuracy = 86%, and κ = 0.76 with sunlit test samples). Such low accuracy in identifying and mapping the urban tree species/groups is attributable to the low spatial resolution of the IKONOS image data relative to tree crown size, to the complex and variable background spectra affecting crown spectra, and to shadow/shading effects.
The preliminary results imply that to improve tree species identification accuracy and achieve a practical application level in urban areas, multi-temporal (multi-seasonal) or hyperspectral image data should be considered for use in the future.
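The SAVI threshold used in the masking system above follows Huete's (1988) standard formula, sketched here; the reflectance values in the test are invented examples.

```python
def savi(nir, red, soil_factor=0.5):
    """Soil-adjusted vegetation index: an NDVI-style ratio with a soil
    adjustment factor L (0.5 is the common default) that damps the
    influence of soil background on the index."""
    L = soil_factor
    return (1 + L) * (nir - red) / (nir + red + L)
```

Vegetated canopy pixels (high NIR, low red) score well above bare surfaces, which is what makes a simple SAVI threshold usable as the first masking step.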
Using Bluetooth proximity sensing to determine where office workers spend time at work.
Clark, Bronwyn K; Winkler, Elisabeth A; Brakenridge, Charlotte L; Trost, Stewart G; Healy, Genevieve N
2018-01-01
Most wearable devices that measure movement in workplaces cannot determine the context in which people spend time. This study examined the accuracy of Bluetooth sensing (10-second intervals) via the ActiGraph GT9X Link monitor to determine location in an office setting, using two simple, bespoke algorithms. For one work day (mean±SD 6.2±1.1 hours), 30 office workers (30% men, aged 38±11 years) simultaneously wore chest-mounted cameras (video recording) and Bluetooth-enabled monitors (initialised as receivers) on the wrist and thigh. Additional monitors (initialised as beacons) were placed in the entry, kitchen, photocopy room, corridors, and the wearer's office. Firstly, participant presence/absence at each location was predicted from the presence/absence of signals at that location (ignoring all other signals). Secondly, using the information gathered at multiple locations simultaneously, a simple heuristic model was used to predict at which location the participant was present. The Bluetooth-determined location for each algorithm was tested against the camera in terms of F-scores. When considering locations individually, the accuracy obtained was excellent in the office (F-score = 0.98 and 0.97 for thigh and wrist positions) but poor in other locations (F-score = 0.04 to 0.36), stemming primarily from a high false positive rate. The multi-location algorithm exhibited high accuracy for the office location (F-score = 0.97 for both wear positions). It also improved the F-scores obtained in the remaining locations, but not always to levels indicating good accuracy (e.g., F-score for photocopy room ≈0.1 in both wear positions). The Bluetooth signalling function shows promise for determining where workers spend most of their time (i.e., their office). 
Placing beacons in multiple locations and using a rule-based decision model improved classification accuracy; however, for workplace locations visited infrequently or with considerable movement, accuracy was below desirable levels. Further development of algorithms is warranted.
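The F-scores above combine precision and recall of the interval-by-interval location predictions. A minimal sketch from true/false positive and false negative counts (the counts in the test are invented):

```python
def f_score(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

A high false positive count drags the score down even at perfect recall, which is the failure mode the study observed for infrequently visited locations.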
Sea-level highstands during the Last interglacial (MIS 5e) in Mallorca
NASA Astrophysics Data System (ADS)
Lorscheid, T.; Stocchi, P.; Rovere, A.; Gómez-Pujol, L.; Mann, T.; Fornos, J. J.
2015-12-01
Last Interglacial deposits on the island of Mallorca (NW Mediterranean) have been the subject of research since the early 1960s (Butzer & Cuerda 1960). Although both the location and stratigraphy of MIS 5e outcrops on the island are well known, the elevation of relative sea level (RSL) markers around the island has never been measured with high-accuracy topographic techniques (e.g., DGPS), and the paleo-RSL has never been interpreted using a standardized definition of the indicative meaning of each RSL marker. In this study we present the results of two field trips aimed at measuring last interglacial deposits in Mallorca with high-accuracy GPS and at establishing, by surveying modern shorelines as analogs, indicative ranges and reference water levels for RSL markers across the island. Using a coupled earth-ice GIA model, we performed several model runs to investigate isostatic adjustment on the island since MIS 5e. These results are compared with the measured elevations of the deposits and discussed in terms of tectonics and eustasy.
Training and quality assurance with the Structured Clinical Interview for DSM-IV (SCID-I/P).
Ventura, J; Liberman, R P; Green, M F; Shaner, A; Mintz, J
1998-06-15
Accuracy in psychiatric diagnosis is critical for evaluating the suitability of the subjects for entry into research protocols and for establishing comparability of findings across study sites. However, training programs in the use of diagnostic instruments for research projects are not well systematized. Furthermore, little information has been published on the maintenance of interrater reliability of diagnostic assessments. At the UCLA Research Center for Major Mental Illnesses, a Training and Quality Assurance Program for SCID interviewers was used to evaluate interrater reliability and diagnostic accuracy. Although clinically experienced interviewers achieved better interrater reliability and overall diagnostic accuracy than neophyte interviewers, both groups were able to achieve and maintain high levels of interrater reliability, diagnostic accuracy, and interviewer skill. At the first quality assurance check after training, there were no significant differences between experienced and neophyte interviewers in interrater reliability or diagnostic accuracy. Standardization of training and quality assurance procedures within and across research projects may make research findings from study sites more comparable.
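Interrater reliability for categorical diagnoses such as SCID outcomes is conventionally summarized with Cohen's kappa, sketched below for two raters; the example labels are invented.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical labels:
    observed agreement corrected for chance agreement."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_chance = sum(c1[label] * c2[label] for label in c1) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)
```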
A Micro-Resonant Gas Sensor with Nanometer Clearance between the Pole Plates
Fu, Xiaorui; Xu, Lizhong
2018-01-01
In micro-resonant gas sensors, capacitive detection is widely used because of its simple structure. However, its shortcoming is a weak output signal caused by a small capacitance change. Here, we reduced the initial clearance between the pole plates to the nanometer level, increasing the capacitance between the pole plates and its change during resonator vibration. We propose a fabrication process for the micro-resonant gas sensor by which the initial clearance between the pole plates is reduced to the nanometer level, and a micro-resonant gas sensor with a 200 nm initial clearance was fabricated. With this sensor, resonant frequency shifts were measured on exposure to several different vapors, and high detection accuracies were obtained: 0.4 ppm per Hz shift for ethanol vapor, and 3 ppm and 0.5 ppm per Hz shift for hydrogen and ammonia vapors, respectively. PMID:29373546
Robust object tracking techniques for vision-based 3D motion analysis applications
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.
2016-04-01
Automated and accurate capture of an object's spatial motion is needed in a wide variety of applications, including industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the acquired data, along with convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, which are based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems offer advantages such as high acquisition speed and potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capture process. To provide high accuracy of the acquired spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes two to four machine vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms for detecting, identifying, and tracking similar targets, as well as for marker-less object motion capture, was developed and tested. Evaluation of the algorithms shows high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.
Overconfidence across the psychosis continuum: a calibration approach.
Balzan, Ryan P; Woodward, Todd S; Delfabbro, Paul; Moritz, Steffen
2016-11-01
An 'overconfidence in errors' bias has been consistently observed in people with schizophrenia relative to healthy controls, however, the bias is seldom found to be associated with delusional ideation. Using a more precise confidence-accuracy calibration measure of overconfidence, the present study aimed to explore whether the overconfidence bias is greater in people with higher delusional ideation. A sample of 25 participants with schizophrenia and 50 non-clinical controls (25 high- and 25 low-delusion-prone) completed 30 difficult trivia questions (accuracy <75%); 15 'half-scale' items required participants to indicate their level of confidence for accuracy, and the remaining 'confidence-range' items asked participants to provide lower/upper bounds in which they were 80% confident the true answer lay within. There was a trend towards higher overconfidence for half-scale items in the schizophrenia and high-delusion-prone groups, which reached statistical significance for confidence-range items. However, accuracy was particularly low in the two delusional groups and a significant negative correlation between clinical delusional scores and overconfidence was observed for half-scale items within the schizophrenia group. Evidence in support of an association between overconfidence and delusional ideation was therefore mixed. Inflated confidence-accuracy miscalibration for the two delusional groups may be better explained by their greater unawareness of their underperformance, rather than representing genuinely inflated overconfidence in errors.
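The calibration approach described above compares stated confidence to realized accuracy. A minimal sketch of that overconfidence score (the data in the test are invented, not study values):

```python
def overconfidence(confidences, correct):
    """Calibration-based overconfidence: mean stated confidence (0-1)
    minus observed proportion correct. Positive values indicate
    overconfidence; zero indicates perfect calibration."""
    mean_confidence = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_confidence - accuracy
```

For the "confidence-range" items, the analogous score is the 80% target coverage minus the observed fraction of true answers falling inside the stated bounds.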
Stress and emotional valence effects on children's versus adolescents' true and false memory.
Quas, Jodi A; Rush, Elizabeth B; Yim, Ilona S; Edelstein, Robin S; Otgaar, Henry; Smeets, Tom
2016-01-01
Despite considerable interest in understanding how stress influences memory accuracy and errors, particularly in children, methodological limitations have made it difficult to examine the effects of stress independent of the effects of the emotional valence of to-be-remembered information in developmental populations. In this study, we manipulated stress levels in 7-8- and 12-14-year-olds and then exposed them to negative, neutral, and positive word lists. Shortly afterward, we tested their recognition memory for the words and false memory for non-presented but related words. Adolescents in the high-stress condition were more accurate than those in the low-stress condition, while children's accuracy did not differ across stress conditions. Also, among adolescents, accuracy and errors were higher for the negative than positive words, while in children, word valence was unrelated to accuracy. Finally, increases in children's and adolescents' cortisol responses, especially in the high-stress condition, were related to greater accuracy but not false memories and only for positive emotional words. Findings suggest that stress at encoding, as well as the emotional content of to-be-remembered information, may influence memory in different ways across development, highlighting the need for greater complexity in existing models of true and false memory formation.
A Flexible Analysis Tool for the Quantitative Acoustic Assessment of Infant Cry
Reggiannini, Brian; Sheinkopf, Stephen J.; Silverman, Harvey F.; Li, Xiaoxue; Lester, Barry M.
2015-01-01
Purpose: In this article, the authors describe and validate the performance of a modern acoustic analyzer specifically designed for infant cry analysis. Method: Utilizing known algorithms, the authors developed a method to extract acoustic parameters describing infant cries from standard digital audio files. They used a frame length of 25 ms with a frame advance of 12.5 ms. Cepstral-based acoustic analysis proceeded in 2 phases, computing frame-level data and then organizing and summarizing this information within cry utterances. Using signal detection methods, the authors evaluated the accuracy of the automated system in determining voicing and detecting fundamental frequency (F0), as compared to voiced segments and pitch periods manually coded from spectrogram displays. Results: The system detected F0 with 88% to 95% accuracy, depending on tolerances set at 10 to 20 Hz. Receiver operating characteristic analyses demonstrated very high accuracy at detecting voicing characteristics in the cry samples. Conclusions: This article describes an automated infant cry analyzer with high accuracy in detecting important acoustic features of cry. A unique and important aspect of this work is the rigorous testing of the system's accuracy against ground-truth manual coding. The resulting system has implications for basic and applied research on infant cry development. PMID:23785178
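The frame-based cepstral analysis described above (25 ms frames, 12.5 ms advance, cepstral F0 search) can be sketched roughly as follows; the function, search range, and test signal are assumptions for illustration, not the authors' analyzer.

```python
import numpy as np

# Assumed sketch of cepstral F0 detection on 25 ms frames with a 12.5 ms
# advance; the quefrency search range and test tone are illustrative.
def cepstral_f0(signal, fs, fmin=150.0, fmax=1000.0):
    frame_len = int(0.025 * fs)            # 25 ms frame
    hop = int(0.0125 * fs)                 # 12.5 ms frame advance
    qmin, qmax = int(fs / fmax), int(fs / fmin)  # quefrency search window
    f0s = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hamming(frame_len)
        spectrum = np.abs(np.fft.rfft(frame)) + 1e-12
        cepstrum = np.fft.irfft(np.log(spectrum))
        peak = qmin + np.argmax(cepstrum[qmin:qmax])  # strongest rahmonic
        f0s.append(fs / peak)
    return np.array(f0s)

# Harmonic-rich 200 Hz test signal, only to exercise the code
fs = 8000
t = np.arange(fs) / fs
sig = sum(np.sin(2 * np.pi * 200 * k * t) for k in range(1, 8))
f0 = cepstral_f0(sig, fs)
```

The cepstral peak falls at the pitch period in samples, so F0 is recovered as the sampling rate divided by the peak quefrency.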
Manufacture of ultra high precision aerostatic bearings based on glass guide
NASA Astrophysics Data System (ADS)
Guo, Meng; Dai, Yifan; Peng, Xiaoqiang; Tie, Guipeng; Lai, Tao
2017-10-01
The aerostatic guides in traditional three-coordinate measuring machines and profilometers are generally made of metal or ceramic. Limited by the achievable guide machining precision, the measurement accuracy of these traditional instruments is around the micrometer level. By selecting an optical material as the guide material, optical processing methods and laser interference measurement can be introduced into traditional aerostatic bearing manufacturing. Using large-aperture wavefront interferometry, the form and position error of a glass guide can be measured with high accuracy, and the guide can then be processed to 0.1 μm or even better with the aid of Magnetorheological Finishing (MRF), Computer Controlled Optical Surfacing (CCOS), and other modern optical processing methods, so the accuracy of aerostatic bearings can be fundamentally improved and ultra-high-precision coordinate measurement achieved. This paper describes the fabrication and measurement of a K9 glass guide with a 300 mm measuring range: its working surface accuracy reaches 0.1 μm PV, the perpendicularity and parallelism errors between the two guide rail faces are better than 2 μm, and the straightness of the aerostatic bearing on this K9 glass guide reaches 40 nm after error compensation.
Gas-phase conformations of 2-methyl-1,3-dithiolane investigated by microwave spectroscopy
NASA Astrophysics Data System (ADS)
Van, Vinh; Stahl, Wolfgang; Schwell, Martin; Nguyen, Ha Vinh Lam
2018-03-01
The conformational analysis of 2-methyl-1,3-dithiolane using quantum chemical calculations at some levels of theory yielded only one stable conformer with envelope geometry. However, other levels of theory indicated two envelope conformers. Analysis of the microwave spectrum recorded using two molecular jet Fourier transform microwave spectrometers covering the frequency range from 2 to 40 GHz confirms that only one conformer exists under jet conditions. The experimental spectrum was reproduced using a rigid-rotor model with centrifugal distortion correction within the measurement accuracy of 1.5 kHz, and molecular parameters were determined with very high accuracy. The gas phase structure of the title molecule is compared with the structures of other related molecules studied under the same experimental conditions.
Research of three level match method about semantic web service based on ontology
NASA Astrophysics Data System (ADS)
Xiao, Jie; Cai, Fang
2011-10-01
An important step in applying Web services is the discovery of useful services. Traditional technologies such as UDDI and WSDL rely on keywords for service discovery, with the disadvantages of required user intervention, lack of semantic description, and low accuracy. To address these problems, OWL-S is introduced and extended with QoS attributes to describe the properties and functions of Web services. A three-level service matching algorithm based on ontology and QoS is proposed in this paper. The algorithm matches Web services using the service profile, QoS parameters, and the inputs and outputs of the service. Simulation results show that it greatly increases the speed of service matching while also guaranteeing high accuracy.
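A minimal sketch of what a three-level matcher of this kind might look like (profile level, QoS level, then input/output level, combined into one score). The degree labels, weights, and toy ontology below are assumptions for illustration, not the paper's algorithm.

```python
# Assumed matching degrees, in the spirit of common semantic matchers
DEGREES = {"exact": 1.0, "plugin": 0.8, "subsumes": 0.5, "fail": 0.0}

def match_concepts(advertised, requested, ontology):
    """ontology maps a concept to the set of its ancestor concepts."""
    if advertised == requested:
        return "exact"
    if requested in ontology.get(advertised, set()):
        return "plugin"     # advertised concept is more specific than requested
    if advertised in ontology.get(requested, set()):
        return "subsumes"   # advertised concept is more general than requested
    return "fail"

def three_level_score(profile_deg, qos_ratio, io_degs, w=(0.3, 0.3, 0.4)):
    """Weighted combination of profile match, QoS ratio, and I/O matches."""
    io_score = sum(DEGREES[d] for d in io_degs) / len(io_degs)
    return w[0] * DEGREES[profile_deg] + w[1] * qos_ratio + w[2] * io_score

# Toy ontology: Sedan is-a Car is-a Vehicle
onto = {"Sedan": {"Car", "Vehicle"}, "Car": {"Vehicle"}}
deg = match_concepts("Sedan", "Car", onto)
score = three_level_score("exact", 0.9, [deg, "exact"])
```

Candidate services would then be ranked by this score, with a cutoff below which a service is rejected.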
Real-Time Single Frequency Precise Point Positioning Using SBAS Corrections
Li, Liang; Jia, Chun; Zhao, Lin; Cheng, Jianhua; Liu, Jianxu; Ding, Jicheng
2016-01-01
Real-time single frequency precise point positioning (PPP) is a promising technique for high-precision navigation with sub-meter or even centimeter-level accuracy because of its convenience and low cost. The navigation performance of single frequency PPP depends heavily on the real-time availability and quality of correction products for satellite orbits and clocks. Satellite-based augmentation systems (SBAS) provide correction products in real time, but they are intended for wide-area differential positioning at the 1-meter precision level. By imposing constraints on the ionosphere error, we have developed a real-time single frequency PPP method that fully utilizes SBAS correction products. The proposed PPP method is tested with static and kinematic data, respectively. The static experimental results show that the position accuracy of the proposed PPP method can reach the decimeter level, an improvement of at least 30% over the traditional SBAS method. Positioning convergence of the proposed PPP method can be achieved in at most 636 epochs in static mode. In the kinematic experiment, the position accuracy of the proposed PPP method is improved by at least 20 cm relative to the SBAS method. Furthermore, the proposed PPP method can achieve decimeter-level convergence within 500 s in kinematic mode. PMID:27517930
Visual communication with Haitian women: a look at pictorial literacy.
Gustafson, M B
1986-06-01
A study of village women in Haiti presenting baseline data from their responses to stylized health education pictures is reported. The study questioned the assumption that pictorial messages are accurately recognized and self-explanatory to nonliterate Haitian village women. The investigator, using a descriptive survey, sought answers to a major and a related question: what do nonliterate Haitian village women recognize in selected health education pictures, and are there differences in picture recognition traceable to the complexity of the pictures? There were 110 women (25 from a mountain village, 25 from a plains village, 25 from a seacoast village, and 35 urban dwellers) who responded to 9 health education pictures. The women ranged in age from 18 to 80 years; 32 (29%) had gone to school for a range of an "unknown time" to 8 years, and 47% of those who had gone to school indicated that they could read. The investigator rated the verbatim responses to the pictures for accuracy as: accurate, overinclusive, underinclusive, inaccurate, and do not know. Quantitative analysis of the data revealed that accuracy levels decreased as the complexity level increased. This is best shown by the 129 (39%) accurate responses at the low complexity level, 6 (1.8%) at the moderate level, and no accurate responses at the high complexity level. An unexpected finding was that the highest number of inaccurate responses (n = 83, 25.1%) occurred at the low complexity level, while the moderate and high levels both showed 36 (10.8%). In addition to the differences in recognition accuracy by picture complexity, significant differences on the chi-square test confirmed that picture recognition is traceable to the complexity of the picture. These findings are consistent with the picture complexity studies of Holmes, Jelliffe, and Kwansa.
NASA Astrophysics Data System (ADS)
Wang, Y. M.; Becker, C.; Mader, G.; Martin, D.; Li, X.; Jiang, T.; Breidenbach, S.; Geoghegan, C.; Winester, D.; Guillaume, S.; Bürki, B.
2017-10-01
Three Geoid Slope Validation Surveys were planned by the National Geodetic Survey for validating geoid improvement gained by incorporating airborne gravity data collected by the "Gravity for the Redefinition of the American Vertical Datum" (GRAV-D) project in flat, medium and rough topographic areas, respectively. The first survey GSVS11 over a flat topographic area in Texas confirmed that a 1-cm differential accuracy geoid over baseline lengths between 0.4 and 320 km is achievable with GRAV-D data included (Smith et al. in J Geod 87:885-907, 2013). The second survey, Geoid Slope Validation Survey 2014 (GSVS14), took place in Iowa in an area with moderate topography but significant gravity variation. Two sets of geoidal heights were computed from GPS/leveling data and observed astrogeodetic deflections of the vertical at 204 GSVS14 official marks. They agree with each other at the ±1.2 cm level, which attests to the high quality of the GSVS14 data. In total, four geoid models were computed. Three models combined the GOCO03/5S satellite gravity model with terrestrial and GRAV-D gravity using different strategies. The fourth model, called xGEOID15A, had no airborne gravity data and served as the benchmark to quantify the contribution of GRAV-D to the geoid improvement. The comparisons show that each model agrees with the GPS/leveling geoid height to within 1.5 cm in mark-by-mark comparisons. In differential comparisons, all geoid models have a predicted accuracy of 1-2 cm at baseline lengths from 1.6 to 247 km. The contribution of GRAV-D is not apparent due to a 9-cm slope in the western 50-km section of the traverse for all gravimetric geoid models, and it was determined that the slope was caused by a 5 mGal bias in the terrestrial gravity data. If that western 50-km section of the testing line is excluded from the comparisons, the improvement with GRAV-D is clearly evident.
In that case, 1-cm differential accuracy on baselines of any length is achieved with the GRAV-D-enhanced geoid models and exhibits a clear improvement over the geoid models without GRAV-D data. GSVS14 confirmed that the geoid differential accuracies are in the 1-2 cm range at various baseline lengths. The accuracy increases to 1 cm with GRAV-D gravity when the west 50 km line is not included. The data collected by the surveys have high accuracy and have the potential to be used for validation of other geodetic techniques, e.g., the chronometric leveling. To reach the 1-cm height differences of the GSVS data, a clock with frequency accuracy of 10^{-18} is required. Using the GSVS data, the accuracy of ellipsoidal height differences can also be estimated.
Depressive symptoms in mothers and daughters: Attachment style moderates reporter agreement
Milan, Stephanie; Wortel, Sanne; Ramirez, Jenna; Oshin, Linda
2016-01-01
Parents and adolescents show only modest agreement when reporting on depressive symptoms. Drawing from attachment theory and previous research on informant discrepancies, we tested hypotheses about how adolescent attachment style may affect reporting agreement in a sample of 184 low-income mother-adolescent daughter dyads (adolescent mean age = 15.4 (SD = 1.05), maternal mean age = 41.4 (SD = 7.60); 58% Latina, 26% African-American/Black, 16% non-Hispanic White). Mothers and adolescents reported on their own and each other's depressive symptoms, and adolescents reported on attachment style. Using a moderated Actor-Partner Interdependence Model (APIM) to calculate reporter bias and accuracy estimates, we tested whether attachment style moderated maternal and adolescent accuracy in theoretically consistent ways. Mothers and adolescents showed similar levels of accuracy and bias when reporting on each other. Consistent with hypotheses, we found that adolescents who reported high levels of preoccupation were less accurate when reporting on their mothers because they tended to observe symptoms that their mothers did not endorse. Conversely, mothers were the most accurate in these dyads, potentially because preoccupied adolescents tend to elevate displays of emotional distress. Reporting accuracy was not affected by a dismissive style. These results add to literature indicating that parent-child reporting discrepancies often reflect meaningful information about relationships, and highlight the need to consider different sources of reporting bias and accuracy in assessment and treatment. PMID:27130142
NASA Astrophysics Data System (ADS)
Lieu, Richard
2018-01-01
A hierarchy of statistics of increasing sophistication and accuracy is proposed, to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware, rather it operates at the software level, with the help of high precision computers, to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the bolometric flux measurement of a radio source.
Cicero, Mark Xavier; Whitfill, Travis; Overly, Frank; Baird, Janette; Walsh, Barbara; Yarzebski, Jorge; Riera, Antonio; Adelgais, Kathleen; Meckler, Garth D; Baum, Carl; Cone, David Christopher; Auerbach, Marc
2017-01-01
Paramedics and emergency medical technicians (EMTs) triage pediatric disaster victims infrequently. The objective of this study was to measure the effect of a multiple-patient, multiple-simulation curriculum on accuracy of pediatric disaster triage (PDT). Paramedics, paramedic students, and EMTs from three sites were enrolled. Triage accuracy was measured three times (Time 0, Time 1 [two weeks later], and Time 2 [6 months later]) during a disaster simulation, in which high- and low-fidelity manikins and actors portrayed 10 victims. Accuracy was determined by participant triage decision concordance with the predetermined expected triage level (RED [Immediate], YELLOW [Delayed], GREEN [Ambulatory], BLACK [Deceased]) for each victim. Between Time 0 and Time 1, participants completed an interactive online module, and after each simulation there was an individual debriefing. Associations between participant level of training, years of experience, and enrollment site were determined, as were instances of the most dangerous mistriage, when RED and YELLOW victims were triaged BLACK. The study enrolled 331 participants, and the analysis included 261 (78.9%) participants who completed the study: 123 from the Connecticut site, 83 from Rhode Island, and 55 from Massachusetts. Triage accuracy improved significantly from Time 0 to Time 1, after the educational interventions (first simulation with debriefing, and an interactive online module), with a median 10% overall improvement (p < 0.001). Subgroup analyses showed that between Time 0 and Time 1, paramedics and paramedic students improved more than EMTs (p = 0.002). Analysis of triage accuracy showed the greatest improvement in overall accuracy for YELLOW triage patients (Time 0 50% accurate, Time 1 100%), followed by RED patients (Time 0 80%, Time 1 100%). There was no significant difference in accuracy between Time 1 and Time 2 (p = 0.073).
This study shows that the multiple-victim, multiple-simulation curriculum yields a durable 10% improvement in simulated triage accuracy. Future iterations of the curriculum can target greater improvements in EMT triage accuracy.
Lin, Zhichao; Wu, Zhongyu
2009-05-01
A rapid and reliable radiochemical method coupled with a simple and compact plating apparatus was developed, validated, and applied for the analysis of (210)Po in a variety of food products and bioassay samples. The method performance characteristics, including accuracy, precision, robustness, and specificity, were evaluated along with a detailed measurement uncertainty analysis. With high Po recovery, improved energy resolution, and effective removal of interfering elements by chromatographic extraction, the overall method accuracy was determined to be better than 5%, with a measurement precision of 10% at the 95% confidence level.
Geo-referenced digital data acquisition and processing system using LiDAR technology.
DOT National Transportation Integrated Search
2006-02-01
LiDAR technology, introduced in the late 90s, has received wide acceptance in airborne surveying as a leading : tool for obtaining high-quality surface data at decimeter-level vertical accuracy in an unprecedentedly short : turnaround time. State-of-...
Wang, Yan-peng; Gong, Qi; Yu, Sheng-rong; Liu, You-yan
2012-04-01
A method for detecting trace impurities in a high-concentration matrix by ICP-AES based on partial least squares (PLS) was established. The research showed that PLS could effectively correct the interference caused by high matrix concentrations and could tolerate higher matrix concentrations than multicomponent spectral fitting (MSF). When the mass ratios of matrix to impurities ranged from 1000:1 to 20 000:1, the standard-addition recoveries obtained by PLS were between 95% and 105%. For systems in which the interference effect is nonlinearly correlated with the matrix concentration, the prediction accuracy of the normal PLS method was poor, but it could be improved greatly by using LIN-PPLS, which is based on a matrix transformation of the sample concentrations. The contents of Co, Pb and Ga in stream sediment (GBW07312) were determined by MSF, PLS and LIN-PPLS, respectively. The results showed that the prediction accuracy of LIN-PPLS was better than that of PLS, and the prediction accuracy of PLS was better than that of MSF.
Testing of the high accuracy inertial navigation system in the Shuttle Avionics Integration Lab
NASA Technical Reports Server (NTRS)
Strachan, Russell L.; Evans, James M.
1991-01-01
The description, results, and interpretation of comparison testing between the High Accuracy Inertial Navigation System (HAINS) and the KT-70 Inertial Measurement Unit (IMU) are presented. The objective was to show that the HAINS can replace the KT-70 IMU in the space shuttle Orbiter, both individually and as a complete set. This testing was performed in the Guidance, Navigation, and Control Test Station (GTS) of the Shuttle Avionics Integration Lab (SAIL). A variety of differences between the two instruments are explained. Four 5-day test sessions were conducted, varying the number and slot position of the HAINS and KT-70 IMUs. The various steps in the calibration and alignment procedure are explained. Results and their interpretation are presented. The HAINS displayed a level of performance accuracy previously unseen with the KT-70 IMU. The most significant performance improvement came in the Tuned Inertial/Extended Launch Hold tests, where the HAINS exceeded the 4-hr specification requirement. The results obtained from the SAIL tests were generally well beyond the requirements of the procurement specification.
Wilson, Glenn F; Russell, Christopher A
The functional state of the human operator is critical to optimal system performance. Degraded states of operator functioning can lead to errors and overall suboptimal system performance. Accurate assessment of operator functional state is crucial to the successful implementation of an adaptive aiding system. One method of determining operators' functional state is by monitoring their physiology. In the present study, artificial neural networks using physiological signals were used to continuously monitor, in real time, the functional state of 7 participants while they performed the Multi-Attribute Task Battery with two levels of task difficulty. Six channels of brain electrical activity and eye, heart and respiration measures were evaluated online. The accuracy of the classifier was determined to test its utility as an online measure of operator state. The mean classification accuracies were 85%, 82%, and 86% for the baseline, low task difficulty, and high task difficulty conditions, respectively. The high levels of accuracy suggest that these procedures can be used to provide accurate estimates of operator functional state that can be used to provide adaptive aiding. The relative contribution of each of the 43 psychophysiological features was also determined. Actual or potential applications of this research include test and evaluation and adaptive aiding implementation.
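The classifier-on-physiological-features setup described above can be sketched with synthetic data; everything here (feature counts, class means, network size) is an assumption for illustration, not the study's data or model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Simulated feature vectors standing in for psychophysiological measures
# (e.g., EEG band power per channel, heart rate, respiration rate); the two
# classes represent low vs high task difficulty with shifted feature means.
rng = np.random.default_rng(0)
n = 200
low = rng.normal(0.0, 1.0, (n, 8))
high = rng.normal(1.5, 1.0, (n, 8))
X = np.vstack([low, high])
y = np.array([0] * n + [1] * n)

# Small feed-forward neural network, trained on half the data
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])   # held-out accuracy
```

In a real adaptive-aiding loop, such a classifier would be applied to a sliding window of live physiological features rather than a fixed test set.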
Döner, Rana Kaya; Sager, Sait; Görtan, Fatma Arzu; Topuz, Özge Vural; Akyel, Reşit; Vatankulu, Betül; Baran, Ahmet; Teksoz, Serkan; Sönmezoglu, Kerim
2016-01-01
This retrospective study aims to assess the cut-off value of thyroglobulin (Tg) levels in recurrent or metastatic well-differentiated thyroid cancers (DTCs) with normal anti-Tg levels using fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT). We reviewed FDG PET/CT images of 104 patients with well-differentiated thyroid cancer (28 men, 76 women) whose iodine-131 (131I) whole-body scanning was negative but who had elevated Tg with normal anti-Tg levels. The overall sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of fluorine-18 FDG PET/CT findings were found to be 95.92%, 87.27%, 87.04%, 96.00%, and 91.35%, respectively. The best Tg cut-off value was found to be 10.4 ng/ml. In the group with Tg level <10.4 ng/ml, the sensitivity, specificity, PPV, NPV, and accuracy of FDG PET/CT were found to be 94.1%, 91.30%, 88.8%, 95.4%, and 92.5%, respectively. In the other group, with Tg level ≥10.4 ng/ml, the sensitivity, specificity, PPV, NPV, and accuracy of FDG PET/CT were found to be 96.8%, 84.3%, 86.1%, 96.4%, and 90.6%, respectively. FDG PET/CT imaging is a valuable imaging method in the evaluation of patients with elevated serum Tg levels and normal anti-Tg levels. Furthermore, it has potential utility in the detection of dedifferentiated active foci and in optimal decision making during follow-up.
Nixon, C; Anderson, T; Morris, L; McCavitt, A; McKinley, R; Yeager, D; McDaniel, M
1998-11-01
The intelligibility of female and male speech is equivalent under most ordinary living conditions. However, due to small differences between their acoustic speech signals, called speech spectra, one can be more or less intelligible than the other in certain situations such as high levels of noise. Anecdotal information, supported by some empirical observations, suggests that some of the high intensity noise spectra of military aircraft cockpits may degrade the intelligibility of female speech more than that of male speech. In an applied research study, the intelligibility of female and male speech was measured in several high level aircraft cockpit noise conditions experienced in military aviation. In Part I, (Nixon CW, et al. Aviat Space Environ Med 1998; 69:675-83) female speech intelligibility measured in the spectra and levels of aircraft cockpit noises and with noise-canceling microphones was lower than that of the male speech in all conditions. However, the differences were small and only those at some of the highest noise levels were significant. Although speech intelligibility of both genders was acceptable during normal cruise noises, improvements are required in most of the highest levels of noise created during maximum aircraft operating conditions. These results are discussed in a Part I technical report. This Part II report examines the intelligibility in the same aircraft cockpit noises of vocoded female and male speech and the accuracy with which female and male speech in some of the cockpit noises were understood by automatic speech recognition systems. The intelligibility of vocoded female speech was generally the same as that of vocoded male speech. No significant differences were measured between the recognition accuracy of male and female speech by the automatic speech recognition systems. The intelligibility of female and male speech was equivalent for these conditions.
Classification of LIDAR Data for Generating a High-Precision Roadway Map
NASA Astrophysics Data System (ADS)
Jeong, J.; Lee, I.
2016-06-01
The generation of highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map has centimeter-level precision, unlike existing commercial maps with meter-level precision. Understanding the road environment is important for autonomous driving decisions, since robust localization is one of the critical challenges for autonomous cars. One source of data is Lidar, because it provides highly dense point clouds with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a vehicle-mounted Lidar and classify objects on the road for a highly precise map. In particular, we propose combining a feature descriptor with a machine-learning classification algorithm. Objects can be distinguished by geometric features based on the surface normal of each point. To achieve correct classification from limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the results by comparing them to reference data. The results show sufficient accuracy, and the method will be utilized to generate a highly precise road map.
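As a hedged illustration of the approach described above (surface-normal features plus an SVM), the following sketch builds a toy point cloud of a flat road surface and a vertical facade; all data, neighbourhood sizes, and labels are synthetic assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def normal_z(points, k=10):
    """|z-component| of the PCA surface normal estimated from k neighbours."""
    feats = []
    for p in points:
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)       # ascending eigenvalues
        feats.append(abs(v[:, 0][2]))    # normal = smallest-eigenvalue vector
    return np.array(feats).reshape(-1, 1)

# Toy scene: a flat surface (normal ~ vertical) and a wall (normal ~ horizontal)
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(0, 10, 80), rng.uniform(0, 10, 80),
               rng.normal(0, 0.02, 80)]
wall = np.c_[rng.uniform(0, 10, 80),
             15.0 + rng.normal(0, 0.02, 80),
             rng.uniform(0, 3, 80)]
pts = np.vstack([ground, wall])
labels = np.array([0] * 80 + [1] * 80)

clf = SVC(kernel="rbf").fit(normal_z(pts), labels)
acc = clf.score(normal_z(pts), labels)
```

Real pipelines would add more geometric features (curvature, planarity, height above ground) and a proper train/test split, but the normal direction alone already separates these two toy classes.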
Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models
NASA Astrophysics Data System (ADS)
Zang, Tianwu
Predicting the three-dimensional structure of a protein has been a major interest in modern computational biology. While many successful methods can generate models within 3-5 Å root-mean-square deviation (RMSD) of the solution structure, progress in refining these models has been quite slow. It is therefore urgently necessary to develop effective methods to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, in the refinement test of two CASP10 targets using the PCST-EBM method, it is indicated that EBM may bring the initial model to even higher quality levels. Furthermore, a multi-round PCST-SBM refinement protocol improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results justify the crucial position of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.
Airbreathing hypersonic vehicle design and analysis methods
NASA Technical Reports Server (NTRS)
Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.
1996-01-01
The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.
Strambini, L M; Longo, A; Scarano, S; Prescimone, T; Palchetti, I; Minunni, M; Giannessi, D; Barillaro, G
2015-04-15
In this work a novel self-powered microneedle-based transdermal biosensor for pain-free, high-accuracy, real-time measurement of glycaemia in interstitial fluid (ISF) is reported. The proposed transdermal biosensor makes use of an array of silicon-dioxide hollow microneedles that are about one order of magnitude both smaller (borehole down to 4 µm) and more densely packed (up to 1×10^6 needles/cm^2) than state-of-the-art microneedles used for biosensing so far. This allows self-powered (i.e. pump-free) uptake of ISF to be carried out with high efficacy and reliability in a few seconds (uptake rate up to 1 µl/s) by exploiting capillarity in the microneedles. By coupling the microneedles operating under capillary action with an enzymatic glucose biosensor integrated on the back side of the needle chip, glucose measurements are performed with high accuracy (within ±20% of the actual glucose level for 96% of measurements) and reproducibility (coefficient of variation 8.56%) in real time (30 s) over the range 0-630 mg/dl, thus significantly improving microneedle-based biosensor performance with respect to the state of the art. Copyright © 2014 Elsevier B.V. All rights reserved.
Effect of seabed roughness on tidal current turbines
NASA Astrophysics Data System (ADS)
Gupta, Vikrant; Wan, Minping
2017-11-01
Tidal current turbines have been shown to have the potential to generate clean energy with negligible environmental impact. These devices, however, operate in regions of moderate to high current where the flow is highly turbulent. It has been shown in flume tank experiments at IFREMER in Boulogne-Sur-Mer (France) and NAFL at the University of Minnesota (US) that the level of turbulence and the boundary layer profile affect a turbine's power output and wake characteristics. A major factor that determines these marine flow characteristics is seabed roughness. Experiments, however, cannot reproduce the high Reynolds number conditions of real marine flows; for that, we rely on numerical simulations. High-accuracy numerical methods for wall-bounded flows, such as DNS, are very expensive: the number of grid points needed to resolve the flow scales as Re^(9/4), where Re is the flow Reynolds number. Numerically affordable RANS methods, on the other hand, compromise on accuracy. Wall-modelled LES methods, which provide both accuracy and affordability, have improved tremendously in recent years. We discuss the application of such numerical methods to studying the effect of seabed roughness on marine flow features and their impact on turbine power output and wake characteristics. NSFC, Project Number 11672123.
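The Re^(9/4) grid-point scaling quoted above implies steep cost growth with Reynolds number; a one-line check (illustrative numbers only):

```python
# DNS grid-point count scales roughly as Re^(9/4), so raising Re by a
# factor of 10 multiplies the required grid size by 10^(9/4) ~ 178.
def dns_grid_ratio(re_hi, re_lo):
    return (re_hi / re_lo) ** (9 / 4)

ratio = dns_grid_ratio(1e7, 1e6)   # illustrative lab-scale vs field-scale Re
```

This is why wall-resolved DNS of field-scale marine flows is out of reach and wall-modelled LES is attractive.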
High accuracy position method based on computer vision and error analysis
NASA Astrophysics Data System (ADS)
Chen, Shihao; Shi, Zhongke
2003-09-01
The study of high-accuracy positioning systems is becoming a hotspot in the field of automatic control, and positioning is one of the most-researched tasks in vision systems. We therefore address object locating using image-processing methods. This paper describes a new high-accuracy positioning method based on a vision system. In the proposed method, an edge-detection filter is designed for a given running condition. The filter contains two main parts: an image-processing module, which implements edge detection and consists of multi-level self-adapting threshold segmentation, edge detection, and edge filtering; and an object-locating module, which determines the location of each object with high accuracy and is made up of median filtering and curve fitting. The paper presents an error analysis of the method to establish the feasibility of vision-based position detection. Finally, to verify the method's effectiveness, an example of positioning a worktable using the proposed method is given at the end of the paper. Results show that the method can accurately detect the position of the measured object and identify object attitude.
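The segment-then-locate idea described above can be sketched minimally as thresholding followed by an intensity-weighted centroid (a stand-in for the paper's median-filtering and curve-fitting stages, whose details the abstract does not specify):

```python
import numpy as np

def locate_object(img, threshold):
    """Locate an object's centre with sub-pixel accuracy: threshold the
    image, then take the intensity-weighted centroid of the remaining
    pixels. A simplified sketch, not the paper's exact pipeline."""
    weights = np.where(img > threshold, img, 0.0).astype(float)
    total = weights.sum()
    ys, xs = np.indices(img.shape)
    return (xs * weights).sum() / total, (ys * weights).sum() / total

# A bright 2x2 blob centred at (x=1.5, y=2.5) in an otherwise dark image.
img = np.zeros((5, 5))
img[2:4, 1:3] = 100.0
print(locate_object(img, threshold=50))  # → (1.5, 2.5)
```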
The role of feedback contingency in perceptual category learning.
Ashby, F Gregory; Vucovich, Lauren E
2016-11-01
Feedback is highly contingent on behavior if it eventually becomes easy to predict, and weakly contingent on behavior if it remains difficult or impossible to predict even after learning is complete. Many studies have demonstrated that humans and nonhuman animals are highly sensitive to feedback contingency, but no known studies have examined how feedback contingency affects category learning, and current theories assign little or no importance to this variable. Two experiments examined the effects of contingency degradation on rule-based and information-integration category learning. In rule-based tasks, optimal accuracy is possible with a simple explicit rule, whereas optimal accuracy in information-integration tasks requires integrating information from 2 or more incommensurable perceptual dimensions. In both experiments, participants each learned rule-based or information-integration categories under either high or low levels of feedback contingency. The exact same stimuli were used in all 4 conditions, and optimal accuracy was identical in every condition. Learning was good in both high-contingency conditions, but most participants showed little or no evidence of learning in either low-contingency condition. Possible causes of these effects, as well as their theoretical implications, are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Borderline features are associated with inaccurate trait self-estimations.
Morey, Leslie C
2014-01-01
Many treatments for Borderline Personality Disorder (BPD) are based upon the hypothesis that gross distortion in perceptions and attributions related to self and others represents a core mechanism for the enduring difficulties displayed by such patients. However, available experimental evidence of such distortions provides equivocal results, with some studies suggesting that BPD is related to inaccuracy in such perceptions and others indicating enhanced accuracy in some judgments. The current study uses a novel methodology to explore whether individuals with BPD features are less accurate in estimating their levels of universal personality characteristics as compared to community norms. One hundred and four students received course instruction on the Five Factor Model of personality and were then asked to estimate their levels of these five traits relative to community norms. They then completed the NEO-Five Factor Inventory and the Personality Assessment Inventory-Borderline Features scale (PAI-BOR). Accuracy of estimates was calculated by computing squared differences between self-estimated trait levels and norm-referenced standardized scores on the NEO-FFI. There was a moderately strong relationship between PAI-BOR score and inaccuracy of trait-level estimates. In particular, high-BOR individuals dramatically overestimated their levels of Agreeableness and Conscientiousness, estimating themselves to be slightly above average on each of these characteristics but actually scoring well below average on both. The accuracy of estimates of Neuroticism levels was unrelated to BOR scores, despite the fact that BOR scores were highly correlated with Neuroticism. These findings support the hypothesis that a key feature of BPD involves marked perceptual distortions of various aspects of self in relationship to others.
However, the results also indicate that this is not a global perceptual deficit, as high BOR scorers accurately estimated that their emotional responsiveness was well above average. However, such individuals appear to have limited insight into their relative disadvantages in the capacity for cooperative relationships, or their limited ability to approach life in a planful and non-impulsive manner. Such results suggest important targets for treatments addressing problems in self-other representations.
ERIC Educational Resources Information Center
Caskie, Grace I. L.; Sutton, MaryAnn C.; Eckhardt, Amanda G.
2014-01-01
Assessments of college academic achievement tend to rely on self-reported GPA values, yet evidence is limited regarding the accuracy of those values. With a sample of 194 undergraduate college students, the present study examined whether accuracy of self-reported GPA differed based on level of academic performance or level of academic…
Carnley, Mark V.; Fulford, Janice M.; Brooks, Myron H.
2013-01-01
The Level TROLL 100 manufactured by In-Situ Inc. was evaluated by the U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF) for conformance to the manufacturer’s accuracy specifications for measuring pressure throughout the device’s operating temperature range. The Level TROLL 100 is a submersible, sealed, water-level sensing device with an operating pressure range equivalent to 0 to 30 feet of water over a temperature range of −20 to 50 degrees Celsius (°C). The device met the manufacturer’s stated accuracy specifications for pressure within its temperature-compensated operating range of 0 to 50 °C. The device’s accuracy specifications did not meet established USGS requirements for primary water-stage sensors used in the operation of streamgages, but the Level TROLL 100 may be suitable for other hydrologic data-collection applications. As a note, the Level TROLL 100 is not designed to meet USGS accuracy requirements. Manufacturer accuracy specifications were evaluated, and the procedures followed and the results obtained are described in this report. USGS accuracy requirements are routinely examined and reported when instruments are evaluated at the HIF.
A novel redundant INS based on triple rotary inertial measurement units
NASA Astrophysics Data System (ADS)
Chen, Gang; Li, Kui; Wang, Wei; Li, Peng
2016-10-01
Accuracy and reliability are two key performance measures of an inertial navigation system (INS). Rotation modulation (RM) can attenuate the bias of inertial sensors and makes it possible for an INS to achieve higher navigation accuracy with lower-class sensors, easing the conflict between the accuracy and cost of INS. Traditional system redundancy and recently researched sensor redundancy are the two primary means of improving INS reliability. However, how to make the best use of the redundant information from redundant sensors has not been studied adequately, especially in rotational INS. This paper proposes a novel triple rotary unit strapdown inertial navigation system (TRUSINS), which combines RM and a sensor-redundancy design to enhance the accuracy and reliability of rotational INS. Each rotary unit independently rotates to modulate the errors of two gyros and two accelerometers. The three units provide two sets of measurements along all three axes of the body frame, constituting a pair of INSs that makes TRUSINS redundant. Experiments and simulations based on a prototype made up of six fiber-optic gyros with drift stability of 0.05° h⁻¹ show that TRUSINS can achieve a positioning accuracy of about 0.256 n mile h⁻¹, roughly ten times better than that of a normal non-rotational INS with the same class of inertial sensors. The theoretical analysis and experimental results show that, owing to the innovative structure, the designed fault detection and isolation (FDI) strategy can tolerate up to six sensor faults and is proved to be effective and practical. TRUSINS is therefore particularly suitable and highly beneficial for applications where high accuracy and high reliability are required.
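Why rotation modulation attenuates sensor bias can be shown in a few lines (an illustrative sketch, not the paper's TRUSINS mechanisation): a constant bias on a sensor axis, rotated at a constant rate about the vertical, appears in the navigation frame as a sinusoid whose time-average over whole revolutions is zero.

```python
import numpy as np

b = 0.05  # constant gyro bias on the sensor x-axis (deg/h, illustrative)

# Four full revolutions at unit angular rate; the bias projects onto the
# navigation x-axis as b*cos(t).
t = np.linspace(0.0, 4 * 2 * np.pi, 100000, endpoint=False)
bias_nav_x = b * np.cos(t)

mean_unmodulated = b                 # time-averaged bias without rotation
mean_modulated = bias_nav_x.mean()   # time-averaged bias with rotation
print(mean_unmodulated, mean_modulated)  # the modulated average is ~0
```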
Diagnostic accuracy of high-definition CT coronary angiography in high-risk patients.
Iyengar, S S; Morgan-Hughes, G; Ukoumunne, O; Clayton, B; Davies, E J; Nikolaou, V; Hyde, C J; Shore, A C; Roobottom, C A
2016-02-01
To assess the diagnostic accuracy of computed tomography coronary angiography (CTCA) using a combination of high-definition CT (HD-CTCA) and high level of reader experience, with invasive coronary angiography (ICA) as the reference standard, in high-risk patients for the investigation of coronary artery disease (CAD). Three hundred high-risk patients underwent HD-CTCA and ICA. Independent experts evaluated the images for the presence of significant CAD, defined primarily as the presence of moderate (≥ 50%) stenosis and secondarily as the presence of severe (≥ 70%) stenosis in at least one coronary segment, in a blinded fashion. HD-CTCA was compared to ICA as the reference standard. No patients were excluded. Two hundred and six patients (69%) had moderate and 178 (59%) had severe stenosis in at least one vessel at ICA. The sensitivity, specificity, positive predictive value, and negative predictive value were 97.1%, 97.9%, 99% and 93.9% for moderate stenosis, and 98.9%, 93.4%, 95.7% and 98.3%, for severe stenosis, on a per-patient basis. The combination of HD-CTCA and experienced readers applied to a high-risk population, results in high diagnostic accuracy comparable to ICA. Modern generation CT systems in experienced hands might be considered for an expanded role. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
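The four per-patient measures reported above follow directly from a 2×2 confusion matrix. The counts below are back-calculated from the reported moderate-stenosis percentages (an assumption; the abstract gives only 206 ICA-positive patients out of 300):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy measures from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# TP=200, FN=6 (206 positives), TN=92, FP=2 (94 negatives) reproduce the
# reported 97.1% / 97.9% / 99% / 93.9% figures -- back-calculated counts.
m = diagnostic_metrics(tp=200, fp=2, tn=92, fn=6)
for name, value in m.items():
    print(f"{name}: {100 * value:.1f}%")
```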
Storino, Alessandra; Castillo-Angeles, Manuel; Watkins, Ammara A; Vargas, Christina; Mancias, Joseph D; Bullock, Andrea; Demirjian, Aram; Moser, A James; Kent, Tara S
2016-09-01
The degree to which patients are empowered by written educational materials depends on the text's readability level and the accuracy of the information provided. The association of a website's affiliation or focus on treatment modality with its readability and accuracy has yet to be thoroughly elucidated. To compare the readability and accuracy of patient-oriented online resources for pancreatic cancer by treatment modality and website affiliation. An online search of 50 websites discussing 5 pancreatic cancer treatment modalities (alternative therapy, chemotherapy, clinical trials, radiation therapy, and surgery) was conducted. The website's affiliation was identified. Readability was measured by 9 standardized tests, and accuracy was assessed by an expert panel. Nine standardized tests were used to compute the median readability level of each website. The median readability scores were compared among treatment modality and affiliation categories. Accuracy was determined by an expert panel consisting of 2 medical specialists and 2 surgical specialists. The 4 raters independently evaluated all websites belonging to the 5 treatment modalities (a score of 1 indicates that <25% of the information is accurate, a score of 2 indicates that 26%-50% of the information is accurate, a score of 3 indicates that 51%-75% of the information is accurate, a score of 4 indicates that 76%-99% of the information is accurate, and a score of 5 indicates that 100% of the information is accurate). The 50 evaluated websites differed in readability and accuracy based on the focus of the treatment modality and the website's affiliation. Websites discussing surgery (with a median readability level of 13.7 and an interquartile range [IQR] of 11.9-15.6) were easier to read than those discussing radiotherapy (median readability level, 15.2 [IQR, 13.0-17.0]) (P = .003) and clinical trials (median readability level, 15.2 [IQR, 12.8-17.0]) (P = .002). 
Websites of nonprofit organizations (median readability level, 12.9 [IQR, 11.2-15.0]) were easier to read than media (median readability level, 16.0 [IQR, 13.4-17.0]) (P < .001) and academic (median readability level, 14.8 [IQR, 12.9-17.0]) (P < .001) websites. Privately owned websites (median readability level, 14.0 [IQR, 12.1-16.1]) were easier to read than media websites (P = .001). Among treatment modalities, alternative therapy websites exhibited the lowest accuracy scores (median accuracy score, 2 [IQR, 1-4]) (P < .001). Nonprofit (median accuracy score, 4 [IQR, 4-5]), government (median accuracy score, 5 [IQR, 4-5]), and academic (median accuracy score, 4 [IQR, 3.5-5]) websites were more accurate than privately owned (median accuracy score, 3.5 [IQR, 1.5-4]) and media (median accuracy score, 4 [IQR, 2-4]) websites (P < .004). Websites with higher accuracy were more difficult to read than websites with lower accuracy. Online information on pancreatic cancer overestimates the reading ability of the overall population and lacks accurate information about alternative therapy. In the absence of quality control on the Internet, physicians should provide guidance to patients in the selection of online resources with readable and accurate information.
Low Thrust Orbital Maneuvers Using Ion Propulsion
NASA Astrophysics Data System (ADS)
Ramesh, Eric
2011-10-01
Low-thrust maneuver options, such as electric propulsion, offer specific challenges within mission-level Modeling, Simulation, and Analysis (MS&A) tools. This project seeks to transition techniques for simulating low-thrust maneuvers from detailed engineering-level simulations such as AGI's Satellite ToolKit (STK) Astrogator to mission-level simulations such as the System Effectiveness Analysis Simulation (SEAS). Our project goals are as follows: A) Assess different low-thrust options to achieve various orbital changes; B) Compare such approaches to more conventional, high-thrust profiles; C) Compare computational cost and accuracy of various approaches to calculate and simulate low-thrust maneuvers; D) Recommend methods for implementing low-thrust maneuvers in high-level mission simulations; E) Prototype recommended solutions.
Görgens, Christian; Guddat, Sven; Orlovius, Anne-Katrin; Sigmund, Gerd; Thomas, Andreas; Thevis, Mario; Schänzer, Wilhelm
2015-07-01
In the field of LC-MS, reversed-phase liquid chromatography is the predominant method of choice for separating prohibited substances of various classes in sports drug testing. However, highly polar and charged compounds still represent a challenging task in liquid chromatography due to their difficult chromatographic behavior on reversed-phase materials. A very promising approach for the separation of hydrophilic compounds is hydrophilic interaction liquid chromatography (HILIC). Despite its great potential and versatile advantages for separating highly polar compounds, HILIC is as yet not very common in doping analysis, although most manufacturers offer a variety of HILIC columns in their portfolios. In this study, a novel multi-target approach based on HILIC high-resolution/high-accuracy mass spectrometry is presented to screen for various polar stimulants, stimulant sulfo-conjugates, glycerol, AICAR, ethyl glucuronide, morphine-3-glucuronide, and myo-inositol trispyrophosphate after direct injection of diluted urine specimens. The use of an effective online sample cleanup and a zwitterionic HILIC analytical column in combination with a new-generation hybrid quadrupole-Orbitrap® mass spectrometer enabled the detection of highly polar analytes, without any time-consuming hydrolysis or further purification steps, far below the required detection limits. The methodology was fully validated for qualitative and quantitative (AICAR, glycerol) purposes considering the parameters specificity; robustness (rRT < 2.0%); linearity (R > 0.99); intra- and inter-day precision at low, medium, and high concentration levels (CV < 20%); limit of detection (stimulants and stimulant sulfo-conjugates < 10 ng/mL; norfenefrine and octopamine < 30 ng/mL; AICAR < 10 ng/mL; glycerol 100 μg/mL; ETG < 100 ng/mL); accuracy (AICAR 103.8-105.5%, glycerol 85.1-98.3% at three concentration levels); and ion suppression/enhancement effects.
Multi-sensor information fusion method for vibration fault diagnosis of rolling bearing
NASA Astrophysics Data System (ADS)
Jiao, Jing; Yue, Jianhai; Pei, Di
2017-10-01
Bearings are key elements in high-speed electric multiple units (EMUs), and any bearing defect can cause serious malfunction of an EMU at high operating speed. This paper presents a new method for bearing fault diagnosis based on least-squares support vector machines (LS-SVM) for feature-level fusion and Dempster-Shafer (D-S) evidence theory for decision-level fusion, addressing the problems of low detection accuracy, difficulty in extracting sensitive features, and the instability of single-sensor diagnosis systems in rolling-bearing fault diagnosis. A wavelet de-noising technique was used to remove signal noise. LS-SVM performed pattern recognition on the bearing vibration signals, and the recognition results were then fused according to D-S evidence theory to identify the bearing fault. The results indicated that the data-fusion method significantly improved the performance of the intelligent approach to rolling-bearing fault detection. Moreover, the results showed that this method can efficiently improve the accuracy of fault diagnosis.
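The decision-level fusion step relies on Dempster's rule of combination. A minimal sketch of that rule, with illustrative fault hypotheses and mass values not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments over the same frame of discernment.
    Keys are frozensets of hypotheses; values are masses summing to 1."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict; evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two classifiers' evidence over bearing-fault hypotheses (illustrative).
F = frozenset
m1 = {F({"inner"}): 0.6, F({"outer"}): 0.1, F({"inner", "outer"}): 0.3}
m2 = {F({"inner"}): 0.7, F({"outer"}): 0.1, F({"inner", "outer"}): 0.2}
fused = dempster_combine(m1, m2)
print(fused)  # belief concentrates on the "inner race" hypothesis
```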
NASA Astrophysics Data System (ADS)
Wardlow, Brian Douglas
The objectives of this research were to: (1) investigate time-series MODIS (Moderate Resolution Imaging Spectroradiometer) 250-meter EVI (Enhanced Vegetation Index) and NDVI (Normalized Difference Vegetation Index) data for regional-scale crop-related land use/land cover characterization in the U.S. Central Great Plains and (2) develop and test a MODIS-based crop mapping protocol. A pixel-level analysis of the time-series MODIS 250-m VIs for 2,000+ field sites across Kansas found that unique spectral-temporal signatures were detected for the region's major crop types, consistent with the crops' phenology. Intra-class variations were detected in VI data associated with different land use practices, climatic conditions, and planting dates for the crops. The VIs depicted similar seasonal variations and were highly correlated. A pilot study in southwest Kansas found that accurate and detailed cropping patterns could be mapped using the MODIS 250-m VI data. Overall and class-specific accuracies were generally greater than 90% for mapping crop/non-crop, general crops (alfalfa, summer crops, winter wheat, and fallow), summer crops (corn, sorghum, and soybeans), and irrigated/non-irrigated crops using either VI dataset. The classified crop areas also had a high level of agreement (<5% difference) with the USDA reported crop areas. Both VIs produced comparable crop maps with only a 1-2% difference between their classification accuracies and a high level of pixel-level agreement (>90%) between their classified crop patterns. This hierarchical crop mapping protocol was tested for Kansas and produced similar classification results over a larger and more diverse area. Overall and class-specific accuracies were typically between 85% and 95% for the crop maps. At the state level, the maps had a high level of areal agreement (<5% difference) with the USDA crop area figures and their classified patterns were consistent with the state's cropping practices. 
In general, the protocol's performance was relatively consistent across the state's range of environmental conditions, landscape patterns, and cropping practices. The largest areal differences occurred in eastern Kansas due to the omission of many small cropland areas that were not resolvable at MODIS' 250-m resolution. Notable regional deviations in classified areas also occurred for selected classes due to localized precipitation patterns and specific cropping practices.
NASA Astrophysics Data System (ADS)
Takenaka, Y.; Katoh, M.; Deng, S.; Cheung, K.
2017-10-01
Pine wilt disease is caused by the pine wood nematode (Bursaphelenchus xylophilus) and its vector, the Japanese pine sawyer (Monochamus alternatus). This study attempted to detect damaged pine trees at different levels using a combination of airborne laser scanning (ALS) data and high-resolution space-borne images. A canopy height model with a resolution of 50 cm derived from the ALS data was used for the delineation of tree crowns using the Individual Tree Detection method. Two pan-sharpened images were established using the ortho-rectified images. Next, we analyzed two kinds of intensity-hue-saturation (IHS) images and 18 remote sensing indices (RSI) derived from the pan-sharpened images. The mean and standard deviation of the 2 IHS images, 18 RSI, and 8 bands of the WV-2 and WV-3 images were extracted for each tree crown and were used to classify tree crowns using a support vector machine classifier. Individual tree crowns were assigned to one of nine classes: bare ground, Larix kaempferi, Cryptomeria japonica, Chamaecyparis obtusa, broadleaved trees, healthy pines, and damaged pines at slight, moderate, and heavy levels. The accuracy of the classifications using the WV-2 images ranged from 76.5 to 99.6%, with an overall accuracy of 98.5%. However, the accuracy of the classifications using the WV-3 images ranged from 40.4 to 95.4%, with an overall accuracy of 72%, poorer than the classifications derived from the WV-2 images. This is because the WV-3 images were acquired in October 2016, when the sun was at a low altitude.
Improving EEG-Based Driver Fatigue Classification Using Sparse-Deep Belief Networks.
Chai, Rifai; Ling, Sai Ho; San, Phyo Phyo; Naik, Ganesh R; Nguyen, Tuan N; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T
2017-01-01
This paper presents an improvement in classification performance for electroencephalography (EEG)-based driver fatigue classification between fatigue and alert states, with data collected from 43 participants. The system employs autoregressive (AR) modeling as the feature-extraction algorithm and sparse-deep belief networks (sparse-DBN) as the classification algorithm. Compared to other classifiers, sparse-DBN is a semi-supervised learning method that combines unsupervised learning for modeling features in the pre-training layer with supervised learning for classification in the following layer. The sparsity in sparse-DBN is achieved with a regularization term that penalizes any deviation of the expected activation of hidden units from a fixed low level; this prevents the network from overfitting and enables it to learn low-level as well as high-level structures. For comparison, artificial neural network (ANN), Bayesian neural network (BNN), and original deep belief network (DBN) classifiers were used. The classification results show that, using the AR feature extractor and the DBN classifier, performance reaches a sensitivity of 90.8%, a specificity of 90.4%, an accuracy of 90.6%, and an area under the receiver operating curve (AUROC) of 0.94, compared to the ANN classifier (sensitivity 80.8%, specificity 77.8%, accuracy 79.3%, AUROC 0.83) and the BNN classifier (sensitivity 84.3%, specificity 83%, accuracy 83.6%, AUROC 0.87). Using the sparse-DBN classifier, performance improves further, with a sensitivity of 93.9%, a specificity of 92.3%, and an accuracy of 93.1% with an AUROC of 0.96. Overall, the sparse-DBN classifier improved accuracy by 13.8, 9.5, and 2.5% over the ANN, BNN, and DBN classifiers, respectively.
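The sparsity regularizer described above is commonly implemented as a KL-divergence penalty that pushes each hidden unit's mean activation toward a fixed low target. A generic sketch (the paper's exact regularizer and target value may differ):

```python
import math

def kl_sparsity_penalty(mean_activations, target=0.05):
    """KL-divergence sparsity penalty as used in sparse autoencoders/DBNs:
    penalises the mean activation q of each hidden unit for deviating
    from a fixed low target rho. Generic sketch, not the paper's code."""
    rho = target
    penalty = 0.0
    for q in mean_activations:
        q = min(max(q, 1e-8), 1 - 1e-8)  # guard against log(0)
        penalty += rho * math.log(rho / q) \
                 + (1 - rho) * math.log((1 - rho) / (1 - q))
    return penalty

# Units at the target incur no penalty; over-active units are penalised,
# which discourages dense (overfitting-prone) representations.
print(kl_sparsity_penalty([0.05, 0.05]))  # 0.0
print(kl_sparsity_penalty([0.5, 0.5]))    # clearly > 0
```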
NASA Astrophysics Data System (ADS)
Boudria, Yacine; Feltane, Amal; Besio, Walter
2014-06-01
Objective. Brain-computer interfaces (BCIs) based on electroencephalography (EEG) have been shown to accurately detect mental activities, but the acquisition of high levels of control requires extensive user training. Furthermore, EEG has low signal-to-noise ratio and low spatial resolution. The objective of the present study was to compare the accuracy between two types of BCIs during the first recording session. EEG and tripolar concentric ring electrode (TCRE) EEG (tEEG) brain signals were recorded and used to control one-dimensional cursor movements. Approach. Eight human subjects were asked to imagine either ‘left’ or ‘right’ hand movement during one recording session to control the computer cursor using TCRE and disc electrodes. Main results. The obtained results show a significant improvement in accuracies using TCREs (44%-100%) compared to disc electrodes (30%-86%). Significance. This study developed the first tEEG-based BCI system for real-time one-dimensional cursor movements and showed high accuracies with little training.
NASA Astrophysics Data System (ADS)
Shimura, Kazuo; Nakajima, Nobuyoshi; Tanaka, Hiroshi; Ishida, Masamitsu; Kato, Hisatoyo
1993-09-01
Dual-energy X-ray absorptiometry (DXA) is one of the bone densitometry techniques used to diagnose osteoporosis and has gradually been gaining popularity due to its high degree of precision. However, DXA involves a time-consuming examination because of its pencil-beam scan, and the equipment is expensive. In this study, we examined a new bone densitometry technique (CR-DXA) utilizing an X-ray imaging system and Computed Radiography (CR) as used in medical X-ray image diagnosis. A high level of measurement precision and accuracy could be achieved by X-ray tube voltage/filter optimization and various non-uniformity corrections based on simulation and experiment. A phantom study using a bone mineral block showed a precision of 0.83% c.v. (coefficient of variation) and an accuracy of 0.01 g/cm2, suggesting that a degree of measurement precision and accuracy practically equivalent to that of the DXA approach is achieved. CR-DXA is thus considered to facilitate simple, quick, and precise bone mineral density measurement.
Rethinking Indoor Localization Solutions Towards the Future of Mobile Location-Based Services
NASA Astrophysics Data System (ADS)
Guney, C.
2017-11-01
Satellite navigation with GNSS-enabled devices, such as smartphones and car navigation systems, has changed the way users travel in outdoor environments. GNSS is generally not well suited to indoor location and navigation for two reasons: first, GNSS does not provide the high level of accuracy that indoor applications need; second, poor satellite-signal coverage in indoor environments further degrades its accuracy. Rather than using GNSS satellites within closed environments, existing indoor navigation solutions therefore rely heavily on installed sensor networks. There is high demand for accurate positioning in wireless networks in GNSS-denied environments, yet current wireless indoor positioning systems cannot satisfy the challenging needs of indoor location-aware applications. Nevertheless, access to a user's location indoors is increasingly important in the development of context-aware applications that increase business efficiency. This study examines how current wireless location-sensing systems can be tailored and integrated for specific applications, such as smart cities/grids/buildings/cars and IoT applications, in GNSS-deprived areas.
Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images
NASA Astrophysics Data System (ADS)
Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Moon, Kiyoung
2010-06-01
Iris recognition is used for information security with a high confidence level because it shows outstanding recognition accuracy by using human iris patterns with high degrees of freedom. However, iris recognition accuracy can be reduced by noisy iris images with optical and motion blurring. We propose a new iris recognition method based on the fuzzy difference-of-Gaussian (DOG) for noisy iris images. This study is novel in three ways compared to previous works: (1) The proposed method extracts iris feature values using the DOG method, which is robust to local variations of illumination and shows fine texture information, including various frequency components. (2) When determining iris binary codes, image noises that cause the quantization error of the feature values are reduced with the fuzzy membership function. (3) The optimal parameters of the DOG filter and the fuzzy membership function are determined in terms of iris recognition accuracy. Experimental results showed that the performance of the proposed method was better than that of previous methods for noisy iris images.
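The difference-of-Gaussian (DOG) operation at the heart of the method is band-pass filtering: subtracting a heavily smoothed copy of the signal from a lightly smoothed one suppresses both noise and slow illumination variation. A 1-D sketch of the idea (the paper's 2-D filter and parameters differ, and the fuzzy-membership step is omitted):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalised 1-D Gaussian kernel of given radius."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def dog_filter1d(signal, sigma1, sigma2, radius=15):
    """Difference-of-Gaussian: the difference of two Gaussian
    smoothings with sigma1 < sigma2 acts as a band-pass filter."""
    g1 = np.convolve(signal, gaussian_kernel1d(sigma1, radius), mode="same")
    g2 = np.convolve(signal, gaussian_kernel1d(sigma2, radius), mode="same")
    return g1 - g2

# A constant (illumination-like) signal yields ~0 response away from the
# borders, while an edge (texture-like) produces a strong response.
flat = np.ones(64)
edge = np.concatenate([np.zeros(32), np.ones(32)])
print(np.abs(dog_filter1d(flat, 1.0, 3.0)[20:44]).max())  # ~0
print(np.abs(dog_filter1d(edge, 1.0, 3.0)).max())         # clearly > 0
```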
Classifying visuomotor workload in a driving simulator using subject specific spatial brain patterns
Dijksterhuis, Chris; de Waard, Dick; Brookhuis, Karel A.; Mulder, Ben L. J. M.; de Jong, Ritske
2013-01-01
A passive Brain Computer Interface (BCI) is a system that responds to the spontaneously produced brain activity of its user and could be used to develop interactive task support. A human-machine system that could benefit from brain-based task support is the driver-car interaction system. To investigate the feasibility of such a system to detect changes in visuomotor workload, 34 drivers were exposed to several levels of driving demand in a driving simulator. Driving demand was manipulated by varying driving speed and by asking the drivers to comply with individually set lane-keeping performance targets. Differences in the individual driver's workload levels were classified by applying the Common Spatial Pattern (CSP) and Fisher's linear discriminant analysis to frequency-filtered electroencephalogram (EEG) data during an offline classification study. Several frequency ranges, EEG cap configurations, and condition pairs were explored. It was found that classifications were most accurate when based on high frequencies, larger electrode sets, and the frontal electrodes. Depending on these factors, classification accuracies across participants reached about 95% on average. The association between high accuracies and high frequencies suggests that part of the underlying information did not originate directly from neuronal activity. Nonetheless, average classification accuracies up to 75–80% were obtained from the lower EEG ranges that are likely to reflect neuronal activity. For a system designer, this implies that a passive BCI system may use several frequency ranges for workload classifications. PMID:23970851
Systematic review of discharge coding accuracy
Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.
2012-01-01
Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to the present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects; for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and high-accuracy reference data are scarce, which makes it difficult to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in the horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for the expansion of its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images. This method uses testing points with different accuracies and reliabilities, sourced from high-accuracy reference data and from field measurement. The new method solves the horizontal accuracy detection of orthophoto images in these difficult areas and provides a basis for delivering reliable orthophoto images to users.
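One simple way to pool check points of heterogeneous accuracy, in the spirit of the proposed method (the formula below is generic inverse-variance weighting, an assumption on my part, not the paper's exact estimator), is to weight each point's squared positional error by the inverse variance of its reference source, so that field-surveyed and map-derived points contribute according to their reliability:

```python
import math

def weighted_rmse(errors, sigmas):
    # errors: horizontal error at each check point (same units as sigmas)
    # sigmas: a-priori standard deviation of each point's reference source
    # Weight each point by inverse variance, then take the weighted RMS.
    w = [1.0 / s**2 for s in sigmas]
    num = sum(wi * e**2 for wi, e in zip(w, errors))
    return math.sqrt(num / sum(w))
```

With equal sigmas this reduces to the ordinary RMSE; a point from a low-accuracy source (large sigma) contributes correspondingly little to the pooled estimate.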
Jensen, Pamela K; Wujcik, Chad E; McGuire, Michelle K; McGuire, Mark A
2016-01-01
Simple high-throughput procedures were developed for the direct analysis of glyphosate [N-(phosphonomethyl)glycine] and aminomethylphosphonic acid (AMPA) in human and bovine milk and human urine matrices. Samples were extracted with an acidified aqueous solution on a high-speed shaker. Stable isotope labeled internal standards were added with the extraction solvent to ensure accurate tracking and quantitation. An additional cleanup procedure using partitioning with methylene chloride was required for milk matrices to minimize the presence of matrix components that can impact the longevity of the analytical column. Both analytes were analyzed directly, without derivatization, by liquid chromatography tandem mass spectrometry using two separate precursor-to-product transitions that ensure and confirm the accuracy of the measured results. Method performance was evaluated during validation through a series of assessments that included linearity, accuracy, precision, selectivity, ionization effects and carryover. Limits of quantitation (LOQ) were determined to be 0.1 and 10 µg/L (ppb) for urine and milk, respectively, for both glyphosate and AMPA. Mean recoveries for all matrices were within 89-107% at three separate fortification levels including the LOQ. Precision for replicates was ≤ 7.4% relative standard deviation (RSD) for milk and ≤ 11.4% RSD for urine across all fortification levels. All human and bovine milk samples used for selectivity and ionization effects assessments were free of any detectable levels of glyphosate and AMPA. Some of the human urine samples contained trace levels of glyphosate and AMPA, which were background subtracted for accuracy assessments. Ionization effects testing showed no significant biases from the matrix. A successful independent external validation was conducted using the more complicated milk matrices to demonstrate method transferability.
Chen, Baisheng; Wu, Huanan; Li, Sam Fong Yau
2014-03-01
To overcome the challenging task of selecting an appropriate pathlength for wastewater chemical oxygen demand (COD) monitoring with high accuracy by UV-vis spectroscopy in the wastewater treatment process, a variable pathlength approach combined with partial-least squares regression (PLSR) was developed in this study. Two new strategies were proposed to extract relevant information of UV-vis spectral data from variable pathlength measurements. The first strategy was data fusion with two data fusion levels: low-level data fusion (LLDF) and mid-level data fusion (MLDF). Predictive accuracy was found to improve, indicated by the lower root-mean-square errors of prediction (RMSEP) compared with those obtained for single pathlength measurements. Both fusion levels were found to deliver very robust PLSR models with residual predictive deviations (RPD) greater than 3 (i.e. 3.22 and 3.29, respectively). The second strategy involved calculating the slopes of absorbance against pathlength at each wavelength to generate slope-derived spectra. Without the requirement to select the optimal pathlength, the predictive accuracy (RMSEP) was improved by 20-43% as compared to single pathlength spectroscopy. Compared to the nine-factor models from the fusion strategy, the PLSR model from slope-derived spectroscopy was found to be more parsimonious with only five factors and more robust with a residual predictive deviation (RPD) of 3.72. It also offered excellent correlation of predicted and measured COD values with R(2) of 0.936. In sum, variable pathlength spectroscopy with the two proposed data analysis strategies proved to be successful in enhancing prediction performance of COD in wastewater and showed high potential to be applied in on-line water quality monitoring. Copyright © 2013 Elsevier B.V. All rights reserved.
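The slope-derived spectrum in the second strategy follows directly from the Beer-Lambert law: at each wavelength, absorbance grows linearly with pathlength, so the per-wavelength fitted slope is a pathlength-free quantity that can feed the PLSR model. A minimal sketch (the variable names and array shapes are illustrative assumptions, not the paper's code):

```python
import numpy as np

def slope_spectrum(absorbance, pathlengths):
    # absorbance: (n_pathlengths, n_wavelengths) matrix of measured spectra
    # Fit A = slope * L + intercept per wavelength by least squares;
    # by Beer-Lambert, the slope is proportional to analyte concentration
    # and independent of any single pathlength choice.
    L = np.asarray(pathlengths, dtype=float)
    design = np.vstack([L, np.ones_like(L)]).T   # columns: pathlength, offset
    coef, *_ = np.linalg.lstsq(design, absorbance, rcond=None)
    return coef[0]                               # one slope per wavelength
```

The resulting slope spectrum (one value per wavelength) replaces the raw spectra as the PLSR input, removing the need to pick an optimal pathlength.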
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
Schizophrenia classification using functional network features
NASA Astrophysics Data System (ADS)
Rish, Irina; Cecchi, Guillermo A.; Heuton, Kyle
2012-03-01
This paper focuses on discovering statistical biomarkers (features) that are predictive of schizophrenia, with a particular focus on topological properties of fMRI functional networks. We consider several network properties, such as node (voxel) strength, clustering coefficients, local efficiency, as well as just a subset of pairwise correlations. While all types of features demonstrate highly significant statistical differences in several brain areas, and close to 80% classification accuracy, the most remarkable results of 93% accuracy are achieved by using a small subset of only a dozen of the most informative (lowest p-value) correlation features. Our results suggest that voxel-level correlations and functional network features derived from them are highly informative about schizophrenia and can be used as statistical biomarkers for the disease.
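Selecting a handful of lowest-p-value features, as described above, can be approximated by ranking features on a two-sample test statistic. The sketch below uses the absolute Welch t-statistic as a proxy for a low p-value (an assumption on my part; the paper does not specify this particular statistic or implementation):

```python
import numpy as np

def top_k_features(X_pos, X_neg, k=12):
    # X_pos, X_neg: (subjects, features) matrices for the two groups.
    # Rank each feature by its absolute Welch t-statistic; a large |t|
    # corresponds to a small two-sample p-value for that feature.
    m1, m0 = X_pos.mean(axis=0), X_neg.mean(axis=0)
    v1, v0 = X_pos.var(axis=0, ddof=1), X_neg.var(axis=0, ddof=1)
    t = (m1 - m0) / np.sqrt(v1 / len(X_pos) + v0 / len(X_neg))
    return np.argsort(-np.abs(t))[:k]            # indices of the k best features
```

A classifier would then be trained on just these k columns, mirroring the "dozen most-informative correlation features" result.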
Prediction of high-energy radiation belt electron fluxes using a combined VERB-NARMAX model
NASA Astrophysics Data System (ADS)
Pakhotin, I. P.; Balikhin, M. A.; Shprits, Y.; Subbotin, D.; Boynton, R.
2013-12-01
This study is concerned with the modelling and forecasting of energetic electron fluxes that endanger satellites in space. By combining data-driven predictions from the NARMAX methodology with the physics-based VERB code, it becomes possible to predict electron fluxes with a high level of accuracy and across a radial distance from inside the local acceleration region to out beyond geosynchronous orbit. The model coupling also makes it possible to avoid accounting for seed electron variations at the outer boundary. Conversely, combining a convection code with the VERB and NARMAX models has the potential to provide even greater accuracy in forecasting that is not limited to geostationary orbit but makes predictions across the entire outer radiation belt region.
Armon-Lotem, Sharon; Meir, Natalia
2016-11-01
Previous research demonstrates that repetition tasks are valuable tools for diagnosing specific language impairment (SLI) in monolingual children in English and a variety of other languages, with non-word repetition (NWR) and sentence repetition (SRep) yielding high levels of sensitivity and specificity. Yet, only a few studies have addressed the diagnostic accuracy of repetition tasks in bilingual children, and most available research focuses on English-Spanish sequential bilinguals. To evaluate the efficacy of three repetition tasks (forward digit span (FWD), NWR and SRep) in order to distinguish mono- and bilingual children with and without SLI in Russian and Hebrew. A total of 230 mono- and bilingual children aged 5;5-6;8 participated in the study: 144 bilingual Russian-Hebrew-speaking children (27 with SLI); and 52 monolingual Hebrew-speaking children (14 with SLI) and 34 monolingual Russian-speaking children (14 with SLI). Parallel repetition tasks were designed in both Russian and Hebrew. Bilingual children were tested in both languages. The findings confirmed that NWR and SRep are valuable tools in distinguishing monolingual children with and without SLI in Russian and Hebrew, while the results for FWD were mixed. Yet, testing of bilingual children with the same tools using monolingual cut-off points resulted in inadequate diagnostic accuracy. We demonstrate, however, that the use of bilingual cut-off points yielded acceptable levels of diagnostic accuracy. The combination of SRep tasks in L1/Russian and L2/Hebrew yielded the highest overall accuracy (i.e., 94%), but even SRep alone in L2/Hebrew showed excellent levels of sensitivity (i.e., 100%) and specificity (i.e., 89%), reaching 91% of total diagnostic accuracy. The results are very promising for identifying SLI in bilingual children and for showing that testing in the majority language with bilingual cut-off points can provide an accurate classification. 
© 2016 Royal College of Speech and Language Therapists.
Jetha, Arif; Faulkner, Guy; Gorczynski, Paul; Arbour-Nicitopoulos, Kelly; Martin Ginis, Kathleen A
2011-04-01
A number of websites on the Internet promote health-enhancing behaviors among people with spinal cord injury (SCI). However, the information available is of unknown accuracy and quality. To examine the accuracy, quality, and targeting strategies used in online physical activity (PA) information aimed at people with SCI. A purposive sample of 30 frequently accessed websites for individuals with SCI that included PA information was examined. Websites were evaluated based on their descriptive characteristics, level of accuracy in relation to newly defined PA recommendations for people with SCI, technical and theoretical quality (i.e., use of behavioral theories) characteristics, and targeting strategies to promote PA among people with SCI. Descriptive statistics were utilized to illustrate the results of the evaluation. PA information was easily accessible, as rated by the number of clicks required to access information. Only 6 websites (20%) provided specific PA recommendations and these websites exhibited low accuracy. Technically, websites were of high quality with a mean score of 4.1 of a possible 6 points. In contrast, websites had a low level of theoretical quality, with 23 of the 30 websites (77%) scoring below 9 of a possible 14 points (i.e., 64% of a perfect score) for theoretical content. A majority of websites evaluated did not use cognitive (e.g., self-efficacy, self-talk, and perceived social norms) and behavioral (e.g., self-monitoring, motivational readiness, and realistic goal-setting) strategies in their messages. A majority (80%) of the evaluated websites customized information for persons with different injury levels and completeness. Less than half of the websites evaluated tailored PA information toward people at different stages of their injury rehabilitation (37%) or for their caregivers (30%). Accuracy and theoretical quality of PA information presented to people with SCI on the Internet may not be optimal. 
Websites should be improved to incorporate accepted PA recommendations and behavioral theory to better deliver health messages about PA. Copyright © 2011 Elsevier Inc. All rights reserved.
Scalable Photogrammetric Motion Capture System "mosca": Development and Application
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2015-05-01
A wide variety of applications (from industrial to entertainment) need reliable and accurate 3D information about the motion of an object and its parts. Very often the movement is rather fast, as in vehicle movement, sports biomechanics, and animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system was developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements, and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 technical vision cameras to acquire video sequences of object motion. All cameras work in synchronization mode at frame rates up to 100 frames per second under the control of a personal computer, enabling accurate calculation of the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.
NASA Astrophysics Data System (ADS)
Carter, W. E.; Robertson, D. S.; Nothnagel, A.; Nicolson, G. D.; Schuh, H.
1988-12-01
High-accuracy geodetic very long baseline interferometry measurements between the African, Eurasian, and North American plates have been analyzed to determine the terrestrial coordinates of the Hartebeesthoek observatory to better than 10 cm, to determine the celestial coordinates of eight Southern Hemisphere radio sources with milliarcsecond (mas) accuracy, and to derive quasi-independent polar motion, UT1, and nutation time series. Comparison of the earth orientation time series with ongoing International Radio Interferometric Surveying project values shows agreement at about the 1 mas of arc level in polar motion and nutation and 0.1 ms of time in UT1. Given the independence of the observing sessions and the unlikeliness of common systematic error sources, this level of agreement serves to bound the total errors in both measurement series.
Moeller, Sara K; Lee, Elizabeth A Ewing; Robinson, Michael D
2011-08-01
Dominance and submission constitute fundamentally different social interaction strategies that may be enacted most effectively to the extent that the emotions of others are relatively ignored (dominance) versus noticed (submission). On the basis of such considerations, we hypothesized a systematic relationship between chronic tendencies toward high versus low levels of interpersonal dominance and emotion decoding accuracy in objective tasks. In two studies (total N = 232), interpersonally dominant individuals exhibited poorer levels of emotion recognition in response to audio and video clips (Study 1) and facial expressions of emotion (Study 2). The results provide a novel perspective on interpersonal dominance, suggest its strategic nature (Study 2), and are discussed in relation to Fiske's (1993) social-cognitive theory of power. 2011 APA, all rights reserved
Komori, Mie
2016-01-01
Monitoring is an executive function of working memory that serves to update novel information, focusing attention on task-relevant targets, and eliminating task-irrelevant noise. The present research used a verbal working memory task to examine how working memory capacity limits affect monitoring. Participants performed a Japanese listening span test that included maintenance of target words and listening comprehension. On each trial, participants responded to the target word and then immediately estimated confidence in recall performance for that word (metacognitive judgment). The results confirmed significant differences in monitoring accuracy between high and low capacity groups in a multi-task situation. That is, confidence judgments were superior in high vs. low capacity participants in terms of absolute accuracy and discrimination. The present research further investigated how memory load and interference affect underestimation of successful recall. The results indicated that the level of memory load that reduced word recall performance and led to an underconfidence bias varied according to participants' memory capacity. In addition, irrelevant information associated with incorrect true/false decisions (secondary task) and word recall within the current trial impaired monitoring accuracy in both participant groups. These findings suggest that interference from unsuccessful decisions only influences low, but not high, capacity participants. Therefore, monitoring accuracy, which requires high working memory capacity, improves metacognitive abilities by inhibiting task-irrelevant noise and focusing attention on detecting task-relevant targets or useful retrieval cues, which could improve actual cognitive performance.
Fast-PPP assessment in European and equatorial region near the solar cycle maximum
NASA Astrophysics Data System (ADS)
Rovira-Garcia, Adria; Juan, José Miguel; Sanz, Jaume
2014-05-01
The Fast Precise Point Positioning (Fast-PPP) technique provides quick high-accuracy navigation with ambiguity-fixing capability, thanks to accurate modelling of the ionosphere. Indeed, once the availability of real-time precise satellite orbits and clocks is granted to users, the next challenge is the accuracy of real-time ionospheric corrections. Several steps had been taken by gAGE/UPC to develop such a global system for precise navigation. First, Wide-Area Real-Time Kinematics (WARTK) feasibility studies enabled precise relative continental navigation using a few tens of reference stations. Later, multi-frequency and multi-constellation assessments in different ionospheric scenarios, including maximum solar-cycle conditions, focused on user-domain performance. Recently, a mature evolution of the technique consists of a dual service scheme: a global Precise Point Positioning (PPP) service, together with a continental enhancement to shorten convergence. An end-to-end performance assessment of the Fast-PPP technique is presented in this work, focused on Europe and on the equatorial region of South East Asia (SEA), both near the solar cycle maximum. The accuracies of the Central Processing Facility (CPF) real-time precise satellite orbits and clocks are, respectively, 4 centimetres and 0.2 nanoseconds, in line with the accuracy of the International GNSS Service (IGS) analysis centres. This global PPP service is enhanced by the Fast-PPP by adding the capability of global undifferenced ambiguity fixing, thanks to the determination of the fractional part of the ambiguities. The core of the Fast-PPP is the capability to compute real-time ionospheric determinations with accuracies at the level of, or better than, 1 Total Electron Content Unit (TECU), improving on the widely accepted Global Ionospheric Maps (GIM), which have declared accuracies of 2-8 TECU. 
This large improvement in the modelling accuracy is achieved thanks to a two-layer description of the ionosphere combined with the carrier-phase ambiguity fixing performed in the Fast-PPP CPF. The Fast-PPP user domain positioning takes benefit of such precise ionospheric modelling. Convergence time of dual-frequency classic PPP solutions is reduced from the best part of an hour to 5-10 minutes not only in European mid-latitudes but also in the much more challenging equatorial region. The improvement of ionospheric modelling is directly translated into the accuracy of single-frequency mass-market users, achieving 2-3 decimetres of error after any cold start. Since all Fast-PPP corrections are broadcast together with their confidence level (sigma), such high-accuracy navigation is protected with safety integrity bounds.
Although remote sensing technology has long been used in wetland inventory and monitoring, the accuracy and detail level of derived wetland maps were limited or often unsatisfactory largely due to the relatively coarse spatial resolution of conventional satellite imagery. This re...
Computer-Aided Design and Optimization of High-Performance Vacuum Electronic Devices
2006-08-15
approximations to the metric, and space mapping wherein low-accuracy (coarse mesh) solutions can potentially be used more effectively in an...interface and algorithm development. • Work on space-mapping or related methods for utilizing models of varying levels of approximation within an
NASA Astrophysics Data System (ADS)
Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.
2018-04-01
The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
Corredor, Germán; Whitney, Jon; Arias, Viviana; Madabhushi, Anant; Romero, Eduardo
2017-01-01
Computational histomorphometric approaches typically use low-level image features for building machine learning classifiers. However, these approaches usually ignore high-level expert knowledge. A computational model (M_im) combines low-, mid-, and high-level image information to predict the likelihood of cancer in whole slide images. Handcrafted low- and mid-level features are computed from area, color, and spatial nuclei distributions. High-level information is implicitly captured from the recorded navigations of pathologists while exploring whole slide images during diagnostic tasks. This model was validated by predicting the presence of cancer in a set of unseen fields of view. The available database was composed of 24 cases of basal-cell carcinoma, from which 17 served to estimate the model parameters and the remaining 7 comprised the evaluation set. A total of 274 fields of view of size 1024×1024 pixels were extracted from the evaluation set. Then 176 patches from this set were used to train a support vector machine classifier to predict the presence of cancer on a patch-by-patch basis while the remaining 98 image patches were used for independent testing, ensuring that the training and test sets do not comprise patches from the same patient. A baseline model (M_ex) estimated the cancer likelihood for each of the image patches. M_ex uses the same visual features as M_im, but its weights are estimated from nuclei manually labeled as cancerous or noncancerous by a pathologist. M_im achieved an accuracy of 74.49% and an F-measure of 80.31%, while M_ex yielded corresponding accuracy and F-measures of 73.47% and 77.97%, respectively. PMID:28382314
Relevance feedback-based building recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Allinson, Nigel M.
2010-07-01
Building recognition is a nontrivial task in computer vision research which can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter the following two problems: 1) extracted low level features cannot reveal the true semantic concepts; and 2) they usually involve high dimensional data which require heavy computational costs and memory. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between the low level visual features and high level concepts; while dimensionality reduction methods can mitigate the high-dimensional problem. In this paper, we propose a building recognition scheme which integrates the RF and subspace learning algorithms. Experimental results undertaken on our own building database show that the newly proposed scheme appreciably enhances the recognition accuracy.
Shimamune, Satoru; Jitsumori, Masako
1999-01-01
In a computer-assisted sentence completion task, the effects of grammar instruction and fluency training on learning the use of the definite and indefinite articles of English were examined. Forty-eight native Japanese-speaking students were assigned to four groups: with grammar/accuracy (G/A), without grammar/accuracy (N/A), with grammar/fluency (G/F), and without grammar/fluency (N/F). In the G/A and N/A groups, training continued until performance reached 100% accuracy (accuracy criterion). In the G/F and N/F groups, training continued until 100% accuracy was reached and the correct responses were made at a high speed (fluency criterion). Grammar instruction was given to participants in the G/A and G/F groups but not to those in the N/A and N/F groups. Generalization to new sentences was tested immediately after reaching the required criterion. High levels of generalization occurred, regardless of the type of mastery criterion and whether the grammar instruction was given. Retention tests were conducted 4, 6, and 8 weeks after training. Fluency training effectively improved retention of the performance attained without the grammar instruction. This effect was diminished when grammar instruction was given during training. Learning grammatical rules was not necessary for the generalized use of appropriate definite and indefinite articles or for the maintenance of the performance attained through fluency training. PMID:22477154
NASA Astrophysics Data System (ADS)
Godah, Walyeldeen; Krynski, Jan; Szelachowska, Malgorzata
2018-05-01
The objective of this paper is to demonstrate the usefulness of absolute gravity data for the validation of Global Geopotential Models (GGMs). It is also aimed at improving quasigeoid heights determined from satellite-only GGMs using absolute gravity data. The area of Poland was selected as the study area, as it is uniquely covered with a homogeneously distributed set of absolute gravity data. The gravity anomalies obtained from GGMs were validated using the corresponding ones determined from absolute gravity data. The spectral enhancement method was implemented to overcome the spectral inconsistency in the data being validated. The quasigeoid heights obtained from the satellite-only GGM, as well as from the satellite-only GGM in combination with absolute gravity data, were evaluated with high-accuracy GNSS/levelling data. The estimated accuracy of gravity anomalies obtained from the GGMs investigated is 1.7 mGal. Considering the omitted gravity signal, e.g. from degree and order 101 to 2190, satellite-only GGMs can be validated at the 1 mGal accuracy level using absolute gravity data. An improvement of up to 59% in the accuracy of quasigeoid heights obtained from the satellite-only GGM can be observed when combining the satellite-only GGM with absolute gravity data.
A high-performance seizure detection algorithm based on Discrete Wavelet Transform (DWT) and EEG
Chen, Duo; Wan, Suiren; Xiang, Jing; Bao, Forrest Sheng
2017-01-01
In the past decade, the Discrete Wavelet Transform (DWT), a powerful time-frequency tool, has been widely used in computer-aided signal analysis of epileptic electroencephalography (EEG), such as the detection of seizures. One of the important hurdles in applications of DWT is the choice of its settings, which were chosen empirically or arbitrarily in previous works. This study aimed to develop a framework for automatically searching the optimal DWT settings to improve accuracy and to reduce the computational cost of seizure detection. To address this, we developed a method to decompose EEG data with 7 commonly used wavelet families, to the maximum theoretical level of each mother wavelet. Wavelets and decomposition levels providing the highest accuracy in each wavelet family were then searched in an exhaustive selection of frequency bands, which showed optimal accuracy and low computational cost. The selection of frequency bands and features removed approximately 40% of redundancies. The developed algorithm achieved promising performance on two well-tested EEG datasets (accuracy >90% for both datasets). The experimental results demonstrate that the settings of DWT substantially affect its performance on seizure detection. Compared with existing wavelet-based seizure detection methods, the new approach is more accurate and transferable among datasets. PMID:28278203
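As a minimal, hypothetical illustration of the multilevel decomposition step described above (the study searched 7 wavelet families, typically via a library such as PyWavelets; the sketch below uses only the Haar wavelet, in its unnormalized average/difference form):

```python
# Minimal multilevel Haar DWT sketch (illustrative only, not the study's code).

def haar_step(signal):
    """One DWT level: pairwise averages (approximation) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def haar_dwt(signal, max_level=None):
    """Decompose to the maximum level (halving until one sample remains)."""
    coeffs, approx, level = [], list(signal), 0
    while len(approx) > 1 and (max_level is None or level < max_level):
        approx, detail = haar_step(approx)
        coeffs.append(detail)
        level += 1
    return approx, coeffs  # final approximation + one detail band per level

approx, details = haar_dwt([4, 6, 10, 12])
# approx == [8.0]; details == [[-1.0, -1.0], [-3.0]]
```

The detail bands per level correspond to the frequency bands searched in the study; a real implementation would iterate this over several mother wavelets and feed band statistics to a classifier.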
Multispectral Palmprint Recognition Using a Quaternion Matrix
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049
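The pipeline above can be sketched in simplified, hypothetical form: pack the four spectral channels into quaternion components and compare palmprints with a Euclidean distance over all components plus a nearest-neighbour rule (the names and toy data below are illustrative, not from the paper, and the PCA/DWT feature extraction step is omitted):

```python
# Illustrative quaternion-representation sketch: each pixel's (R, G, B, NIR)
# values become the four components of one quaternion q = r + g*i + b*j + n*k.

def quaternion_image(r, g, b, nir):
    """Pack four equally sized channel lists into a list of 4-tuples (quaternions)."""
    return list(zip(r, g, b, nir))

def quaternion_distance(qa, qb):
    """Euclidean distance over all quaternion components of two images."""
    return sum((x - y) ** 2 for pa, pb in zip(qa, qb) for x, y in zip(pa, pb)) ** 0.5

def nearest_neighbour(query, gallery):
    """Return the label of the gallery sample closest to the query."""
    return min(gallery, key=lambda item: quaternion_distance(query, item[1]))[0]

q = quaternion_image([1, 2], [0, 1], [3, 1], [2, 2])
gallery = [("palm_A", quaternion_image([1, 2], [0, 1], [3, 1], [2, 2])),
           ("palm_B", quaternion_image([9, 9], [9, 9], [9, 9], [9, 9]))]
# nearest_neighbour(q, gallery) == "palm_A"
```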
NASA Astrophysics Data System (ADS)
Buchholz, Bernhard; Ebert, Volker
2018-01-01
Highly accurate water vapor measurements are indispensable for understanding a variety of scientific questions as well as industrial processes. While in metrology water vapor concentrations can be defined, generated, and measured with relative uncertainties in the single percentage range, field-deployable airborne instruments deviate by up to 10-20 % even under quasistatic laboratory conditions. The novel SEALDH-II hygrometer, a calibration-free, tuneable diode laser spectrometer, bridges this gap by implementing a new holistic concept to achieve higher accuracy levels in the field. We present in this paper the absolute validation of SEALDH-II at a traceable humidity generator during 23 days of permanent operation at 15 different H2O mole fraction levels between 5 and 1200 ppmv. At each mole fraction level, we studied the pressure dependence at six different gas pressures between 65 and 950 hPa. Further, we describe the setup for this metrological validation, the challenges to overcome when assessing water vapor measurements on a high accuracy level, and the comparison results. With this validation, SEALDH-II is the first airborne, metrologically validated humidity transfer standard which links several scientific airborne and laboratory measurement campaigns to the international metrological water vapor scale.
Gerrard, Paul
2012-10-01
To determine whether there is a relationship between the level of education and the accuracy of self-reported physical activity as a proxy measure of aerobic fitness. Data from the National Health and Nutrition Examination Survey (NHANES) from the years 1999 to 2004 were used. Linear regression was performed for measured maximum oxygen consumption (Vo(2)max) versus self-reported physical activity for 5 different levels of education. This was a national survey in the United States. Participants included adults from the general U.S. population (N=3290). None. Coefficients of determination obtained from models for each education level were used to compare how well self-reported physical activity represents cardiovascular fitness. These coefficients were the main outcome measure. Coefficients of determination for Vo(2)max versus reported physical activity increased as the level of education increased. In this preliminary study, self-reported physical activity is a better proxy measure for aerobic fitness in highly educated individuals than in poorly educated individuals. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
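The comparison metric used above, the coefficient of determination of a simple linear regression, can be sketched as follows (pure-Python, with made-up toy values, computed separately per education stratum in the study):

```python
# R^2 of the least-squares line y ~ a + b*x, in plain Python.

def r_squared(x, y):
    """Coefficient of determination for simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx  # slope
    ss_res = sum((yi - (my + b * (xi - mx))) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# A perfectly linear relationship gives R^2 == 1.0:
assert r_squared([1, 2, 3, 4], [2, 4, 6, 8]) == 1.0
```

A higher R^2 within a stratum means self-report tracks measured VO2max more tightly there, which is exactly how the study ranks education levels.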
NASA Astrophysics Data System (ADS)
Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei
2018-03-01
Classical SEM metrology, CD-SEM, uses a low data rate and extensive frame-averaging to achieve high-quality SEM imaging for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper introduces a novel e-beam metrology system based on a high data rate, large probe current, and ultra-low-noise electron optics design. At the same level of metrology precision, this high-speed e-beam metrology system can significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hour. Moreover, a novel large field of view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area covered by LFOV is >100x larger than classical SEM. Superior metrology precision throughout the whole image has been achieved, and high-quality metrology data can be extracted from the full field. This new capability will further improve metrology data collection speed to support the need for large volumes of metrology data for OPC model calibration of next-generation technology. The shrinking EPE (Edge Placement Error) budget places more stringent requirements on OPC model accuracy, which is increasingly limited by metrology errors. In the current metrology data collection and processing to model calibration flow, CD-SEM throughput is a bottleneck that limits the amount of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy, especially for 2D pattern prediction. To address the trade-off between metrology sampling and model accuracy under cycle time constraints, this paper employs the high-speed e-beam metrology system and a new computational software solution to take full advantage of the large data volume and significantly reduce both systematic and random metrology errors.
The new computational software enables users to generate large quantity of highly accurate EP (Edge Placement) gauges and significantly improve design pattern coverage with up to 5X gain in model prediction accuracy on complex 2D patterns. Overall, this work showed >2x improvement in OPC model accuracy at a faster model turn-around time.
Thordardottir, Elin
2016-01-01
Grammatical morphology continues to be widely regarded as an area of extraordinary difficulty in children with Specific Language Impairment (SLI). A main argument for this view is the purported high diagnostic accuracy of morphological errors for the identification of SLI. However, findings are inconsistent across age groups and across languages. Studies show morphological difficulty to be far less pronounced in more highly inflected languages, and the diagnostic accuracy of morphology in such languages is largely unknown. This study examines the morphological use of Icelandic children with and without SLI in a cross-sectional sample ranging from preschool age to adolescence and assesses the usefulness of morphology as a clinical marker to identify SLI. Participants were 57 monolingual Icelandic-speaking children aged 4-14 years; 31 with SLI and 26 with typical language development (TD). Spontaneous language samples were coded for correct and incorrect use of grammatical morphology. The diversity of use of grammatical morphemes was documented for each group at different age and MLU levels. Individual accuracy scores were plotted against age as well as MLU, and diagnostic accuracy was calculated. MLU and morphological accuracy increased with age for both children with SLI and TD, with the two groups gradually approaching each other. Morphological diversity and sequence of acquisition were similar across TD and SLI groups compared on the basis of age or MLU. Morphological accuracy was overall high, but was somewhat lower in the SLI group, in particular at ages below 12 years and MLU levels below 6.0. However, overlap between the groups was substantial in all age groups, involving a greater tendency for errors in both groups at young ages and scores close to or at ceiling at older ages.
Sensitivity rates as well as likelihood ratios for each morpheme were all below the range considered acceptable for clinical application, whereas better specificity rates in some age groups for some morphemes indicated that very low scores are indicative of SLI, while high scores are uninformative. Age effects were evident in that the morphemes varied in the age at which they separated the groups most accurately. The findings of this study show that Icelandic children with SLI are somewhat more prone to making morphological errors than their TD counterparts. However, great overlap exists between the groups. The findings call into question the view that grammatical morphology is a central area of deficit in SLI. Copyright © 2016 Elsevier Inc. All rights reserved.
Determination of GPS orbits to submeter accuracy
NASA Technical Reports Server (NTRS)
Bertiger, W. I.; Lichten, S. M.; Katsigris, E. C.
1988-01-01
Orbits for satellites of the Global Positioning System (GPS) were determined with submeter accuracy. Tests used to assess orbital accuracy include orbit comparisons from independent data sets, orbit prediction, ground baseline determination, and formal errors. One satellite tracked 8 hours each day shows rms error below 1 m even when predicted more than 3 days outside of a 1-week data arc. Differential tracking of the GPS satellites in high Earth orbit provides a powerful relative positioning capability, even when a relatively small continental U.S. fiducial tracking network is used with less than one-third of the full GPS constellation. To demonstrate this capability, baselines of up to 2000 km in North America were also determined with the GPS orbits. The 2000 km baselines show rms daily repeatability of 0.3 to 2 parts in 10 to the 8th power and agree with very long baseline interferometry (VLBI) solutions at the level of 1.5 parts in 10 to the 8th power. This GPS demonstration provides an opportunity to test different techniques for high-accuracy orbit determination for high Earth orbiters. The best GPS orbit strategies included data arcs of at least 1 week, process noise models for tropospheric fluctuations, estimation of GPS solar pressure coefficients, and combined processing of GPS carrier phase and pseudorange data. For data arcs of 2 weeks, constrained process noise models for GPS dynamic parameters significantly improved orbit accuracy.
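The quoted fractional repeatabilities translate into absolute baseline scatter as follows (illustrative arithmetic, not part of the original analysis):

```python
# Convert a fractional baseline repeatability (parts in 10^8) into millimetres.

def repeatability_mm(baseline_km, parts_in_1e8):
    """Baseline scatter in millimetres for a given fractional repeatability."""
    return baseline_km * 1e6 * (parts_in_1e8 * 1e-8)  # km -> mm, then scale

# For the 2000 km baselines in the abstract:
#   0.3 parts in 10^8 ->  ~6 mm daily repeatability
#   2.0 parts in 10^8 -> ~40 mm daily repeatability
#   agreement with VLBI at 1.5 parts in 10^8 -> ~30 mm
vlbi_level = repeatability_mm(2000, 1.5)
```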
Accuracy of forecasts in strategic intelligence
Mandel, David R.; Barnes, Alan
2014-01-01
The accuracy of 1,514 strategic intelligence forecasts abstracted from intelligence reports was assessed. The results show that both discrimination and calibration of forecasts were very good. Discrimination was better for senior (versus junior) analysts and for easier (versus harder) forecasts. Miscalibration was mainly due to underconfidence such that analysts assigned more uncertainty than needed given their high level of discrimination. Underconfidence was more pronounced for harder (versus easier) forecasts and for forecasts deemed more (versus less) important for policy decision making. Despite the observed underconfidence, there was a paucity of forecasts in the least informative 0.4–0.6 probability range. Recalibrating the forecasts substantially reduced underconfidence. The findings offer cause for tempered optimism about the accuracy of strategic intelligence forecasts and indicate that intelligence producers aim to promote informativeness while avoiding overstatement. PMID:25024176
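The calibration analysis can be illustrated with a toy example (made-up data, not the study's 1,514 forecasts): group forecasts by stated probability and compare each group's stated probability with its observed hit rate. Underconfidence appears when outcomes are more extreme than the stated probabilities at both ends of the scale.

```python
# Toy calibration table: stated probability -> observed frequency of occurrence.

def calibration_table(forecasts, outcomes):
    """Map each distinct stated probability to its observed hit rate."""
    table = {}
    for p, o in zip(forecasts, outcomes):
        table.setdefault(p, []).append(o)
    return {p: sum(os) / len(os) for p, os in table.items()}

stated   = [0.8, 0.8, 0.8, 0.8, 0.2, 0.2, 0.2, 0.2]
occurred = [1,   1,   1,   1,   0,   0,   0,   0]
table = calibration_table(stated, occurred)
# table[0.8] == 1.0 and table[0.2] == 0.0: this hypothetical analyst
# discriminates perfectly but hedges toward 0.5, i.e. is underconfident.
```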
Zavala, Mary Wassel; Yule, Arthur; Kwan, Lorna; Lambrechts, Sylvia; Maliski, Sally L; Litwin, Mark S
2016-11-01
To examine accuracy of patient-reported prostate-specific antigen (PSA) levels among indigent, uninsured men in a state-funded prostate cancer treatment program that provides case management, care coordination, and health education. Program evaluation. About 114 men with matched self- and lab-reported PSA levels at program enrollment and another time point within 18 months. Abstraction of self- and lab-reported PSA levels to determine self-report as "accurate" or "inaccurate," and evaluate accuracy change over time, before and after nursing interventions. Chi-square tests compared patients with accurate versus inaccurate PSA values. Nonlinear multivariate analyses explored trends in self-reported accuracy over time. Program enrollees receive prostate cancer education from a Nurse Case Manager (NCM), including significance of PSA levels. Men self-report PSA results to their NCM following lab draws and appointments. The NCM provides ongoing education about PSA levels. Of the sample, 46% (n = 53) accurately reported PSA levels. Accuracy of PSA self-reports improved with increasing time since program enrollment. Compared with men at public facilities, those treated at private facilities showed increasing accuracy in self-reported PSA (p = .038). A targeted nursing intervention may increase specific knowledge of PSA levels. Additionally, the provider/treatment setting significantly impacts a patient's disease education and knowledge. © 2016 Wiley Periodicals, Inc.
Nowosad, Jakub; Stach, Alfred; Kasprzyk, Idalia; Weryszko-Chmielewska, Elżbieta; Piotrowska-Weryszko, Krystyna; Puc, Małgorzata; Grewling, Łukasz; Pędziszewska, Anna; Uruska, Agnieszka; Myszkowska, Dorota; Chłopek, Kazimiera; Majkowska-Wojciechowska, Barbara
The aim of the study was to create and evaluate models for predicting high levels of daily pollen concentration of Corylus, Alnus, and Betula using a spatiotemporal correlation of pollen count. For each taxon, a high pollen count level was established according to the first allergy symptoms during exposure. The dataset was divided into a training set and a test set, using a stratified random split. For each taxon and city, the model was built using a random forest method. Corylus models performed poorly. However, the study revealed the possibility of predicting with substantial accuracy the occurrence of days with high pollen concentrations of Alnus and Betula using past pollen count data from monitoring sites. These results can be used for building (1) simpler models, which require data only from aerobiological monitoring sites, and (2) combined meteorological and aerobiological models for predicting high levels of pollen concentration.
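The stratified random split used above can be sketched in plain Python (a hypothetical, minimal version; the study presumably used library routines, and the random-forest fitting step is not shown):

```python
# Stratified train/test split: each class is split in the same proportion,
# so rare "high pollen" days are represented in both subsets.
import random

def stratified_split(labels, test_fraction, seed=0):
    """Return (train_idx, test_idx) with every class split in proportion."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    train, test = [], []
    for idx in by_class.values():
        rng.shuffle(idx)
        cut = int(round(len(idx) * test_fraction))
        test.extend(idx[:cut])
        train.extend(idx[cut:])
    return sorted(train), sorted(test)

labels = ["high"] * 8 + ["low"] * 24          # toy class imbalance 1:3
train, test = stratified_split(labels, 0.25)  # hold out 25% of each class
# len(test) == 8, len(train) == 24, and both keep the 1:3 ratio.
```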
Analysis of Ultra High Resolution Sea Surface Temperature Level 4 Datasets
NASA Technical Reports Server (NTRS)
Wagner, Grant
2011-01-01
Sea surface temperature (SST) studies are often focused on improving accuracy, or understanding and quantifying uncertainties in the measurement, as SST is a leading indicator of climate change and represents the longest time series of any ocean variable observed from space. Over the past several decades SST has been studied with the use of satellite data. This allows a larger area to be studied with much more frequent measurements being taken than direct measurements collected aboard ship or buoys. The Group for High Resolution Sea Surface Temperature (GHRSST) is an international project that distributes satellite derived sea surface temperatures (SST) data from multiple platforms and sensors. The goal of the project is to distribute these SSTs for operational uses such as ocean model assimilation and decision support applications, as well as support fundamental SST research and climate studies. Examples of near real time applications include hurricane and fisheries studies and numerical weather forecasting. The JPL group has produced a new 1 km daily global Level 4 SST product, the Multiscale Ultrahigh Resolution (MUR), that blends SST data from 3 distinct NASA radiometers: the Moderate Resolution Imaging Spectroradiometer (MODIS), the Advanced Very High Resolution Radiometer (AVHRR), and the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E). This new product requires further validation and accuracy assessment, especially in coastal regions. We examined the accuracy of the new MUR SST product by comparing the high resolution version and a lower resolution version that has been smoothed to 19 km (but still gridded to 1 km). Both versions were compared to the same data set of in situ buoy temperature measurements with a focus on study regions of the oceans surrounding North and Central America as well as two smaller regions around the Gulf Stream and California coast.
Ocean fronts exhibit high temperature gradients (Roden, 1976), and thus satellite data of SST can be used in the detection of these fronts. In this case, accuracy is less of a concern because the primary focus is on the spatial derivative of SST. We calculated the gradients for both versions of the MUR data set and did statistical comparisons focusing on the same regions.
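The gradient computation on a gridded SST field can be sketched with central differences (illustrative, not the study's exact processing; a real analysis would use the grid spacing in metres):

```python
# Gradient magnitude |∇T| on the interior points of a 2-D gridded field.

def gradient_magnitude(field, dx=1.0, dy=1.0):
    """Central-difference gradient magnitude for a 2-D list (rows x cols)."""
    rows, cols = len(field), len(field[0])
    out = []
    for i in range(1, rows - 1):
        row = []
        for j in range(1, cols - 1):
            gx = (field[i][j + 1] - field[i][j - 1]) / (2 * dx)
            gy = (field[i + 1][j] - field[i - 1][j]) / (2 * dy)
            row.append((gx ** 2 + gy ** 2) ** 0.5)
        out.append(row)
    return out

# A toy field increasing by 2 per column has |∇T| = 2 at every interior point;
# a front would show up as a band of locally large values instead.
sst = [[0, 2, 4, 6],
       [0, 2, 4, 6],
       [0, 2, 4, 6]]
# gradient_magnitude(sst) == [[2.0, 2.0]]
```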
Mejia Tobar, Alejandra; Hyoudou, Rikiya; Kita, Kahori; Nakamura, Tatsuhiro; Kambara, Hiroyuki; Ogata, Yousuke; Hanakawa, Takashi; Koike, Yasuharu; Yoshimura, Natsue
2017-01-01
The classification of ankle movements from non-invasive brain recordings can be applied to a brain-computer interface (BCI) to control exoskeletons, prosthesis, and functional electrical stimulators for the benefit of patients with walking impairments. In this research, ankle flexion and extension tasks at two force levels in both legs, were classified from cortical current sources estimated by a hierarchical variational Bayesian method, using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings. The hierarchical prior for the current source estimation from EEG was obtained from activated brain areas and their intensities from an fMRI group (second-level) analysis. The fMRI group analysis was performed on regions of interest defined over the primary motor cortex, the supplementary motor area, and the somatosensory area, which are well-known to contribute to movement control. A sparse logistic regression method was applied for a nine-class classification (eight active tasks and a resting control task) obtaining a mean accuracy of 65.64% for time series of current sources, estimated from the EEG and the fMRI signals using a variational Bayesian method, and a mean accuracy of 22.19% for the classification of the pre-processed EEG sensor signals, with a chance level of 11.11%. The higher classification accuracy of current sources, when compared to EEG classification accuracy, was attributed to the high number of sources and the different signal patterns obtained in the same vertex for different motor tasks. Since the inverse filter for current source estimation can be computed offline, the present method is applicable to real-time BCIs.
Finally, due to the highly enhanced spatial distribution of current sources over the brain cortex, this method has the potential to identify activation patterns to design BCIs for the control of an affected limb in patients with stroke, or BCIs from motor imagery in patients with spinal cord injury.
NASA Astrophysics Data System (ADS)
Davenport, F., IV; Harrison, L.; Shukla, S.; Husak, G. J.; Funk, C. C.
2017-12-01
We evaluate the predictive accuracy of an ensemble of empirical model specifications that use earth observation data to predict sub-national grain yields in Mexico and East Africa. Products that are actively used for seasonal drought monitoring are tested as yield predictors. Our research is driven by the fact that East Africa is a region where decisions regarding agricultural production are critical to preventing the loss of economic livelihoods and human life. Regional grain yield forecasts can be used to anticipate availability and prices of key staples, which in turn can inform decisions about targeting humanitarian response such as food aid. Our objective is to identify, for a given region, grain, and time of year, what type of model and/or earth observation product can most accurately predict end-of-season yields. We fit a set of models to county-level panel data from Mexico, Kenya, Sudan, South Sudan, and Somalia. We then examine out-of-sample predictive accuracy using various linear and non-linear models that incorporate spatial and time-varying coefficients. We compare accuracy within and across models that use predictor variables from remotely sensed measures of precipitation, temperature, soil moisture, and other land surface processes. We also examine at what point in the season a given model or product is most useful for determining predictive accuracy. Finally, we compare predictive accuracy across a variety of agricultural regimes, including high-intensity irrigated commercial agriculture and rain-fed subsistence-level farms.
Misawa, Masashi; Kudo, Shin-Ei; Mori, Yuichi; Takeda, Kenichi; Maeda, Yasuharu; Kataoka, Shinichi; Nakamura, Hiroki; Kudo, Toyoki; Wakamura, Kunihiko; Hayashi, Takemasa; Katagiri, Atsushi; Baba, Toshiyuki; Ishida, Fumio; Inoue, Haruhiro; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku
2017-05-01
Real-time characterization of colorectal lesions during colonoscopy is important for reducing medical costs, given that the need for a pathological diagnosis can be omitted if the accuracy of the diagnostic modality is sufficiently high. However, it is sometimes difficult for community-based gastroenterologists to achieve the required level of diagnostic accuracy. In this regard, we developed a computer-aided diagnosis (CAD) system based on endocytoscopy (EC) to evaluate cellular, glandular, and vessel structure atypia in vivo. The purpose of this study was to compare the diagnostic ability and efficacy of this CAD system with the performances of human expert and trainee endoscopists. We developed a CAD system based on EC with narrow-band imaging that allowed microvascular evaluation without dye (ECV-CAD). The CAD algorithm was programmed based on texture analysis and provided a two-class diagnosis of neoplastic or non-neoplastic, with probabilities. We validated the diagnostic ability of the ECV-CAD system using 173 randomly selected EC images (49 non-neoplasms, 124 neoplasms). The images were evaluated by the CAD and by four expert endoscopists and three trainees. The diagnostic accuracies for distinguishing between neoplasms and non-neoplasms were calculated. ECV-CAD had higher overall diagnostic accuracy than trainees (87.8 vs 63.4%; [Formula: see text]), but similar to experts (87.8 vs 84.2%; [Formula: see text]). With regard to high-confidence cases, the overall accuracy of ECV-CAD was also higher than trainees (93.5 vs 71.7%; [Formula: see text]) and comparable to experts (93.5 vs 90.8%; [Formula: see text]). ECV-CAD showed better diagnostic accuracy than trainee endoscopists and was comparable to that of experts. ECV-CAD could thus be a powerful decision-making tool for less-experienced endoscopists.
Teaching acute care nurses cognitive assessment using LOCFAS: what's the best method?
Flannery, J; Land, K
2001-02-01
The Levels of Cognitive Functioning Assessment Scale (LOCFAS) is a behavioral checklist used by nurses in the acute care setting to assess the level of cognitive functioning in severely brain-injured patients in the early post-trauma period. Previous research studies have supported the reliability and validity of LOCFAS. For LOCFAS to become a more firmly established method of cognitive assessment, nurses must become familiar with and proficient in the use of this instrument. The purpose of this study was to find the most effective method of instruction by comparing three methods: a self-directed manual, a teaching video, and a classroom presentation. Videotaped vignettes of actual brain-injured patients were presented at the end of each training session, and participants were required to categorize these videotaped patients by using LOCFAS. High levels of reliability were observed for both the self-directed manual group and the teaching video group, but an overall lower level of reliability was observed for the classroom presentation group. Examination of the accuracy of overall LOCFAS ratings revealed a significant difference for instructional groups; the accuracy of the classroom presentation group was significantly lower than that of either the self-directed manual group or the teaching video group. The three instructional groups also differed on the average accuracy of ratings of the individual behaviors; the accuracy of the classroom presentation group was significantly lower than that of the teaching video group, whereas the self-directed manual group fell in between. Nurses also rated the instructional methods across a number of evaluative dimensions on a 5-point Likert-type scale. Evaluative statements ranged from average to good, with no significant differences among instructional methods.
NASA Astrophysics Data System (ADS)
Iorio, Lorenzo; Lucchesi, David
2003-07-01
In this paper we analyse quantitatively the concept of LAGEOS-type satellites in critical supplementary orbit configuration (CSOC), which has proved capable of yielding various observables for many tests of general relativity in the terrestrial gravitational field, with particular emphasis on the measurement of the Lense-Thirring effect. By using an entirely new pair of LAGEOS-type satellites in identical, supplementary orbits with, e.g., semimajor axes a = 12 000 km, eccentricity e = 0.05 and inclinations iS1 = 63.4° and iS2 = 116.6°, it would be possible to cancel out the impact of the mismodelling of the static part of the gravitational field of the Earth to a very high level of accuracy. The departures from the ideal supplementary orbital configuration due to the orbital injection errors would yield systematic gravitational errors of the order of a few per cent, according to the covariance matrix of the EGM96 gravity model up to degree l = 20. However, the forthcoming, new gravity models from the CHAMP and GRACE missions should greatly improve the situation. So, it should be possible to measure the gravitomagnetic shifts of the sum of their nodes, ΣΩ̇_LT, with an accuracy level of perhaps less than 1%, of the difference of their perigees, Δω̇_LT, with an accuracy level of 5%, and of the combined observable ΣΩ̇_LT − Δω̇_LT with an accuracy level of 2.8%. Such results, which are due to the non-gravitational perturbations mismodelling, have been obtained for an observational time span of about 6 years and could be further improved by fitting and removing from the analysed time series the major time-varying perturbations which have known periodicities.
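The supplementary condition and the observables named in the abstract can be written out in standard notation (a reconstruction under the usual conventions of the LAGEOS literature, not copied from the paper):

```latex
% Supplementary configuration: identical orbits with inclinations summing
% to 180 degrees, e.g. 63.4 + 116.6 = 180.
i_{S2} = 180^{\circ} - i_{S1}

% Lense-Thirring observables: sum of the nodal rates (the classical
% oblateness-driven nodal precessions cancel in this sum) and difference
% of the perigee rates.
\Sigma\dot{\Omega}_{LT} \equiv \dot{\Omega}^{S1}_{LT} + \dot{\Omega}^{S2}_{LT},
\qquad
\Delta\dot{\omega}_{LT} \equiv \dot{\omega}^{S1}_{LT} - \dot{\omega}^{S2}_{LT}
```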
Metal-on-Metal Total Hip Arthroplasty: Quality of Online Patient Information.
Crozier-Shaw, Geoff; Queally, Joseph M; Quinlan, John F
2017-03-01
Metal-on-metal total hip arthroplasty (THA) has generated much attention in the media because of early failure of certain implant systems. This study assessed the quality, accuracy, and readability of online information on metal-on-metal THA. The search terms "metal-on-metal hip replacement" and "metal hip replacement" were entered into the 3 most popular search engines. Information quality was assessed with the DISCERN score and a specific metal-on-metal THA content score. Accuracy of information was assessed with a customized score. Readability of the websites was assessed with the Flesch-Kincaid grade level score. A total of 61 unique websites were assessed. For 56% of websites, the target audience was patients. Media or medicolegal sources accounted for 44% of websites. As assessed by DISCERN (range, 16-80) and metal-on-metal THA (range, 0-25) scores, quality of the websites was moderate at best (47.1 and 9.6, respectively). Accuracy (range, 0-8) of the information presented also was moderate, with a mean score of 6.6. Media and medicolegal websites had the lowest scores for both quality and accuracy, despite making up the greatest proportion of sites assessed. Only 1 website (2%) had a Flesch-Kincaid grade level at or less than the recommended level of 8th grade. This study found that online information on metal-on-metal THA was of poor quality, often was inaccurate, and was presented at an inappropriately high reading level, particularly for media and medicolegal websites. Health care providers should counsel patients on the quality of information available and recommend appropriate online resources. [Orthopedics. 2017; 40(2):e262-e268.]. Copyright 2016, SLACK Incorporated.
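Since the study leans on the Flesch-Kincaid grade level, a rough sketch of how that score is computed may be useful. The formula is the published one; the vowel-group syllable counter is a crude stand-in for a real syllabifier and is an approximation of this sketch:

```python
import re

def count_syllables(word):
    """Very rough vowel-group syllable counter (approximation)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

simple = "The hip is a joint. It can wear out. A new hip can help."
print(round(flesch_kincaid_grade(simple), 1))  # low grade level for short, simple prose
```

Short sentences with few syllables per word score near or below the recommended 8th-grade threshold; dense clinical prose scores far above it.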
Montskó, Gergely; Tarjányi, Zita; Mezősi, Emese; Kovács, Gábor L
2014-04-01
Blood cortisol level is routinely analysed in laboratory medicine, but the immunoassays in widespread use have the disadvantage of cross-reactivity with some commonly used steroid drugs. Mass spectrometry has become a method of increasing importance for cortisol estimation. However, current methods do not offer the option of accurate mass identification. Our objective was to develop a mass spectrometry method to analyse salivary, serum total, and serum free cortisol via accurate mass identification. The analysis was performed on a Bruker micrOTOF high-resolution mass spectrometer. Sample preparation involved protein precipitation, serum ultrafiltration, and solid-phase extraction. Limit of quantification was 12.5 nmol L(-1) for total cortisol, 440 pmol L(-1) for serum ultrafiltrate, and 600 pmol L(-1) for saliva. Average intra-assay variation was 4.7%, and inter-assay variation was 6.6%. Mass accuracy was <2.5 ppm. Serum total cortisol levels were in the range 35.6-1088 nmol L(-1), and serum free cortisol levels were in the range 0.5-12.4 nmol L(-1). Salivary cortisol levels were in the range 0.7-10.4 nmol L(-1). Mass accuracy was equal to or below 2.5 ppm, resulting in a mass error less than 1 mDa and thus providing high specificity. We did not observe any interference with routinely used steroidal drugs. The method is capable of specific cortisol quantification in different matrices on the basis of accurate mass identification.
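The reported mass accuracy can be sanity-checked against cortisol's elemental composition (C21H30O5). A sketch assuming standard monoisotopic atomic masses:

```python
# Hedged sketch: cortisol is C21H30O5; its monoisotopic mass follows from
# standard monoisotopic atomic masses. A 2.5 ppm mass error at this mass
# corresponds to less than 1 mDa, consistent with the reported specificity.

MONOISOTOPIC = {"C": 12.0, "H": 1.0078250319, "O": 15.9949146221}
CORTISOL = {"C": 21, "H": 30, "O": 5}

def monoisotopic_mass(formula):
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items())

def ppm_error(measured, theoretical):
    return (measured - theoretical) / theoretical * 1e6

m_theory = monoisotopic_mass(CORTISOL)          # ~362.2093 Da
max_error_da = 2.5e-6 * m_theory                # 2.5 ppm window in daltons
print(m_theory, max_error_da)                   # ~362.2093, ~0.0009 Da (<1 mDa)
```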
[An optical-fiber-sensor-based spectrophotometer for soil non-metallic nutrient determination].
He, Dong-xian; Hu, Juan-xiu; Lu, Shao-kun; He, Hou-yong
2012-01-01
In order to achieve rapid, convenient and efficient soil nutrient determination for soil testing and fertilizer recommendation, a portable optical-fiber-sensor-based spectrophotometer comprising an immersed fiber sensor, a flat-field holographic concave grating, and a diode array detector was developed for soil non-metallic nutrient determination. Evaluated against the national standard for ultraviolet and visible spectrophotometers (JJG 178-2007), the wavelength accuracy and repeatability, baseline stability, and transmittance accuracy and repeatability of the prototype instrument met the Class III requirements, while the minimum spectral bandwidth, noise and drift, and stray light met the Class IV requirements. Significant linear relationships, with slopes close to 1, were found between the soil available nutrient contents (soil nitrate nitrogen, ammonia nitrogen, available phosphorus, available sulfur, available boron, and organic matter) measured by the prototype instrument and those measured by two commercial single-beam and dual-beam spectrophotometers, with no significant differences between the instruments. Therefore, the optical-fiber-sensor-based spectrophotometer can be used for rapid soil non-metallic nutrient determination with high accuracy.
Human supervision and microprocessor control of an optical tracking system
NASA Technical Reports Server (NTRS)
Bigley, W. J.; Vandenberg, J. D.
1981-01-01
Gunners using small calibre anti-aircraft systems have not been able to track high-speed air targets effectively. Substantial improvement in the accuracy of surface fire against attacking aircraft has been realized through the design of a director-type weapon control system. This system concept frees the gunner to exercise a supervisory/monitoring role while the computer takes over continuous target tracking. This change capitalizes on a key consideration of human factors engineering while increasing system accuracy. The advanced system design, which uses distributed microprocessor control, is discussed at the block diagram level and is contrasted with the previous implementation.
Error-proneness as a handicap signal.
De Jaegher, Kris
2003-09-21
This paper describes two discrete signalling models in which the error-proneness of signals can serve as a handicap signal. In the first model, the direct handicap of sending a high-quality signal is not large enough to ensure that a low-quality signaller will not send it. However, if the receiver sometimes mistakes a high-quality signal for a low-quality one, then there is an indirect handicap to sending a high-quality signal. The total handicap of sending such a signal may then still be such that a low-quality signaller would not want to send it. In the second model, there is no direct handicap to sending signals, so that nothing would seem to stop a signaller from always sending a high-quality signal. However, the receiver sometimes fails to detect signals, and this creates an indirect handicap of sending a high-quality signal that still stops the low-quality signaller from sending such a signal. The conditions for honesty are that the probability of an error of detection is higher for a high-quality than for a low-quality signal, and that a receiver who does not detect a signal adopts a response that is bad for the signaller. In both our models, we thus obtain the result that signal accuracy must not lie above a certain level for honest signalling to be possible. Moreover, we show that the maximal accuracy that can be achieved is higher the lower the degree of conflict between signaller and receiver. Finally, we show that it may be the conditions for honest signalling that constrain signal accuracy, rather than the signaller trying to make honest signals as effective as possible given receiver psychology, or the signaller adapting the accuracy of honest signals depending on his interests.
High-resolution method for evolving complex interface networks
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-04-01
In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, and (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction, show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.
Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data
NASA Astrophysics Data System (ADS)
d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter
2010-12-01
IRS-P5 Cartosat-1 high resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of using reference images with a lower lateral accuracy, which results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using Semiglobal Matching. The proposed method is part of an operational Cartosat-1 processor for the generation of a high resolution DSM. Evaluation of 18 scenes against independent ground truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.
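CE90 and LE90 are 90th-percentile error statistics: the horizontal (circular) and vertical error magnitudes below which 90% of checked points fall. A minimal sketch with made-up error samples (the paper's per-scene errors are not given here):

```python
# Hedged sketch (synthetic data, not the paper's measurements): CE90 and LE90
# are the 90th percentiles of the horizontal (circular) and vertical error
# magnitudes of DSM points against ground truth.

def percentile_90(values):
    """90th percentile by linear interpolation between sorted samples."""
    s = sorted(values)
    rank = 0.9 * (len(s) - 1)
    lo = int(rank)
    frac = rank - lo
    return s[lo] if lo + 1 >= len(s) else s[lo] * (1 - frac) + s[lo + 1] * frac

# horizontal error = sqrt(dx^2 + dy^2); vertical error = |dz|
horizontal = [1.2, 3.4, 5.0, 6.1, 2.8, 7.3, 4.4, 5.9, 3.1, 6.6]
vertical = [0.5, 1.1, 4.8, 2.2, 3.9, 5.2, 1.7, 2.9, 4.1, 3.3]

ce90 = percentile_90(horizontal)
le90 = percentile_90(vertical)
print(ce90, le90)
```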
DOT National Transportation Integrated Search
2011-07-01
The use of lane assistance systems can reduce the stress levels experienced by drivers and allow for better lane keeping in narrow, bus-dedicated lanes. In 2008, the Intelligent Vehicles (IV) Lab at the University of Minnesota developed such ...
Can Personal Exposures to Higher Nighttime and Early Morning Temperatures Increase Blood Pressure?
Environmental temperatures are inversely related to BP; however, the effects of short-term temperature changes within a 24-hour period and measured with high accuracy at the personal level have not been described. Fifty-one nonsmoking patients living in the Detroit area had up to...
ERIC Educational Resources Information Center
Zhang, Dake; Stecker, Pamela; Huckabee, Sloan; Miller, Rhonda
2016-01-01
Research has suggested that different strategies used when solving fraction problems are highly correlated with students' problem-solving accuracy. This study (a) utilized latent profile modeling to classify students into three different strategic developmental levels in solving fraction comparison problems and (b) accordingly provided…
Larmer, S G; Sargolzaei, M; Schenkel, F S
2014-05-01
Genomic selection requires a large reference population to accurately estimate single nucleotide polymorphism (SNP) effects. In some Canadian dairy breeds, the available reference populations are not large enough for accurate estimation of SNP effects for traits of interest. If marker phase is highly consistent across multiple breeds, it is theoretically possible to increase the accuracy of genomic prediction for one or all breeds by pooling several breeds into a common reference population. This study investigated the extent of linkage disequilibrium (LD) in 5 major dairy breeds using a 50,000 (50K) SNP panel and 3 of the same breeds using the 777,000 (777K) SNP panel. Correlation of pair-wise SNP phase was also investigated on both panels. The level of LD was measured using the squared correlation of alleles at 2 loci (r(2)), and the consistency of SNP gametic phase was assessed using the signed square root of these values. Because of the high cost of the 777K panel, the accuracy of imputation from lower density marker panels [6,000 (6K) or 50K] was examined both within breed and using a multi-breed reference population in Holstein, Ayrshire, and Guernsey. Imputation was carried out using FImpute V2.2 and Beagle 3.3.2 software. Imputation accuracies were then calculated as both the proportion of correct SNP filled in (concordance rate) and allelic R(2). Computation time was also explored to determine the efficiency of the different algorithms for imputation. Analysis showed that LD values >0.2 were found in all breeds at distances at or shorter than the average adjacent pair-wise distance between SNP on the 50K panel. Correlations of r-values, however, did not reach high levels (<0.9) at these distances. High correlation values of SNP phase between breeds were observed (>0.94) when the average pair-wise distances using the 777K SNP panel were examined.
High concordance rate (0.968-0.995) and allelic R(2) (0.946-0.991) were found for all breeds when imputation was carried out with FImpute from 50K to 777K. Imputation accuracy for Guernsey and Ayrshire was slightly lower when using the imputation method in Beagle. Computing time was significantly greater when using Beagle software, with all comparable procedures being 9 to 13 times less efficient, in terms of time, compared with FImpute. These findings suggest that use of a multi-breed reference population might increase prediction accuracy using the 777K SNP panel and that 777K genotypes can be efficiently and effectively imputed using the lower density 50K SNP panel. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
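The LD statistics used above follow the standard two-locus definitions. A sketch with illustrative allele and haplotype frequencies (invented for this example):

```python
import math

# Hedged sketch of the LD statistic used here: for two biallelic loci with
# allele frequencies pA, pB and haplotype frequency pAB, D = pAB - pA*pB,
# r^2 = D^2 / (pA*(1-pA)*pB*(1-pB)), and the signed root r carries the phase.

def ld_r(p_a, p_b, p_ab):
    d = p_ab - p_a * p_b
    denom = math.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))
    return d / denom

def ld_r2(p_a, p_b, p_ab):
    return ld_r(p_a, p_b, p_ab) ** 2

# Two loci in strong positive LD (frequencies are illustrative):
r = ld_r(0.6, 0.55, 0.45)
print(r, r * r)
```

Comparing the signed r (rather than r²) between breeds is what preserves phase information: identical magnitude with opposite sign would indicate opposite gametic phase.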
NASA Astrophysics Data System (ADS)
Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris
2016-04-01
The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. 
As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in an increase in position accuracy (mostly in the less favorable East direction) and a large reduction in convergence time in PPP static and kinematic solutions compared to GPS-only PPP solutions for various observational session durations. However, this is mostly observed when Galileo and BeiDou satellites are visible for a substantial part of an observational session. In GPS-only cases dealing with data from high elevation cut-off angles, the number of GPS satellites decreases dramatically, leading to position accuracy and convergence time that deviate from satisfactory geodetic thresholds. By contrast, the respective multi-GNSS PPP solutions not only show improvement, but also reach geodetic-level accuracies even at a 30° elevation cut-off. Finally, the GPS ambiguity resolution in PPP processing is investigated using the GPS satellite wide-lane fractional cycle biases, which are included in the clock products by CNES. It is shown that their addition shortens the convergence time and increases the position accuracy of PPP solutions, especially in kinematic mode. Analogous improvement is obtained in the respective multi-GNSS solutions, even though the GLONASS, Galileo and BeiDou ambiguities remain float, since information about them is not provided in the clock products available to date.
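A key reason multi-frequency signals matter for PPP is the ionosphere-free observable. A sketch using the GPS L1/L2 frequencies, with an invented geometric range and slant delay:

```python
# Hedged sketch of the standard dual-frequency observable behind PPP: the
# ionosphere-free pseudorange combination
#   P_IF = (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2)
# removes the first-order ionospheric delay, which scales as 1/f^2.

F1 = 1575.42e6  # GPS L1, Hz
F2 = 1227.60e6  # GPS L2, Hz

def iono_free(p1, p2, f1=F1, f2=F2):
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Simulated geometric range of 20,000 km plus a 1/f^2 ionospheric delay:
rho = 20_000e3
iono_l1 = 5.0                       # metres of delay on L1 (assumed)
iono_l2 = iono_l1 * (F1 / F2) ** 2  # the same slant TEC seen on L2
p_if = iono_free(rho + iono_l1, rho + iono_l2)
print(p_if - rho)  # ~0: ionospheric delay eliminated
```

With third frequencies from Galileo or BeiDou, additional such combinations become available, which is part of what drives the redundancy gains described above.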
Exploration of the Components of Children's Reading Comprehension Using Rauding Theory.
ERIC Educational Resources Information Center
Rupley, William H.; And Others
A study explored an application of rauding theory to the developmental components that contribute to elementary-age children's reading comprehension. The relationships among cognitive power, auditory accuracy level, pronunciation (word recognition) level, rauding (comprehension) accuracy level, rauding rate (reading rate) level, and rauding…
Hasslacher, Christoph; Kulozik, Felix; Platten, Isabel
2014-05-01
We investigated the analytical accuracy of 27 glucose monitoring systems (GMS) in a clinical setting, using the new ISO accuracy limits. In addition to measuring accuracy at blood glucose (BG) levels < 100 mg/dl and > 100 mg/dl, we also analyzed device performance with respect to these criteria at 5 specific BG level ranges, making it possible to further differentiate between devices with regard to overall performance. Carbohydrate meals and insulin injections were used to induce an increase or decrease in BG levels in 37 insulin-dependent patients. Capillary blood samples were collected at 10-minute intervals, and BG levels were determined simultaneously using the GMS and a laboratory-based method. Results obtained via both methods were analyzed according to the new ISO criteria. Only 12 of the 27 devices tested met the overall requirements of the new ISO accuracy limits. When accuracy was assessed at BG levels < 100 mg/dl and > 100 mg/dl, the criteria were met by 14 and 13 devices, respectively. A more detailed analysis involving 5 different BG level ranges revealed that 13 (48.1%) devices met the required criteria at BG levels between 50 and 150 mg/dl, whereas 19 (70.3%) met these criteria at BG levels above 250 mg/dl. The overall frequency of outliers was low. The assessment of the analytical accuracy of GMS at a number of BG level ranges made it possible to further differentiate between devices with regard to overall performance, a process that is of particular importance given the user-centered nature of the devices' intended use. © 2014 Diabetes Technology Society.
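The ISO 15197:2013 accuracy criterion, as it is commonly summarized (a paraphrase, not a quotation of the standard), can be coded directly. The meter/reference pairs below are invented for illustration:

```python
# Hedged sketch of the accuracy criterion (as commonly stated for ISO
# 15197:2013): >=95% of meter readings must fall within +/-15 mg/dl of the
# reference below 100 mg/dl, and within +/-15% at 100 mg/dl and above.

def within_iso_limit(meter, reference):
    if reference < 100:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.15 * reference

def meets_iso(pairs):
    ok = sum(within_iso_limit(m, r) for m, r in pairs)
    return ok / len(pairs) >= 0.95

pairs = [(72, 80), (95, 90), (140, 150), (210, 200), (260, 250)]
print([within_iso_limit(m, r) for m, r in pairs], meets_iso(pairs))
```

The switch from an absolute to a relative limit at 100 mg/dl is why the study reports accuracy separately below and above that threshold.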
Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan
2013-11-01
Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition) and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions.
In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms, such that volumes quantified from scans with different reconstruction algorithms can be compared. The small difference found between the precision of FBP and iterative reconstructions may reflect both iterative reconstruction's diminished noise reduction at the edges of the nodules and its loss of resolution at high noise levels. The findings do not rule out a potential advantage of iterative reconstruction that might be evident in a study using a larger number of nodules or repeated scans.
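The study's two summary notions, accuracy and precision of volume quantification, can be sketched as percent bias against the known synthetic-nodule volume and coefficient of variation across repeats. The numbers below are illustrative, not the study's data:

```python
import math
import statistics

# Hedged sketch (illustrative numbers): accuracy as percent bias against the
# known volume of a 9.5 mm spherical synthetic nodule, precision as the
# coefficient of variation (CV) across repeated scans.

def sphere_volume(diameter_mm):
    return math.pi * diameter_mm**3 / 6.0

true_vol = sphere_volume(9.5)                   # ~449 mm^3
repeats = [455.0, 448.0, 461.0, 452.0, 458.0]   # hypothetical repeat measurements

bias_pct = (statistics.mean(repeats) - true_vol) / true_vol * 100
cv_pct = statistics.stdev(repeats) / statistics.mean(repeats) * 100
print(round(bias_pct, 1), round(cv_pct, 1))
```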
High-resolution mapping of vehicle emissions in China in 2008
NASA Astrophysics Data System (ADS)
Zheng, B.; Huo, H.; Zhang, Q.; Yao, Z. L.; Wang, X. T.; Yang, X. F.; Liu, H.; He, K. B.
2014-09-01
This study is the first in a series of papers that aim to develop high-resolution emission databases for different anthropogenic sources in China. Here we focus on on-road transportation. Because of the increasing impact of on-road transportation on regional air quality, developing an accurate and high-resolution vehicle emission inventory is important for both the research community and air quality management. This work proposes a new inventory methodology to improve the spatial and temporal accuracy and resolution of vehicle emissions in China. We calculate, for the first time, the monthly vehicle emissions for 2008 in 2364 counties (an administrative unit one level lower than city) by developing a set of approaches to estimate vehicle stock and monthly emission factors at county-level, and technology distribution at provincial level. We then introduce allocation weights for the vehicle kilometers traveled to assign the county-level emissions onto 0.05° × 0.05° grids based on the China Digital Road-network Map (CDRM). The new methodology overcomes the common shortcomings of previous inventory methods, including neglecting the geographical differences between key parameters and using surrogates that are weakly related to vehicle activities to allocate vehicle emissions. The new method has great advantages over previous methods in depicting the spatial distribution characteristics of vehicle activities and emissions. This work provides a better understanding of the spatial representation of vehicle emissions in China and can benefit both air quality modeling and management with improved spatial accuracy.
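The allocation step described, spreading a county's emission total over 0.05° grid cells in proportion to vehicle-kilometres-travelled (VKT) weights from the road network, reduces to a proportional split. A sketch with hypothetical weights:

```python
# Hedged sketch of the allocation step (hypothetical numbers): a county's
# emission total is spread over its grid cells in proportion to
# vehicle-kilometres-travelled (VKT) weights derived from the road network.

def allocate(county_total, vkt_weights):
    """Split county_total across cells proportionally to vkt_weights."""
    total_w = sum(vkt_weights.values())
    return {cell: county_total * w / total_w for cell, w in vkt_weights.items()}

weights = {(0, 0): 10.0, (0, 1): 30.0, (1, 0): 45.0, (1, 1): 15.0}
gridded = allocate(1000.0, weights)   # e.g. tonnes of an emitted species
print(gridded)
assert abs(sum(gridded.values()) - 1000.0) < 1e-9  # mass is conserved
```

The conservation check at the end is the essential property: gridding must redistribute, never create or destroy, the county-level total.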
Nedelcu, Robert; Olsson, Pontus; Nyström, Ingela; Thor, Andreas
2018-02-23
Several studies have evaluated the accuracy of intraoral scanners (IOS), but data are lacking on variations between IOS systems in the depiction of the critical finish line and on finish line accuracy. The aim of this study was to analyze the finish line distinctness (FLD) and finish line accuracy (FLA) of 7 intraoral scanners (IOS) and one conventional impression (IMPR), and to assess parameters of resolution, tessellation, topography, and color. A dental model with a crown preparation including supra- and subgingival finish lines was reference-scanned with an industrial scanner (ATOS) and scanned with seven IOS: 3M, CS3500 and CS3600, DWIO, Omnicam, Planscan and Trios. An IMPR was taken and poured, and the model was scanned with a laboratory scanner. The ATOS scan was cropped at the finish line and best-fit aligned for 3D Compare Analysis (Geomagic). Accuracy was visualized, and descriptive analysis was performed. All IOS, except Planscan, had comparable overall accuracy; however, FLD and FLA varied substantially. Trios presented the highest FLD and, with CS3600, the highest FLA. 3M and DWIO had low overall FLD and low FLA in subgingival areas, whilst Planscan had overall low FLD and FLA, as well as lower general accuracy. IMPR presented high FLD, except in subgingival areas, and high FLA. Trios had the highest resolution among IOS, by a factor of 1.6 to 3.1, followed by IMPR, DWIO, Omnicam, CS3500, 3M, CS3600 and Planscan. Tessellation was found to be non-uniform except in 3M and DWIO. Topographic variation was found for 3M and Trios, with deviations below +/- 25 μm for Trios. Inclusion of color enhanced identification of the finish line in Trios, Omnicam and CS3600, but not in Planscan. There were sizeable variations between IOS, with both higher and lower FLD and FLA than IMPR. High FLD was more related to high localized finish line resolution and non-uniform tessellation than to high overall resolution. Topography variations were low.
Color improved finish line identification in some IOS. It is imperative that clinicians critically evaluate the digital impression, being aware of varying technical limitations among IOS, in particular when challenging subgingival conditions apply.
NASA Technical Reports Server (NTRS)
Luthcke, Scott; Rowlands, David; Lemoine, Frank; Zelensky, Nikita; Beckley, Brian; Klosko, Steve; Chinn, Doug
2006-01-01
Although satellite altimetry has been around for thirty years, the last fifteen, beginning with the launch of TOPEX/Poseidon (TP), have yielded an abundance of significant results, including: monitoring of ENSO events, detection of internal tides, determination of accurate global tides, unambiguous delineation of Rossby waves and their propagation characteristics, accurate determination of geostrophic currents, and a multi-decadal time series of mean sea level trend and dynamic ocean topography variability. While the high level of accuracy being achieved is a result of both instrument maturity and the quality of models and correction algorithms applied to the data, improving the quality of the Climate Data Records produced from altimetry is highly dependent on concurrent progress being made in fields such as orbit determination. The precision orbits form the reference frame from which the radar altimeter observations are made. Therefore, the accuracy of the altimetric mapping is limited to a great extent by the accuracy to which a satellite orbit can be computed. The TP mission represents the first time that the radial component of an altimeter orbit was routinely computed with an accuracy of 2 cm. Recently it has been demonstrated that it is possible to compute the radial component of Jason orbits with an accuracy of better than 1 cm. Additionally, still further improvements in TP orbits are being achieved with new techniques and algorithms largely developed from combined Jason and TP data analysis. While these recent POD achievements are impressive, the new accuracies are now revealing subtle systematic orbit errors that manifest as both intra- and inter-annual ocean topography errors. Additionally, the construction of inter-decadal time series of climate data records requires the removal of systematic differences across multiple missions.
Current and future efforts must focus on the understanding and reduction of these errors in order to generate a complete and consistent time series of improved orbits across multiple missions and decades required for the most stringent climate-related research. This presentation discusses the POD progress and achievements made over nearly three decades, and presents the future challenges, goals and their impact on altimetric derived ocean sciences.
Salivary progesterone and cervical length measurement as predictors of spontaneous preterm birth.
Maged, Ahmed M; Mohesen, Mohamed; Elhalwagy, Ahmed; Abdelhafiz, Ali
2015-07-01
To evaluate the efficacy of salivary progesterone and cervical length measurement in predicting preterm birth (PTB). This prospective observational study included 240 pregnant women at gestational age (GA) 26-34 weeks, classified into two equal groups: group 1, at high risk for PTB (women with symptoms of uterine contractions or a history of one or more spontaneous preterm deliveries or second-trimester abortions), and group 2, controls. There was a highly significant difference between the two study groups regarding GA at delivery (31.3 ± 3.75 in the high-risk group versus 38.5 ± 1.3 in controls), cervical length measured by transvaginal ultrasound (24.7 ± 8.6 versus 40.1 ± 4.67), and salivary progesterone level (728.9 ± 222.3 versus 1099.9 ± 189.4; p < 0.001). There was a statistically significant difference between levels of salivary progesterone at different GAs in the high-risk group (p = 0.035) but not in the low-risk group (p = 0.492). Cervical length measurement showed a sensitivity of 71.5% with 100% specificity, 100% PPV, 69.97% NPV and an accuracy of 83%, while salivary progesterone showed a sensitivity of 84% with 90% specificity, 89.8% PPV, 85.9% NPV and an accuracy of 92.2%. Both salivary progesterone and cervical length measurements are good predictors of PTB.
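All of the reported screening statistics derive from a single 2x2 confusion table. A sketch with invented counts, chosen only to illustrate that a test with zero false positives necessarily has 100% specificity and 100% PPV:

```python
# Hedged sketch: sensitivity, specificity, PPV, NPV and accuracy from a 2x2
# confusion table (tp, fp, fn, tn). The counts below are illustrative only,
# not the study's data.

def screening_stats(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# A perfectly specific test (no false positives) has PPV = 1.0, as with the
# cervical-length result reported in the abstract:
stats = screening_stats(tp=50, fp=0, fn=20, tn=50)
print(stats)
```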
NASA Astrophysics Data System (ADS)
Gao, Chunfeng; Wei, Guo; Wang, Qi; Xiong, Zhenyu; Wang, Qun; Long, Xingwu
2016-10-01
As an indispensable piece of equipment in inertial technology testing, the three-axis turntable is widely used in the calibration of various types of inertial navigation systems (INS). In order to ensure the calibration accuracy of the INS, the initial state of the turntable must be measured accurately. However, the traditional measurement method requires a good deal of external equipment (such as a level instrument, north seeker, autocollimator, etc.), and the test process is complex and inefficient. It is therefore relatively difficult for inertial measurement equipment manufacturers to realize self-inspection of the turntable. Owing to the high-precision attitude information provided by a laser gyro strapdown inertial navigation system (SINS) after fine alignment, it can be used as the attitude reference for the initial-state measurement of the three-axis turntable. Based on the principle that a fixed rotation vector increment is unaffected by the measurement point, we use the laser gyro INS and the turntable encoder to obtain the attitude of the turntable mounting plate. In this way, high-accuracy measurement of the perpendicularity error and initial attitude of the three-axis turntable is achieved.
[THE VERIFICATION OF ANALYTICAL CHARACTERISTICS OF THREE MODELS OF GLUCOMETERS].
Timofeev, A V; Khaibulina, E T; Mamonov, R A; Gorst, K A
2016-01-01
The individual portable systems for control of the glucose level in blood, commonly known as glucometers, permit patients with diabetes mellitus to independently correct their pharmaceutical therapy. The effectiveness of this correction depends on the accuracy of glucose-level control. An evaluation was carried out of the minimal admissible accuracy and the clinical accuracy of glucose-level control of the devices Contour TC, Satellite Express and One Touch Select, according to the standards set out in GOST 15197-2011 and the international standard ISO 15197-2013. It is demonstrated that Contour TC and One Touch Select meet the requirements of these standards with respect to accuracy, while Satellite Express does not.
NASA Astrophysics Data System (ADS)
Schuldt, T.; Gohlke, M.; Kögel, H.; Spannagel, R.; Peters, A.; Johann, U.; Weise, D.; Braxmaier, C.
2012-05-01
A high-sensitivity heterodyne interferometer implementing differential wavefront sensing for tilt measurement was developed over the last few years. With this setup, using an aluminium breadboard and compact optical mounts with a beam height of 2 cm, noise levels below 5 pm Hz⁻¹/² in translation and below 10 nrad Hz⁻¹/² in tilt measurement, both for frequencies above 10⁻² Hz, have been demonstrated. Here, a new, compact and ruggedized interferometer setup utilizing a baseplate made of Zerodur, a thermally and mechanically highly stable glass ceramic with a coefficient of thermal expansion (CTE) of 2 × 10⁻⁸ K⁻¹, is presented. The optical components are fixed to the baseplate using a specifically developed, easy-to-handle, assembly-integration technology based on a space-qualified two-component epoxy. While developed as a prototype for future applications aboard satellite space missions (such as Laser Interferometer Space Antenna), the interferometer is used in laboratory experiments for dilatometry and surface metrology. A first dilatometer setup with a demonstrated accuracy of 10⁻⁷ K⁻¹ in CTE measurement was realized. As it was seen that the accuracy is limited by the dimensional stability of the sample tube support, a new setup was developed utilizing Zerodur as structural material for the sample tube support. In another activity, the interferometer is used for characterization of high-quality mirror surfaces at the picometre level and for high-accuracy two-dimensional surface characterization in a prototype for industrial applications. In this paper, the corresponding designs, their realizations and first measurements of both applications in dilatometry and surface metrology are presented.
Bertoux, Maxime; de Souza, Leonardo Cruz; O'Callaghan, Claire; Greve, Andrea; Sarazin, Marie; Dubois, Bruno; Hornberger, Michael
2016-01-01
Relative sparing of episodic memory is a diagnostic criterion of behavioral variant frontotemporal dementia (bvFTD). However, increasing evidence suggests that bvFTD patients can show episodic memory deficits at a similar level as Alzheimer's disease (AD). Social cognition tasks have been proposed to distinguish bvFTD, but no study to date has explored the utility of such tasks for the diagnosis of amnestic bvFTD. Here, we contrasted social cognition performance of amnestic and non-amnestic bvFTD from AD, with a subgroup having confirmed in vivo pathology markers. Ninety-six participants (38 bvFTD and 28 AD patients as well as 30 controls) performed the short Social-cognition and Emotional Assessment (mini-SEA). BvFTD patients were divided into amnestic versus non-amnestic presentation using the validated Free and Cued Selective Reminding Test (FCSRT) assessing episodic memory. As expected, the accuracy of the FCSRT to distinguish the overall bvFTD group from AD was low (69.7%), with ∼50% of bvFTD patients being amnestic. By contrast, the diagnostic accuracy of the mini-SEA was high (87.9%). When bvFTD patients were split on the level of amnesia, mini-SEA diagnostic accuracy remained high (85.1%) for amnestic bvFTD versus AD and increased to very high (93.9%) for non-amnestic bvFTD versus AD. Social cognition deficits can distinguish bvFTD and AD regardless of amnesia to a high degree and provide a simple way to distinguish both diseases at presentation. These findings have clear implications for the diagnostic criteria of bvFTD. They suggest that the emphasis should be on social cognition deficits, with episodic memory deficits not being a helpful diagnostic criterion in bvFTD.
Bourier, Felix; Hessling, Gabriele; Ammar-Busch, Sonia; Kottmaier, Marc; Buiatti, Alessandra; Grebmer, Christian; Telishevska, Marta; Semmler, Verena; Lennerz, Carsten; Schneider, Christine; Kolb, Christof; Deisenhofer, Isabel; Reents, Tilko
2016-03-01
Contact-force (CF) sensing catheters are increasingly used in clinical electrophysiological practice due to their efficacy and safety profile. As data about the accuracy of this technology are scarce, we sought to quantify accuracy based on in vitro experiments. A custom-made force sensor was constructed that allowed exact force reference measurements registered via a flexible membrane. A Smarttouch Surround Flow (ST SF) ablation catheter (Biosense Webster, Diamond Bar, CA, USA) was brought in contact with the membrane of the force sensor in order to compare the ST SF force measurements to force sensor reference measurements. ST SF force sensing technology is based on deflection registration between the distal and proximal catheter tip. The experiment was repeated for n = 10 ST SF catheters, which showed no significant difference in accuracy levels. A series of measurements (n = 1200) was carried out for different angles of force acting on the catheter tip (0°/perpendicular contact, 30°, 60°, 90°/parallel contact). The mean absolute differences between reference and ST SF measurements were 1.7 ± 1.8 g (0°), 1.6 ± 1.2 g (30°), 1.4 ± 1.3 g (60°), and 6.6 ± 5.9 g (90°). Measurement accuracy was significantly higher in non-parallel contact when compared with parallel contact (P < 0.01). Catheter force measurements using the ST SF catheters show a high level of accuracy regarding differences to reference measurements and reproducibility. The reduced accuracy in measurements of 90° acting forces (parallel contact) might be clinically important when creating, for example, linear lesions. © 2015 Wiley Periodicals, Inc.
Torres-Dowdall, J.; Farmer, A.H.; Bucher, E.H.; Rye, R.O.; Landis, G.
2009-01-01
Stable isotope analyses have revolutionized the study of migratory connectivity. However, as with all tools, their limitations must be understood in order to derive the maximum benefit of a particular application. The goal of this study was to evaluate the efficacy of stable isotopes of C, N, H, O and S for assigning known-origin feathers to the molting sites of migrant shorebird species wintering and breeding in Argentina. Specific objectives were to: 1) compare the efficacy of the technique for studying shorebird species with different migration patterns, life histories and habitat-use patterns; 2) evaluate the grouping of species with similar migration and habitat use patterns in a single analysis to potentially improve prediction accuracy; and 3) evaluate the potential gains in prediction accuracy that might be achieved from using multiple stable isotopes. The efficacy of stable isotope ratios to determine origin was found to vary with species. While one species (White-rumped Sandpiper, Calidris fuscicollis) had high levels of accuracy assigning samples to known origin (91% of samples correctly assigned), another (Collared Plover, Charadrius collaris) showed low levels of accuracy (52% of samples correctly assigned). Intra-individual variability may account for this difference in efficacy. The prediction model for three species with similar migration and habitat-use patterns performed poorly compared with the model for just one of the species (71% versus 91% of samples correctly assigned). Thus, combining multiple sympatric species may not improve model prediction accuracy. Increasing the number of stable isotopes in the analyses increased the accuracy of assigning shorebirds to their molting origin, but the best combination - involving a subset of all the isotopes analyzed - varied among species.
Brain-Computer Interface Based on Generation of Visual Images
Bobrov, Pavel; Frolov, Alexander; Cantor, Charles; Fedulova, Irina; Bakhnyan, Mikhail; Zhavoronkov, Alexander
2011-01-01
This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining two types of pictures, faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications, allowing large-scale EEG experiments to be conducted in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using the ActiCap. The control experiment showed that utilization of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy in this problem similar to that of a more sophisticated Multi-class Common Spatial Patterns (MCSP) classifier. PMID:21695206
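A Bayesian classifier based on covariance matrix analysis of the kind named above can be sketched as a zero-mean Gaussian model per mental task, where a segment is assigned to the class whose covariance gives it the highest log-likelihood. The function names and synthetic data below are illustrative; the study's actual preprocessing is not reproduced:

```python
import numpy as np

def fit_class_covariances(segments_by_class):
    """Estimate one spatial covariance matrix per mental task from
    zero-mean EEG segments of shape (n_channels, n_samples)."""
    covs = {}
    for label, segments in segments_by_class.items():
        pooled = np.hstack(segments)                  # concatenate samples
        covs[label] = pooled @ pooled.T / pooled.shape[1]
    return covs

def classify(segment, covs):
    """Return the class whose zero-mean Gaussian model assigns the
    highest log-likelihood to the segment."""
    n = segment.shape[1]
    best, best_ll = None, -np.inf
    for label, C in covs.items():
        _, logdet = np.linalg.slogdet(C)
        # log-likelihood up to a constant: -(n*log|C| + tr(C^-1 X X^T))/2
        ll = -0.5 * (n * logdet
                     + np.trace(np.linalg.solve(C, segment @ segment.T)))
        if ll > best_ll:
            best, best_ll = label, ll
    return best
```

The appeal of this scheme is that training reduces to estimating one covariance matrix per class, which is cheap compared with iterative spatial-filter optimization.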
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lieu, Richard
A hierarchy of statistics of increasing sophistication and accuracy is proposed to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware; rather, it operates at the software level with the help of high-precision computers to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number, the better the performance). The principal application is accuracy improvement in the signal-limited bolometric flux measurement of a radio source.
NASA Technical Reports Server (NTRS)
Mulligan, P. J.; Gervin, J. C.; Lu, Y. C.
1985-01-01
An area bordering the Eastern Shore of the Chesapeake Bay was selected for study and classified using unsupervised techniques applied to LANDSAT-2 MSS data and several band combinations of LANDSAT-4 TM data. The accuracies of these Level I land cover classifications were verified using the Taylor's Island USGS 7.5-minute topographic map, which was photointerpreted, digitized and rasterized. For the Taylor's Island map, comparing the MSS and TM three-band (2, 3, 4) classifications, the increased resolution of TM produced a small improvement in overall accuracy of 1% correct, due primarily to small improvements of 1% and 3% in categories such as water and woodland. This was expected, as the MSS data typically produce high accuracies for categories that cover large contiguous areas. However, in the categories covering smaller areas within the map there was generally an improvement of at least 10%. Classification of the important residential category improved 12%, and wetlands were mapped with 11% greater accuracy.
Elly E. Holcombe; Duane G. Moore; Richard L. Fredriksen
1986-01-01
A modification of the macro-Kjeldahl method that provides increased sensitivity was developed for determining very low levels of nitrogen in forest streams and in rainwater. The method is suitable as a routine laboratory procedure. The analytical range of the method is 0.02 to 1.5 mg/L with high recovery and excellent precision and accuracy. The range can be increased to...
Evaluation of Low-Cost, Centimeter-Level Accuracy OEM GNSS Receivers
DOT National Transportation Integrated Search
2018-02-02
This report discusses the results of a study to quantify the performance of low-cost, centimeter-level accurate Global Navigation Satellite Systems (GNSS) receivers that have appeared on the market in the last few years. Centimeter-level accuracy is ...
Cleveland, M A; Hickey, J M
2013-08-01
Genomic selection can be implemented in pig breeding at a reduced cost using genotype imputation. Accuracy of imputation and the impact on resulting genomic breeding values (gEBV) was investigated. High-density genotype data was available for 4,763 animals from a single pig line. Three low-density genotype panels were constructed with SNP densities of 450 (L450), 3,071 (L3k) and 5,963 (L6k). Accuracy of imputation was determined using 184 test individuals with no genotyped descendants in the data but with parents and grandparents genotyped using the Illumina PorcineSNP60 Beadchip. Alternative genotyping scenarios were created in which parents, grandparents, and individuals that were not direct ancestors of test animals (Other) were genotyped at high density (S1), grandparents were not genotyped (S2), dams and granddams were not genotyped (S3), and dams and granddams were genotyped at low density (S4). Four additional scenarios were created by excluding Other animal genotypes. Test individuals were always genotyped at low density. Imputation was performed with AlphaImpute. Genomic breeding values were calculated using the single-step genomic evaluation. Test animals were evaluated for the information retained in the gEBV, calculated as the correlation between gEBV using imputed genotypes and gEBV using true genotypes. Accuracy of imputation was high for all scenarios but decreased with fewer SNP on the low-density panel (0.995 to 0.965 for S1) and with reduced genotyping of ancestors, where the largest changes were for L450 (0.965 in S1 to 0.914 in S3). Exclusion of genotypes for Other animals resulted in only small accuracy decreases. Imputation accuracy was not consistent across the genome. Information retained in the gEBV was related to genotyping scenario and thus to imputation accuracy. Reducing the number of SNP on the low-density panel reduced the information retained in the gEBV, with the largest decrease observed from L3k to L450. 
Excluding Other animal genotypes had little impact on imputation accuracy but caused large decreases in the information retained in the gEBV. These results indicate that accuracy of gEBV from imputed genotypes depends on the level of genotyping in close relatives and the size of the genotyped dataset. Fewer high-density genotyped individuals are needed to obtain accurate imputation than are needed to obtain accurate gEBV. Strategies to optimize development of low-density panels can improve both imputation and gEBV accuracy.
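Both statistics used in the study above, imputation accuracy and the information retained in the gEBV, are plain Pearson correlations between true and imputed quantities. A minimal sketch with illustrative arrays (not the study's data):

```python
import numpy as np

def imputation_accuracy(true_genotypes, imputed_genotypes):
    """Correlation between true and imputed allele counts (0/1/2),
    a common measure of genotype imputation accuracy."""
    return np.corrcoef(true_genotypes, imputed_genotypes)[0, 1]

def information_retained(gebv_true, gebv_imputed):
    """Correlation between gEBV computed from true genotypes and gEBV
    computed from imputed genotypes (the 'information retained')."""
    return np.corrcoef(gebv_true, gebv_imputed)[0, 1]
```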
A real-time spectral mapper as an emerging diagnostic technology in biomedical sciences.
Epitropou, George; Kavvadias, Vassilis; Iliou, Dimitris; Stathopoulos, Efstathios; Balas, Costas
2013-01-01
Real-time spectral imaging and mapping at video rates can have tremendous impact not only on diagnostic sciences but also on fundamental physiological problems. We report the first real-time spectral mapper based on the combination of snap-shot spectral imaging and spectral estimation algorithms. Performance evaluation revealed that six-band imaging combined with the Wiener algorithm provided high estimation accuracy, with error levels lying within the experimental noise. This high accuracy is accompanied by spectral mapping that is faster by 3 orders of magnitude than scanning spectral systems. This new technology is intended to enable spectral mapping at nearly video rates in all kinds of dynamic bio-optical effects, as well as in applications where the target-probe relative position changes rapidly and randomly.
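Wiener spectral estimation of the kind named above reconstructs a full spectrum from a handful of band responses through a single precomputed matrix. A generic sketch of the estimator; the matrices below are placeholders, not the paper's calibrated values:

```python
import numpy as np

def wiener_estimator(A, R_s, R_n):
    """Wiener estimation matrix W for a linear imaging model c = A r + n.
    A:   (bands, wavelengths) spectral sensitivities of the camera
    R_s: (wavelengths, wavelengths) prior autocorrelation of spectra
    R_n: (bands, bands) noise covariance
    The spectrum estimate is then r_hat = W @ c."""
    return R_s @ A.T @ np.linalg.inv(A @ R_s @ A.T + R_n)
```

Because W is computed once from calibration data, each pixel's spectrum costs only one matrix-vector product at run time, which is what makes video-rate mapping feasible.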
Marsic, Damien; Méndez-Gómez, Héctor R; Zolotukhin, Sergei
2015-01-01
Biodistribution analysis is a key step in the evaluation of adeno-associated virus (AAV) capsid variants, whether natural isolates or produced by rational design or directed evolution. Indeed, when screening candidate vectors, accurate knowledge about which tissues are infected and how efficiently is essential. We describe the design, validation, and application of a new vector, pTR-UF50-BC, encoding a bioluminescent protein, a fluorescent protein and a DNA barcode, which can be used to visualize localization of transduction at the organism, organ, tissue, or cellular levels. In addition, by linking capsid variants to different barcoded versions of the vector and amplifying the barcode region from various tissue samples using barcoded primers, biodistribution of viral genomes can be analyzed with high accuracy and efficiency.
Plume interference with space shuttle range safety signals
NASA Technical Reports Server (NTRS)
Boynton, F. P.; Rajaseknar, P. S.
1979-01-01
The computational procedure for signal propagation in the presence of an exhaust plume is presented. Comparisons with well-known analytic diffraction solutions indicate that accuracy suffers when mesh spacing is inadequate to resolve the first unobstructed Fresnel zone at the plume edge. Revisions to the procedure to improve its accuracy without requiring very large arrays are discussed. Comparisons to field measurements during a shuttle solid rocket motor (SRM) test firing suggest that the plume is sharper edged than one would expect on the basis of time averaged electron density calculations. The effects, both of revisions to the computational procedure and of allowing for a sharper plume edge, are to raise the signal level near tail aspect. The attenuation levels then predicted are still high enough to be of concern near SRM burnout for northerly launches of the space shuttle.
Fulford, Janice M.; Clayton, Christopher S.
2015-10-09
The calibration device and proposed method were used to calibrate a sample of in-service USGS steel and electric groundwater tapes. The sample of in-service groundwater steel tapes were in relatively good condition. All steel tapes, except one, were accurate to ±0.01 ft per 100 ft over their entire length. One steel tape, which had obvious damage in the first hundred feet, was marginally outside the accuracy of ±0.01 ft per 100 ft by 0.001 ft. The sample of in-service groundwater-level electric tapes were in a range of conditions—from like new, with cosmetic damage, to nonfunctional. The in-service electric tapes did not meet the USGS accuracy recommendation of ±0.01 ft. In-service electric tapes, except for the nonfunctional tape, were accurate to about ±0.03 ft per 100 ft. A comparison of new with in-service electric tapes found that steel-core electric tapes maintained their length and accuracy better than electric tapes without a steel core. The in-service steel tapes could be used as is and achieve USGS accuracy recommendations for groundwater-level measurements. The in-service electric tapes require tape corrections to achieve USGS accuracy recommendations for groundwater-level measurement.
NASA Astrophysics Data System (ADS)
Green, K. N.; van Alstine, R. L.
This paper presents the current performance levels of the SDG-5 gyro, a high performance two-axis dynamically tuned gyro, and the DRIRU II redundant inertial reference unit relating to stabilization and pointing applications. Also presented is a discussion of a product improvement program aimed at further noise reductions to meet the demanding requirements of future space defense applications.
Drung, D; Krause, C; Becker, U; Scherer, H; Ahlers, F J
2015-02-01
An ultrastable low-noise current amplifier (ULCA) is presented. The ULCA is a non-cryogenic instrument based on specially designed operational amplifiers and resistor networks. It involves two stages, the first providing a 1000-fold current gain and the second performing a current-to-voltage conversion via an internal 1 MΩ reference resistor or, optionally, an external standard resistor. The ULCA's transfer coefficient is highly stable versus time, temperature, and current amplitude within the full dynamic range of ±5 nA. The low noise level of 2.4 fA/√Hz helps to keep averaging times short at small input currents. A cryogenic current comparator is used to calibrate both input current gain and output transresistance, providing traceability to the quantum Hall effect. Within one week after calibration, the uncertainty contribution from short-term fluctuations and drift of the transresistance is about 0.1 parts per million (ppm). The long-term drift is typically 5 ppm/yr. A high-accuracy variant is available that shows improved stability of the input gain at the expense of a higher noise level of 7.5 fA/√Hz. The ULCA also allows the traceable generation of small electric currents or the calibration of high-ohmic resistors.
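The two-stage signal chain described in the abstract multiplies out to a nominal transresistance of 1 GV/A, so the ±5 nA full-scale input maps to a ±5 V output. A back-of-the-envelope sketch using only the nominal values stated above:

```python
# Nominal ULCA signal chain (values from the abstract):
current_gain = 1000          # first stage: 1000-fold current gain
r_ref = 1e6                  # second stage: 1 Mohm reference resistor
transresistance = current_gain * r_ref   # total conversion: 1e9 V/A

def output_voltage(input_current_a):
    """Nominal ULCA output voltage for a given input current (amperes)."""
    return input_current_a * transresistance
```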
Design on wireless auto-measurement system for lead rail straightness measurement based on PSD
NASA Astrophysics Data System (ADS)
Yan, Xiugang; Zhang, Shuqin; Dong, Dengfeng; Cheng, Zhi; Wu, Guanghua; Wang, Jie; Zhou, Weihu
2016-10-01
Straightness detection is not only one of the key technologies for the product quality and installation accuracy of all types of lead rail, but also an important dimensional measurement technology. The straightness measuring devices now available suffer from a low level of automation, limitations imposed by the measuring environment, and low measurement efficiency. In this paper, a wireless measurement system for straightness detection based on a position sensitive detector (PSD) is proposed. The system offers a high level of automation, convenience, high measurement efficiency, and easy porting and extension, and can detect the straightness of a lead rail in real time.
Head-Disk Interface Technology: Challenges and Approaches
NASA Astrophysics Data System (ADS)
Liu, Bo
Magnetic hard disk drive (HDD) technology is believed to be one of the most successful examples of modern mechatronic systems. The mechanical elegance of the magnetic HDD includes simple but extremely accurate head-positioning technology, high-speed and high-stability spindle motor technology, and head-disk interface technology that keeps the millimeter-sized slider flying over the disk surface at nanometer-level slider-disk spacing. This paper addresses the challenges, and possible approaches, in further reducing the slider-disk spacing while retaining the stability and robustness of head-disk systems for future advanced magnetic disk drives.
Laboratory and field tests of the Sutron RLR-0003-1 water level sensor
Fulford, Janice M.; Bryars, R. Scott
2015-01-01
Three Sutron RLR-0003-1 water level sensors were tested in laboratory conditions to evaluate the accuracy of the sensor over the manufacturer’s specified operating temperature and distance-to-water ranges. The sensor was also tested for compliance to SDI-12 communication protocol and in field conditions at a U.S. Geological Survey (USGS) streamgaging site. Laboratory results were compared to the manufacturer’s accuracy specification for water level and to the USGS Office of Surface Water (OSW) policy requirement that water level sensors have a measurement uncertainty of no more than 0.01 foot or 0.20 percent of the indicated reading. Except for one sensor, the differences for the temperature testing were within 0.05 foot and the average measurements for the sensors were within the manufacturer’s accuracy specification. Two of the three sensors were within the manufacturer’s specified accuracy and met the USGS accuracy requirements for the laboratory distance to water testing. Three units passed a basic SDI-12 communication compliance test. Water level measurements made by the Sutron RLR-0003-1 during field testing agreed well with those made by the bubbler system and a Design Analysis Associates (DAA) H3613 radar, and they met the USGS accuracy requirements when compared to the wire-weight gage readings.
A portable blood plasma clot micro-elastometry device based on resonant acoustic spectroscopy
NASA Astrophysics Data System (ADS)
Krebs, C. R.; Li, Ling; Wolberg, Alisa S.; Oldenburg, Amy L.
2015-07-01
Abnormal blood clot stiffness is an important indicator of coagulation disorders arising from a variety of cardiovascular diseases and drug treatments. Here, we present a portable instrument for elastometry of microliter volume blood samples based upon the principle of resonant acoustic spectroscopy, where a sample of well-defined dimensions exhibits a fundamental longitudinal resonance mode proportional to the square root of the Young's modulus. In contrast to commercial thromboelastography, the resonant acoustic method offers improved repeatability and accuracy due to the high signal-to-noise ratio of the resonant vibration. We review the measurement principles and the design of a magnetically actuated microbead force transducer applying between 23 pN and 6.7 nN, providing a wide dynamic range of elastic moduli (3 Pa-27 kPa) appropriate for measurement of clot elastic modulus (CEM). An automated and portable device, the CEMport, is introduced and implemented using a 2 nm resolution displacement sensor with demonstrated accuracy and precision of 3% and 2%, respectively, of CEM in biogels. Importantly, the small strains (<0.13%) and low strain rates (<1/s) employed by the CEMport maintain a linear stress-to-strain relationship which provides a perturbative measurement of the Young's modulus. Measurements of blood plasma CEM versus heparin concentration show that CEMport is sensitive to heparin levels below 0.050 U/ml, which suggests future applications in sensing heparin levels of post-surgical cardiopulmonary bypass patients. The portability, high accuracy, and high precision of this device enable new clinical and animal studies for associating CEM with blood coagulation disorders, potentially leading to improved diagnostics and therapeutic monitoring.
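Since the fundamental longitudinal resonance frequency scales with the square root of the Young's modulus, a reference sample of known stiffness calibrates the conversion from measured frequency to modulus. A minimal sketch; the numbers are illustrative, not the CEMport's calibration values:

```python
def young_modulus_from_resonance(f_hz, f_ref_hz, e_ref_pa):
    """Resonant acoustic spectroscopy relation: f is proportional to
    sqrt(E) for a sample of fixed dimensions, so a reference sample
    (f_ref, E_ref) gives E = E_ref * (f / f_ref)**2."""
    return e_ref_pa * (f_hz / f_ref_hz) ** 2

# Illustrative: a clot resonating at twice the reference frequency
# is four times stiffer than the reference gel.
```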
NASA Astrophysics Data System (ADS)
Wijesingha, J. S. J.; Deshapriya, N. L.; Samarakoon, L.
2015-04-01
Billions of people in the world depend on rice as a staple food and as an income-generating crop. Asia is the leader in rice cultivation and it is necessary to maintain an up-to-date rice-related database to ensure food security as well as economic development. This study investigates general applicability of high temporal resolution Moderate Resolution Imaging Spectroradiometer (MODIS) 250m gridded vegetation product for monitoring rice crop growth, mapping rice crop acreage and analyzing crop yield, at the province-level. The MODIS 250m Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) time series data, field data and crop calendar information were utilized in this research in Sa Kaeo Province, Thailand. The following methodology was used: (1) data pre-processing and rice plant growth analysis using Vegetation Indices (VI) (2) extraction of rice acreage and start-of-season dates from VI time series data (3) accuracy assessment, and (4) yield analysis with MODIS VI. The results show a direct relationship between rice plant height and MODIS VI. The crop calendar information and the smoothed NDVI time series with Whittaker Smoother gave high rice acreage estimation (with 86% area accuracy and 75% classification accuracy). Point level yield analysis showed that the MODIS EVI is highly correlated with rice yield and yield prediction using maximum EVI in the rice cycle predicted yield with an average prediction error 4.2%. This study shows the immense potential of MODIS gridded vegetation product for keeping an up-to-date Geographic Information System of rice cultivation.
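The Whittaker smoother used to clean the NDVI time series above penalizes squared second differences of the fitted curve. A dense-matrix sketch, adequate for short annual MODIS series; the smoothing parameter `lam` is a tuning choice, not the study's value:

```python
import numpy as np

def whittaker_smooth(y, lam=100.0):
    """Whittaker smoother: minimize |y - z|^2 + lam * |D2 @ z|^2,
    where D2 is the second-difference operator.  The minimizer solves
    (I + lam * D2'D2) z = y."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)      # (n-2, n) second differences
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2,
                           np.asarray(y, dtype=float))
```

The roughness penalty leaves constant and linear trends untouched while pulling down isolated spikes, which is why it suits noisy vegetation-index series.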
Hsieh, Shulan; Li, Tzu-Hsien; Tsai, Ling-Ling
2010-04-01
To examine whether monetary incentives attenuate the negative effects of sleep deprivation on cognitive performance in a flanker task that requires higher-level cognitive-control processes, including error monitoring. Twenty-four healthy adults aged 18 to 23 years were randomly divided into 2 subject groups: one received and the other did not receive monetary incentives for performance accuracy. Both subject groups performed a flanker task and underwent electroencephalographic recordings for event-related brain potentials after normal sleep and after 1 night of total sleep deprivation in a within-subject, counterbalanced, repeated-measures study design. Monetary incentives significantly improved response accuracy and reaction time variability under both normal sleep and sleep-deprived conditions, and they reduced the effects of sleep deprivation on the subjective effort level, the amplitude of the error-related negativity (an error-related event-related potential component), and the latency of the P300 (an event-related potential variable related to attention processes). However, monetary incentives could not attenuate the effects of sleep deprivation on any measures of behavioral performance, such as response accuracy, reaction time variability, or post-error accuracy adjustments; nor could they reduce the effects of sleep deprivation on the amplitude of the Pe, another error-related event-related potential component. This study shows that monetary incentives selectively reduce the effects of total sleep deprivation on some brain activities, but they cannot attenuate the effects of sleep deprivation on performance decrements in tasks that require high-level cognitive-control processes. Thus, monetary incentives and sleep deprivation may act through both common and different mechanisms to affect cognitive performance.
Overlay accuracy on a flexible web with a roll printing process based on a roll-to-roll system.
Chang, Jaehyuk; Lee, Sunggun; Lee, Ki Beom; Lee, Seungjun; Cho, Young Tae; Seo, Jungwoo; Lee, Sukwon; Jo, Gugrae; Lee, Ki-yong; Kong, Hyang-Shik; Kwon, Sin
2015-05-01
For high-quality flexible devices from printing processes based on Roll-to-Roll (R2R) systems, overlay alignment during the patterning of each functional layer poses a major challenge. This is because flexible substrates have a relatively low stiffness compared with rigid substrates and are easily deformed during web handling in the R2R system. To achieve a high overlay accuracy for a flexible substrate, it is important not only to develop web handling modules (such as web guiding, tension control, winding, and unwinding) and a precise printing tool but also to control the synchronization of each unit in the total system. An R2R web handling system and reverse offset printing process were developed in this work, and an overlay between the 1st and 2nd layers of ±5 μm on a 500 mm-wide film was achieved at a σ level of 2.4 and 2.8 (x and y directions, respectively) in a continuous R2R printing process. This paper presents the components and mechanisms used in reverse offset printing based on an R2R system and the printing results, including positioning accuracy and overlay alignment accuracy.
NASA Astrophysics Data System (ADS)
Hatzenbuhler, Chelsea; Kelly, John R.; Martinson, John; Okum, Sara; Pilgrim, Erik
2017-04-01
High-throughput DNA metabarcoding has gained recognition as a potentially powerful tool for biomonitoring, including early detection of aquatic invasive species (AIS). DNA-based techniques are advancing, but our understanding of the limits to detection for metabarcoding complex samples is inadequate. For detecting AIS at an early stage of invasion when the species is rare, accuracy at low detection limits is key. To evaluate the utility of metabarcoding in future fish community monitoring programs, we conducted several experiments to determine the sensitivity and accuracy of routine metabarcoding methods. Experimental mixes used larval fish tissue from multiple “common” species spiked with varying proportions of tissue from an additional “rare” species. Pyrosequencing of the genetic marker COI (cytochrome c oxidase subunit I) and subsequent sequence data analysis provided experimental evidence of low-level detection of the target “rare” species at biomass percentages as low as 0.02% of total sample biomass. Limits to detection varied interspecifically and were susceptible to amplification bias. Moreover, results showed some data processing methods can skew sequence-based biodiversity measurements away from corresponding relative biomass abundances and increase false absences. We suggest caution in interpreting presence/absence and relative abundance in larval fish assemblages until metabarcoding methods are optimized for accuracy and precision.
SIPSim: A Modeling Toolkit to Predict Accuracy and Aid Design of DNA-SIP Experiments.
Youngblut, Nicholas D; Barnett, Samuel E; Buckley, Daniel H
2018-01-01
DNA Stable isotope probing (DNA-SIP) is a powerful method that links identity to function within microbial communities. The combination of DNA-SIP with multiplexed high throughput DNA sequencing enables simultaneous mapping of in situ assimilation dynamics for thousands of microbial taxonomic units. Hence, high throughput sequencing enabled SIP has enormous potential to reveal patterns of carbon and nitrogen exchange within microbial food webs. There are several different methods for analyzing DNA-SIP data and despite the power of SIP experiments, it remains difficult to comprehensively evaluate method accuracy across a wide range of experimental parameters. We have developed a toolset (SIPSim) that simulates DNA-SIP data, and we use this toolset to systematically evaluate different methods for analyzing DNA-SIP data. Specifically, we employ SIPSim to evaluate the effects that key experimental parameters (e.g., level of isotopic enrichment, number of labeled taxa, relative abundance of labeled taxa, community richness, community evenness, and beta-diversity) have on the specificity, sensitivity, and balanced accuracy (defined as the product of specificity and sensitivity) of DNA-SIP analyses. Furthermore, SIPSim can predict analytical accuracy and power as a function of experimental design and community characteristics, and thus should be of great use in the design and interpretation of DNA-SIP experiments.
High-accuracy deep-UV Ramsey-comb spectroscopy in krypton
NASA Astrophysics Data System (ADS)
Galtier, Sandrine; Altmann, Robert K.; Dreissen, Laura S.; Eikema, Kjeld S. E.
2017-01-01
In this paper, we present a detailed account of the first precision Ramsey-comb spectroscopy in the deep UV. We excite krypton in an atomic beam using pairs of frequency-comb laser pulses that have been amplified to the millijoule level and upconverted through frequency doubling in BBO crystals. The resulting phase-coherent deep-UV pulses at 212.55 nm are used in the Ramsey-comb method to excite the two-photon 4p^6 → 4p^5 5p [1/2 ]_0 transition. For the {}^{84}Kr isotope, we find a transition frequency of 2829833101679(103) kHz. The fractional accuracy of 3.7 × 10^{-11} is 34 times better than previous measurements, and also the isotope shifts are measured with improved accuracy. This demonstration shows the potential of Ramsey-comb excitation for precision spectroscopy at short wavelengths.
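As a quick consistency check, the quoted fractional accuracy follows directly from the reported transition frequency and its 1σ uncertainty; the ratio works out to about 3.6 × 10⁻¹¹, agreeing with the quoted 3.7 × 10⁻¹¹ up to rounding.

```python
# Values as reported in the abstract, both in kHz
f_khz = 2_829_833_101_679   # 4p^6 -> 4p^5 5p [1/2]_0 transition frequency
u_khz = 103                 # 1-sigma uncertainty
frac = u_khz / f_khz        # fractional accuracy, ~3.6e-11
```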
NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy comparable to that of high-level multireference configuration interaction methods, yet they are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorensen, J; Duran, C; Stingo, F
Purpose: To characterize the effect of virtual monochromatic reconstructions on several commonly used texture analysis features in DECT of the chest. Further, to assess the effect of monochromatic energy levels on the ability of these textural features to identify tissue types. Methods: 20 consecutive patients underwent chest CTs for evaluation of lung nodules using Siemens Somatom Definition Flash DECT. Virtual monochromatic images were constructed at 10 keV intervals from 40–190 keV. For each patient, an ROI delineated the lesion under investigation, and cylindrical ROIs were placed within 5 different healthy tissues (blood, fat, muscle, lung, and liver). Several histogram- and Grey Level Co-occurrence Matrix (GLCM)-based texture features were then evaluated in each ROI at each energy level. As a means of validation, these feature values were then used in a random forest classifier to attempt to identify the tissue types present within each ROI. Their predictive accuracy at each energy level was recorded. Results: All textural features changed considerably with virtual monochromatic energy, particularly below 70 keV. Most features exhibited a global minimum or maximum around 80 keV, and while feature values changed with energy above this, patient ranking was generally unaffected. As expected, blood demonstrated the lowest inter-patient variability for all features, while lung lesions (encompassing many different pathologies) exhibited the highest. The accuracy of these features in identifying tissues (76% accuracy) was highest at 80 keV, but no clear relationship between energy and classification accuracy was found. Two common misclassifications (blood vs liver and muscle vs fat) accounted for the majority (24 of 28) of the errors observed. Conclusion: All textural features were highly dependent on virtual monochromatic energy level, especially below 80 keV, and were more stable above this energy. However, in a random forest model, these commonly used features were able to reliably differentiate between most tissue types regardless of energy level. Dr Godoy has received a dual-energy CT research grant from Siemens Healthcare. That grant did not directly fund this research.
NASA Technical Reports Server (NTRS)
Chesters, D.; Uccellini, L.; Robinson, W.
1982-01-01
A series of high-resolution water vapor fields were derived from the 11 and 12 micron channels of the VISSR Atmospheric Sounder (VAS) on GOES-5. The low-level tropospheric moisture content was separated from the surface and atmospheric radiances by using the differential absorption across the 'split window' along with the average air temperature from embedded radiosondes. Fields of precipitable water are presented in a time sequence of five false color images taken over the United States at 3-hour intervals. Vivid subsynoptic and mesoscale patterns evolve at 15 km horizontal resolution over the 12-hour observing period. Convective cloud formations develop from several areas of enhanced low-level water vapor, especially where the vertical water vapor gradient was relatively strong. Independent verification at radiosonde sites indicates fairly good absolute accuracy, and the spatial and temporal continuity of the water vapor features indicates very good relative accuracy. Residual errors are dominated by radiometer noise and unresolved clouds.
Tan, Xiao Wei; Zheng, Qishi; Shi, Luming; Gao, Fei; Allen, John Carson; Coenen, Adriaan; Baumann, Stefan; Schoepf, U Joseph; Kassab, Ghassan S; Lim, Soo Teik; Wong, Aaron Sung Lung; Tan, Jack Wei Chieh; Yeo, Khung Keong; Chin, Chee Tang; Ho, Kay Woon; Tan, Swee Yaw; Chua, Terrance Siang Jin; Chan, Edwin Shih Yen; Tan, Ru San; Zhong, Liang
2017-06-01
To evaluate the combined diagnostic accuracy of coronary computed tomography angiography (CCTA) and computed tomography-derived fractional flow reserve (FFRct) in patients with suspected or known coronary artery disease (CAD). PubMed, the Cochrane Library, Embase and OpenGray were searched to identify studies comparing diagnostic accuracy of CCTA and FFRct. Diagnostic test measurements of FFRct were either extracted directly from the published papers or calculated from provided information. Bivariate models were conducted to synthesize the diagnostic performance of combined CCTA and FFRct at both "per-vessel" and "per-patient" levels. Seven articles were included for analysis. The combined diagnostic outcomes from the "both positive" strategy, i.e. a subject was considered as "positive" only when both CCTA and FFRct were "positive", demonstrated relatively high specificity (per-vessel: 0.91; per-patient: 0.81), high positive likelihood ratio (LR+, per-vessel: 7.93; per-patient: 4.26), high negative likelihood ratio (LR-, per-vessel: 0.30; per-patient: 0.24) and high accuracy (per-vessel: 0.91; per-patient: 0.81), while the "either positive" strategy, i.e. a subject was considered as "positive" when either CCTA or FFRct was "positive", demonstrated relatively high sensitivity (per-vessel: 0.97; per-patient: 0.98), low LR+ (per-vessel: 1.50; per-patient: 1.17), low LR- (per-vessel: 0.07; per-patient: 0.09) and low accuracy (per-vessel: 0.57; per-patient: 0.54). The "both positive" strategy showed better diagnostic performance to rule in patients with non-significant stenosis compared to the "either positive" strategy, as it efficiently reduces the proportion of false-positive subjects. Copyright © 2017 Elsevier B.V. All rights reserved.
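The "both positive" (AND) and "either positive" (OR) rules trade sensitivity against specificity in opposite directions. Under a simplifying conditional-independence assumption, which the meta-analysis itself does not make (it pools observed 2×2 data), the combined operating characteristics follow standard formulas; the single-test values below are hypothetical, chosen only to show the direction of the trade-off.

```python
def serial_and(sens1, spec1, sens2, spec2):
    """'Both positive' rule: call positive only if both tests are positive.
    Under conditional independence, sensitivity falls and specificity rises."""
    return sens1 * sens2, 1.0 - (1.0 - spec1) * (1.0 - spec2)

def parallel_or(sens1, spec1, sens2, spec2):
    """'Either positive' rule: call positive if at least one test is positive."""
    return 1.0 - (1.0 - sens1) * (1.0 - sens2), spec1 * spec2

# Hypothetical single-test sensitivities/specificities for illustration
s_and, sp_and = serial_and(0.90, 0.60, 0.85, 0.75)
s_or, sp_or = parallel_or(0.90, 0.60, 0.85, 0.75)
```

The AND rule can only lower sensitivity and raise specificity relative to either test alone, and the OR rule does the opposite, which mirrors the direction of the pooled results reported in the abstract.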
Accuracy and Measurement Error of the Medial Clear Space of the Ankle.
Metitiri, Ogheneochuko; Ghorbanhoseini, Mohammad; Zurakowski, David; Hochman, Mary G; Nazarian, Ara; Kwon, John Y
2017-04-01
Measurement of the medial clear space (MCS) is commonly used to assess deltoid ligament competency and mortise stability when managing ankle fractures. Because the true anatomic width being measured was unknown, previous studies have been unable to assess measurement accuracy. The purpose of this study was to determine MCS measurement error and accuracy and any influencing factors. In 3 normal transtibial ankle cadaver specimens, the deltoid and syndesmotic ligaments were transected and the mortise widened and affixed at a width of 6 mm (specimen 1) and 4 mm (specimen 2). The mortise was left intact in specimen 3. Radiographs were obtained of each cadaver at varying degrees of rotation. Radiographs were randomized, and providers measured the MCS using a standardized technique. Both accuracy and precision in measurement of the medial clear space relative to a known anatomic value were lacking for all 3 specimens tested. There were no significant differences in mean delta with regard to level of training for specimens 1 and 2; however, with specimen 3, staff physicians showed increased measurement accuracy compared with trainees. Accuracy and precision of MCS measurements are poor. Provider experience did not appear to influence accuracy and precision of measurements for the displaced mortise. This high degree of measurement error and lack of precision should be considered when deciding treatment options based on MCS measurements.
Zheng, Dandan; Todor, Dorin A
2011-01-01
In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements with simple and practical implementation to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection), based on the gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In the water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm for the conventional method. In the gel phantom (more realistic and tissue-like), our method maintained its level of accuracy while the uncertainty of the conventional method was 3.4 mm on average with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Effects of changes in size, speed and distance on the perception of curved 3D trajectories
Zhang, Junjun; Braunstein, Myron L.; Andersen, George J.
2012-01-01
Previous research on the perception of 3D object motion has considered time to collision, time to passage, collision detection and judgments of speed and direction of motion, but has not directly studied the perception of the overall shape of the motion path. We examined the perception of the magnitude of curvature and sign of curvature of the motion path for objects moving at eye level in a horizontal plane parallel to the line of sight. We considered two sources of information for the perception of motion trajectories: changes in angular size and changes in angular speed. Three experiments examined judgments of relative curvature for objects moving at different distances. At the closest distance studied, accuracy was high with size information alone but near chance with speed information alone. At the greatest distance, accuracy with size information alone decreased sharply but accuracy for displays with both size and speed information remained high. We found similar results in two experiments with judgments of sign of curvature. Accuracy was higher for displays with both size and speed information than with size information alone, even when the speed information was based on parallel projections and was not informative about sign of curvature. For both magnitude of curvature and sign of curvature judgments, information indicating that the trajectory was curved increased accuracy, even when this information was not directly relevant to the required judgment. PMID:23007204
Comparing Features for Classification of MEG Responses to Motor Imagery.
Halme, Hanna-Leena; Parkkonen, Lauri
2016-01-01
Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. 
Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system.
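The evaluation scheme described above (single-trial features scored by 5-fold cross-validation) can be sketched with a toy nearest-class-mean classifier on synthetic "band-power" features. The feature dimensions, class shift, and classifier are assumptions for illustration only, not the CSP/SSD pipelines or the classifiers compared in the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for single-trial band-power features (two MI classes)
n, d = 200, 8
X = rng.normal(size=(n, d))
y = np.repeat([0, 1], n // 2)
X[y == 1, :2] += 2.0          # class-dependent power shift in two "channels"

def nearest_mean_cv(X, y, k=5):
    """k-fold cross-validated accuracy of a nearest-class-mean classifier."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        means = np.stack([X[train][y[train] == c].mean(0) for c in (0, 1)])
        d2 = ((X[f][:, None, :] - means[None]) ** 2).sum(-1)
        accs.append((d2.argmin(1) == y[f]).mean())
    return float(np.mean(accs))

acc = nearest_mean_cv(X, y)
```

Averaging accuracy over held-out folds, as here, is what guards against the optimistic bias of scoring a classifier on its own training trials.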
2011-01-01
Background: When a specimen belongs to a species not yet represented in DNA barcode reference libraries, there is disagreement over the effectiveness of using sequence comparisons to assign the query accurately to a higher taxon. Library completeness and the assignment criteria used have been proposed as critical factors affecting the accuracy of such assignments but have not been thoroughly investigated. We explored the accuracy of assignments to genus, tribe and subfamily in the Sphingidae, using the almost complete global DNA barcode reference library (1095 species) available for this family. Costa Rican sphingids (118 species), a well-documented, diverse subset of the family with each of the tribes and subfamilies represented, were used as queries. We simulated libraries with different levels of completeness (10-100% of the available species), and recorded assignments (positive or ambiguous) and their accuracy (true or false) under six criteria. Results: A liberal tree-based criterion assigned 83% of queries accurately to genus, 74% to tribe and 90% to subfamily, compared to a strict tree-based criterion, which assigned 75% of queries accurately to genus, 66% to tribe and 84% to subfamily, with a library containing 100% of available species (but excluding the species of the query). The greater number of true positives delivered by more relaxed criteria was negatively balanced by the occurrence of more false positives. This effect was most sharply observed with libraries of the lowest completeness where, for example at the genus level, 32% of assignments were false positives with the liberal criterion versus < 1% with the strict criterion. We observed little difference (< 8% using the liberal criterion), however, in the overall accuracy of the assignments between the lowest and highest levels of library completeness at the tribe and subfamily level.
Conclusions: Our results suggest that when using a strict tree-based criterion for higher taxon assignment with DNA barcodes, the likelihood of assigning a query a genus name incorrectly is very low, if a genus name is provided it has a high likelihood of being accurate, and if no genus match is available the query can nevertheless be assigned to a subfamily with high accuracy regardless of library completeness. DNA barcoding often correctly assigned sphingid moths to higher taxa when species matches were unavailable, suggesting that barcode reference libraries can be useful for higher taxon assignments long before they achieve complete species coverage. PMID:21806794
Incorporating Duration Information in Activity Recognition
NASA Astrophysics Data System (ADS)
Chaurasia, Priyanka; Scotney, Bryan; McClean, Sally; Zhang, Shuai; Nugent, Chris
Activity recognition has become a key issue in smart home environments. The problem involves learning high-level activities from low-level sensor data. Activity recognition can depend on several variables; one such variable is duration of engagement with sensorised items or duration of intervals between sensor activations, which can provide useful information about personal behaviour. In this paper a probabilistic learning algorithm is proposed that incorporates episode, time and duration information to determine inhabitant identity and the activity being undertaken from low-level sensor data. Our results verify that incorporating duration information consistently improves the accuracy.
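One simple way to see how duration information can sharpen recognition is a toy naive-Bayes model that multiplies a sensor-evidence likelihood by a Gaussian duration likelihood. The activities, sensors, and parameters below are hypothetical illustrations, not the paper's actual algorithm or data.

```python
import math

# Hypothetical model: P(activity | sensor, duration)
#   proportional to P(sensor | activity) * P(duration | activity) * P(activity)
priors = {"cooking": 0.5, "washing": 0.5}
sensor_lik = {"cooking": {"stove": 0.8, "tap": 0.2},
              "washing": {"stove": 0.1, "tap": 0.9}}
dur_params = {"cooking": (600.0, 200.0),   # (mean, sd) duration in seconds
              "washing": (60.0, 30.0)}

def gauss(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior(sensor, duration):
    scores = {a: priors[a] * sensor_lik[a][sensor] * gauss(duration, *dur_params[a])
              for a in priors}
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}

p = posterior("tap", 45.0)
```

With the tap sensor alone both activities remain somewhat plausible, but a 45 s duration is far more consistent with washing than with a typical 10-minute cooking episode, so the posterior shifts decisively toward washing.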
Shah, Sohil Atul
2017-01-01
Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank. PMID:28851838
Inventory and analysis of rangeland resources of the state land block on Parker Mountain, Utah
NASA Technical Reports Server (NTRS)
Jaynes, R. A. (Principal Investigator)
1983-01-01
High altitude color infrared (CIR) photography was interpreted to provide an 1:24,000 overlay to U.S.G.S. topographic maps. The inventory and analysis of rangeland resources was augmented by the digital analysis of LANDSAT MSS data. Available geology, soils, and precipitation maps were used to sort out areas of confusion on the CIR photography. The map overlay from photo interpretation was also prepared with reference to print maps developed from LANDSAT MSS data. The resulting map overlay has a high degree of interpretive and spatial accuracy. An unacceptable level of confusion between the several sagebrush types in the MSS mapping was largely corrected by introducing ancillary data. Boundaries from geology, soils, and precipitation maps, as well as field observations, were digitized and pixel classes were adjusted according to the location of pixels with particular spectral signatures with respect to such boundaries. The resulting map, with six major cover classes, has an overall accuracy of 89%. Overall accuracy was 74% when these six classes were expanded to 20 classes.
NASA Astrophysics Data System (ADS)
Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude
2010-02-01
Diabetic retinopathy (DR) is a chronic eye disease for which early detection is essential to avoid severe outcomes. Image processing of retinal images has emerged as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, a significant technique for detecting abnormality in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are a widely preferred artificial intelligence technique since they yield superior classification accuracy. In this work, a Radial Basis Function (RBF) neural network-based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis with a probabilistic classifier, namely the Bayesian classifier, shows the superior nature of the neural classifier. Experiments show promising results for the neural classifier in terms of these performance measures.
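A bi-level RBF classifier of the kind described (a hidden layer of Gaussian units followed by a linear readout) can be sketched in a few lines. The synthetic 2-D features, random center selection, and fixed width below are illustrative assumptions, not the paper's retinal-image features or training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for retinal image features: two well-separated classes
X0 = rng.normal([0, 0], 0.5, size=(100, 2))
X1 = rng.normal([2, 2], 0.5, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Level 1: Gaussian hidden units centered on randomly chosen training points
centers = X[rng.choice(len(X), 20, replace=False)]
width = 1.0
def rbf(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Level 2: linear readout weights fitted by least squares
H = rbf(X)
w, *_ = np.linalg.lstsq(H, y.astype(float), rcond=None)
pred = (rbf(X) @ w > 0.5).astype(int)
acc = (pred == y).mean()
```

The "bi-level" structure is visible in the two fitting stages: the RBF centers and width define the hidden layer, and only the output weights are solved for, here by ordinary least squares.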
Absolute flux density calibrations of radio sources: 2.3 GHz
NASA Technical Reports Server (NTRS)
Freiley, A. J.; Batelaan, P. D.; Bathker, D. A.
1977-01-01
A detailed description of a NASA/JPL Deep Space Network program to improve S-band gain calibrations of large aperture antennas is reported. The program is considered unique in at least three ways. First, absolute gain calibrations of high-quality suppressed-sidelobe dual-mode horns provide a high-accuracy foundation for the program. Second, a very careful transfer calibration technique using an artificial far-field coherent-wave source was used to accurately obtain the gain of one large (26 m) aperture. Third, using the calibrated large aperture directly, the absolute flux density of five selected galactic and extragalactic natural radio sources was determined with an absolute accuracy better than 2 percent, quoted at the familiar 1-sigma confidence level. The follow-on considerations for applying these results to an operational network of ground antennas are discussed. It is concluded that absolute gain accuracies within ±0.30 to 0.40 dB are possible, depending primarily on the repeatability (scatter) of the field data from Deep Space Network user stations.
Teachers' Judgement Accuracy Concerning CEFR Levels of Prospective University Students
ERIC Educational Resources Information Center
Fleckenstein, Johanna; Leucht, Michael; Köller, Olaf
2018-01-01
Most English-medium programs at European universities require prospective students to take standardised tests for English as a foreign language (EFL) to be admitted. However, there are contexts in which individual teachers' judgements serve the same function, thus having high-stakes consequences for the higher education entrance of their students.…
Geophysica MTP observations during the EUPLEX campaign
NASA Technical Reports Server (NTRS)
Mahoney, M. J.; Gary, Bruce
2003-01-01
The Jet Propulsion Laboratory (JPL) Microwave Temperature Profiler (MTP) was the first United States instrument to fly on the Russian Geophysica high-altitude research aircraft. Careful comparison of MTP measurements with radiosondes launched near the Geophysica flight track has allowed us to establish the flight level temperature to an accuracy of 0.2K.
A study of noise metric and tone correction accuracy
NASA Technical Reports Server (NTRS)
Sullivan, B. M.; Mabry, J. E.
1982-01-01
Methods currently used to measure human response to aircraft flyover noise were investigated. Response to high-level aircraft noise usually experienced outdoors was obtained. Response to aircraft flyover noise typical of indoor exposure was also investigated. It was concluded that current methods for evaluating response to aircraft flyover noise are more accurate for outdoor noise.
DOT National Transportation Integrated Search
2017-11-15
Microsimulation modeling is a tool used by practitioners and researchers to predict and evaluate the flow of traffic on real transportation networks. These models are used in practice to inform decisions and thus must reflect a high level of accuracy...
NASA Technical Reports Server (NTRS)
Pluhowski, E. J. (Principal Investigator)
1977-01-01
The author has identified the following significant results. Land use data derived from high altitude photography and satellite imagery were studied for 49 basins in Delaware and eastern Maryland and Virginia. Applying multiple regression techniques to a network of gaging stations monitoring runoff from 39 of the basins demonstrated that land use data from high altitude photography provided an effective means of significantly improving estimates of stream flow. Forty stream flow characteristic equations incorporating remotely sensed land use information were compared with a control set of equations using map-derived land cover. Significant improvement was detected in six equations where level 1 data were added and in five equations where level 2 information was utilized. Only four equations were improved significantly using land use data derived from LANDSAT imagery. Significant losses in accuracy due to the use of remotely sensed land use information were detected only in estimates of flood peaks, probably because of land cover changes associated with temporal differences among the primary land use data sources.
Decoding the direction of imagined visual motion using 7 T ultra-high field fMRI
Emmerling, Thomas C.; Zimmermann, Jan; Sorger, Bettina; Frost, Martin A.; Goebel, Rainer
2016-01-01
There is a long-standing debate about the neurocognitive implementation of mental imagery. One form of mental imagery is the imagery of visual motion, which is of interest due to its naturalistic and dynamic character. However, so far only the mere occurrence rather than the specific content of motion imagery was shown to be detectable. In the current study, the application of multi-voxel pattern analysis to high-resolution functional data of 12 subjects acquired with ultra-high field 7 T functional magnetic resonance imaging allowed us to show that imagery of visual motion can indeed activate the earliest levels of the visual hierarchy, but the extent thereof varies highly between subjects. Our approach enabled classification not only of complex imagery, but also of its actual contents, in that the direction of imagined motion out of four options was successfully identified in two thirds of the subjects and with accuracies of up to 91.3% in individual subjects. A searchlight analysis confirmed the local origin of decodable information in striate and extra-striate cortex. These high-accuracy findings not only shed new light on a central question in vision science on the constituents of mental imagery, but also show for the first time that the specific sub-categorical content of visual motion imagery is reliably decodable from brain imaging data on a single-subject level. PMID:26481673
Torres, Jorge; James, Andrew R.; Alimi, Marjan; Tsiouris, Apostolos John; Geannette, Christian; Härtl, Roger
2012-01-01
Purpose The aim of this study was to assess the impact of 3-D navigation on pedicle screw placement accuracy in minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF). Methods A retrospective review of 52 patients who had MIS-TLIF assisted with 3-D navigation is presented. Clinical outcomes were assessed with the Oswestry Disability Index (ODI), Visual Analog Scales (VAS), and MacNab scores. Radiographic outcomes were assessed using X-rays and thin-slice computed tomography. Results The mean age was 56.5 years, and 172 screws were implanted with 16 pedicle breaches (91.0% accuracy rate). The radiographic fusion rate at a mean follow-up of 15.6 months was 87.23%. No revision surgeries were required. The mean improvement in VAS back pain, VAS leg pain, and ODI at 11.3 months follow-up was 4.3, 4.5, and 26.8 points, respectively. At last follow-up the mean postoperative disc height gain was 4.92 mm and the mean postoperative disc angle gain was 2.79 degrees. At the L5–S1 level, there was a significant correlation between a greater disc space height gain and a lower VAS leg score. Conclusion Our data support that application of 3-D navigation in MIS-TLIF is associated with a high level of accuracy in pedicle screw placement. PMID:24353961
NASA Astrophysics Data System (ADS)
Ertmer, David Joseph
1994-01-01
The effectiveness of vowel production training which incorporated direct instruction in combination with spectrographic models and feedback was assessed for two children who exhibited profound hearing impairment. A multiple-baseline design across behaviors, with replication across subjects, was implemented to determine whether vowel production accuracy improved following the introduction of treatment. Listener judgments of vowel correctness were obtained during the baseline, training, and follow-up phases of the study. Data were analyzed through visual inspection of changes in levels of accuracy, changes in trends of accuracy, and changes in variability of accuracy within and across phases. One subject showed significant improvement on all three trained vowel targets; the second subject improved on the first trained target only (Kolmogorov-Smirnov Two Sample Test). Performance trends during training sessions suggest that continued treatment would have resulted in further improvement for both subjects. Vowel duration, fundamental frequency, and the frequency locations of the first and second formants were measured before and after training. Acoustic analysis revealed highly individualized changes in the frequency locations of F1 and F2. Vowels which received the most training were maintained at higher levels than those which were introduced later in training. Some generalization of practiced vowel targets to untrained words was observed in both subjects. A bias toward judging productions as "correct" was observed for both subjects during self-evaluation tasks using spectrographic feedback.
Wong, Wang I
2017-06-01
Spatial abilities are pertinent to mathematical competence, but evidence of the space-math link has largely been confined to older samples and intrinsic spatial abilities (e.g., mental transformation). The roles of gender and affective factors are also unclear. This study examined the correlations between counting ability, mental transformation, and targeting accuracy in 182 Hong Kong preschoolers, and whether these relationships were weaker at higher spatial anxiety levels. Both spatial abilities correlated with counting similarly for boys and girls. Targeting accuracy also mediated the male advantage in counting. Interestingly, spatial anxiety moderated the space-math links, but differently for boys and girls. For boys, spatial abilities were irrelevant to counting at high anxiety levels; for girls, the effect of anxiety on the space-math link is less clear. Results extend the evidence base of the space-math link to include an extrinsic spatial ability (targeting accuracy) and have implications for intervention programmes. Statement of contribution What is already known on this subject? Much evidence of a space-math link in adolescent and adult samples and for intrinsic spatial abilities. What does this study add? Extended the space-math link to include both intrinsic and extrinsic spatial abilities in a preschool sample. Showed how spatial anxiety moderated the space-math link differently for boys and girls. © 2016 The British Psychological Society.
Zewdie, Getie A.; Cox, Dennis D.; Neely Atkinson, E.; Cantor, Scott B.; MacAulay, Calum; Davies, Kalatu; Adewole, Isaac; Buys, Timon P. H.; Follen, Michele
2012-01-01
Optical spectroscopy has been proposed as an accurate and low-cost alternative for detection of cervical intraepithelial neoplasia. We previously published an algorithm using optical spectroscopy as an adjunct to colposcopy and found good accuracy (sensitivity=1.00 [95% confidence interval (CI)=0.92 to 1.00], specificity=0.71 [95% CI=0.62 to 0.79]). Those results used measurements taken by expert colposcopists as well as the colposcopy diagnosis. In this study, we trained and tested an algorithm for the detection of cervical intraepithelial neoplasia (i.e., identifying those patients who had a histology reading of CIN 2 or worse) that did not include the colposcopic diagnosis. Furthermore, we explored the interaction between spectroscopy and colposcopy, examining the importance of probe placement expertise. The colposcopic diagnosis-independent spectroscopy algorithm had a sensitivity of 0.98 (95% CI=0.89 to 1.00) and a specificity of 0.62 (95% CI=0.52 to 0.71). The difference in the partial area under the ROC curves between spectroscopy with and without the colposcopic diagnosis was statistically significant at the patient level (p=0.05) but not the site level (p=0.13). The results suggest that the device has high accuracy over a wide range of provider accuracy and hence could plausibly be implemented by providers with limited training. PMID:22559693
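The sensitivity and specificity above are quoted with 95% confidence intervals. A common choice for such binomial intervals is the Wilson score interval; the abstract does not state which interval method the authors used, so this sketch is illustrative only:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

Unlike the naive normal approximation, the Wilson interval behaves sensibly at proportions near 0 or 1, which matters when sensitivity is reported as 1.00.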
Oasis: A high-level/high-performance open source Navier-Stokes solver
NASA Astrophysics Data System (ADS)
Mortensen, Mikael; Valen-Sendstad, Kristian
2015-03-01
Oasis is a high-level/high-performance finite element Navier-Stokes solver written from scratch in Python using building blocks from the FEniCS project (fenicsproject.org). The solver is unstructured and targets large-scale applications in complex geometries on massively parallel clusters. Oasis utilizes MPI and interfaces, through FEniCS, to the linear algebra backend PETSc. Oasis advocates a high-level, programmable user interface through the creation of highly flexible Python modules for new problems. Through the high-level Python interface the user is placed in complete control of every aspect of the solver. A version of the solver that uses piecewise linear elements for both velocity and pressure is shown to reproduce very well the classical spectral turbulent channel simulations of Moser et al. (1999). The computational speed is strongly dominated by the iterative solvers provided by the linear algebra backend, which is arguably the best performance any similar implicit solver using PETSc may hope for. Higher order accuracy is also demonstrated and new solvers may be easily added within the same framework.
Stream Splitting in Support of Intrusion Detection
2003-06-01
increased. Every computer on the Internet has no need to see the traffic of every other computer on the Internet. Indeed if this was so, nothing would get ... distinguishes the stream splitter from other network analysis tools. B. HIGH LEVEL DESIGN To get the desired level of performance, a multi-threaded ... of greater concern than added accuracy of a Bayesian model. This is a case where close is good enough. b. PassiveSensors Though similar to active
Semantic classification of business images
NASA Astrophysics Data System (ADS)
Erol, Berna; Hull, Jonathan J.
2006-01-01
Digital cameras are becoming increasingly common for capturing information in business settings. In this paper, we describe a novel method for classifying images into the following semantic classes: document, whiteboard, business card, slide, and regular images. Our method is based on combining low-level image features, such as text color, layout, and handwriting features with high-level OCR output analysis. Several Support Vector Machine Classifiers are combined for multi-class classification of input images. The system yields 95% accuracy in classification.
Taxonomic resolutions based on 18S rRNA genes: a case study of subclass Copepoda.
Wu, Shu; Xiong, Jie; Yu, Yuhe
2015-01-01
Biodiversity studies are commonly conducted using 18S rRNA genes. In this study, we compared the inter-species divergence of variable regions (V1-9) within the copepod 18S rRNA gene, and tested their taxonomic resolutions at different taxonomic levels. Our results indicate that the 18S rRNA gene is a good molecular marker for the study of copepod biodiversity, and our conclusions are as follows: 1) 18S rRNA genes are highly conserved intra-species (intra-species similarities are close to 100%); and could aid in species-level analyses, but with some limitations; 2) nearly-whole-length sequences and some partial regions (around V2, V4, and V9) of the 18S rRNA gene can be used to discriminate between samples at both the family and order levels (with a success rate of about 80%); 3) compared with other regions, V9 has a higher resolution at the genus level (with an identification success rate of about 80%); and 4) V7 is most divergent in length, and would be a good candidate marker for the phylogenetic study of Acartia species. This study also evaluated the correlation between similarity thresholds and the accuracy of using nuclear 18S rRNA genes for the classification of organisms in the subclass Copepoda. We suggest that sample identification accuracy should be considered when a molecular sequence divergence threshold is used for taxonomic identification, and that the lowest similarity threshold should be determined based on a pre-designated level of acceptable accuracy.
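The closing point about similarity thresholds can be illustrated with a toy pairwise-identity assignment over pre-aligned sequences; the sequences, species names, and the 0.97 threshold below are hypothetical, not values from the study:

```python
def percent_identity(a, b):
    """Fraction of matching positions between two aligned sequences of equal length."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(x == y for x, y in zip(a, b)) / len(a)

def assign(query, references, threshold=0.97):
    """Return the best-matching reference name, or None when the best
    similarity falls below the pre-designated threshold."""
    name, best = max(((n, percent_identity(query, s)) for n, s in references.items()),
                     key=lambda t: t[1])
    return name if best >= threshold else None
```

The study's suggestion amounts to choosing `threshold` so that assignments above it reach a pre-designated level of identification accuracy, rather than picking a threshold in isolation.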
NASA Astrophysics Data System (ADS)
Cong, Xiaoying; Balss, Ulrich; Eineder, Michael
2015-04-01
The atmospheric delay due to vertical stratification, the so-called stratified atmospheric delay, has a great impact on both interferometric and absolute range measurements. In our current research [1][2][3], centimeter-range accuracy has been proven on Corner Reflector (CR) based measurements by applying atmospheric delay correction using Zenith Path Delay (ZPD) corrections derived from nearby Global Positioning System (GPS) stations. For global usage, an effective method has been introduced to estimate the stratified delay from global 4-dimensional Numerical Weather Prediction (NWP) products: the direct integration method [4][5]. Two products provided by the European Centre for Medium-Range Weather Forecasts (ECMWF), ERA-Interim and operational data, are used to integrate the stratified delay. In order to assess the integration accuracy, a validation approach is investigated based on ZPD derived from six permanent GPS stations located in different meteorological conditions. Range accuracy at the centimeter level is demonstrated using both ECMWF products. Further experiments have been carried out to determine the best interpolation method by analyzing the temporal and spatial correlation of atmospheric delay using both ECMWF and GPS ZPD. Finally, the integrated atmospheric delays in the slant direction (Slant Path Delay, SPD) have been applied instead of the GPS ZPD for CR experiments at three different test sites with more than 200 TerraSAR-X High Resolution SpotLight (HRSL) images. The delay accuracy is around 1-3 cm, depending on the location of the test site, the local water vapor variation, and the acquisition time/date.
[1] Eineder M., Minet C., Steigenberger P., et al. Imaging geodesy - Toward centimeter-level ranging accuracy with TerraSAR-X. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(2): 661-671.
[2] Balss U., Gisinger C., Cong X. Y., et al. Precise Measurements on the Absolute Localization Accuracy of TerraSAR-X on the Base of Far-Distributed Test Sites. EUSAR 2014; 10th European Conference on Synthetic Aperture Radar; Proceedings. VDE, 2014: 1-4.
[3] Eineder M., Balss U., Gisinger C., et al. TerraSAR-X pixel localization accuracy: Approaching the centimeter level. Geoscience and Remote Sensing Symposium (IGARSS), 2014 IEEE International. IEEE, 2014: 2669-2670.
[4] Cong X., Balss U., Eineder M., et al. Imaging Geodesy - Centimeter-Level Ranging Accuracy With TerraSAR-X: An Update. IEEE Geoscience and Remote Sensing Letters, 2012, 9(5): 948-952.
[5] Cong X. SAR Interferometry for Volcano Monitoring: 3D-PSI Analysis and Mitigation of Atmospheric Refractivity. Dissertation, Technische Universität München, 2014.
Mavandadi, Sam; Feng, Steve; Yu, Frank; Dimitrov, Stoyan; Nielsen-Saines, Karin; Prescott, William R; Ozcan, Aydogan
2012-01-01
We propose a methodology for digitally fusing diagnostic decisions made by multiple medical experts in order to improve accuracy of diagnosis. Toward this goal, we report an experimental study involving nine experts, where each one was given more than 8,000 digital microscopic images of individual human red blood cells and asked to identify malaria infected cells. The results of this experiment reveal that even highly trained medical experts are not always self-consistent in their diagnostic decisions and that there exists a fair level of disagreement among experts, even for binary decisions (i.e., infected vs. uninfected). To tackle this general medical diagnosis problem, we propose a probabilistic algorithm to fuse the decisions made by trained medical experts to robustly achieve higher levels of accuracy when compared to individual experts making such decisions. By modelling the decisions of experts as a three component mixture model and solving for the underlying parameters using the Expectation Maximisation algorithm, we demonstrate the efficacy of our approach which significantly improves the overall diagnostic accuracy of malaria infected cells. Additionally, we present a mathematical framework for performing 'slide-level' diagnosis by using individual 'cell-level' diagnosis data, shedding more light on the statistical rules that should govern the routine practice in examination of e.g., thin blood smear samples. This framework could be generalized for various other tele-pathology needs, and can be used by trained experts within an efficient tele-medicine platform.
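A sketch of the kind of EM-based fusion described above, using a simple two-class latent-truth model with per-expert sensitivity and specificity (a Dawid-Skene-style variant; this is not the authors' exact three-component mixture, and the simulated data in the usage example are illustrative):

```python
import numpy as np

def em_fuse(labels, n_iter=50):
    """Fuse binary expert decisions (n_items x n_experts, values 0/1) with EM:
    a latent true label per item, with per-expert sensitivity and specificity
    as parameters. Returns the posterior P(true label = 1) for each item."""
    labels = np.asarray(labels, dtype=float)
    q = labels.mean(axis=1)                       # init posterior: majority vote
    for _ in range(n_iter):
        # M-step: prevalence and per-expert reliabilities from soft counts
        p = np.clip(q.mean(), 1e-6, 1 - 1e-6)
        sens = np.clip((q[:, None] * labels).sum(0) / q.sum(), 1e-6, 1 - 1e-6)
        spec = np.clip(((1 - q)[:, None] * (1 - labels)).sum(0) / (1 - q).sum(),
                       1e-6, 1 - 1e-6)
        # E-step: posterior over the latent true label of each item
        log1 = np.log(p) + (labels * np.log(sens)
                            + (1 - labels) * np.log(1 - sens)).sum(1)
        log0 = np.log(1 - p) + ((1 - labels) * np.log(spec)
                                + labels * np.log(1 - spec)).sum(1)
        q = 1.0 / (1.0 + np.exp(log0 - log1))
    return q
```

With several imperfect experts, the fused posterior typically outperforms the average individual expert, which is the effect the study reports.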
Liu, Xiao-jing; Li, Qian-qian; Pang, Yuan-jie; Tian, Kai-yue; Xie, Zheng; Li, Zi-li
2015-06-01
As computer-assisted surgical design becomes increasingly popular in maxillofacial surgery, recording patients' natural head position (NHP) and reproducing it in the virtual environment are vital for preoperative design and postoperative evaluation. Our objective was to test the repeatability and accuracy of recording NHP using a multicamera system and a laser level. A laser level was used to project a horizontal reference line on a physical model, and a 3-dimensional image was obtained using a multicamera system. In surgical simulation software, the recorded NHP was reproduced in the virtual head position by registering the coordinate axes with the horizontal reference on both the frontal and lateral views. The repeatability and accuracy of the method were assessed using a gyroscopic procedure as the gold standard. The interclass correlation coefficients for pitch and roll were 0.982 (0.966, 0.991) and 0.995 (0.992, 0.998), respectively, indicating a high degree of repeatability. Regarding accuracy, the lack of agreement in orientation between the new method and the gold standard was within the ranges for pitch (-0.69°, 1.71°) and for roll (-0.92°, 1.20°); these have no clinical significance. This method of recording and reproducing NHP with a multicamera system and a laser level is repeatable, accurate, and clinically feasible. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Error-proofing test system of industrial components based on image processing
NASA Astrophysics Data System (ADS)
Huang, Ying; Huang, Tao
2018-05-01
As modern industrial precision requirements rise, conventional manual inspection fails to satisfy enterprise test standards, so digital image processing techniques should be used to gather and analyze information on the surface of industrial components for testing. To test the installation parts of an automotive engine, this paper employs a camera to capture images of the components. After these images are preprocessed, including denoising, an image processing algorithm relying on flood fill is used to test the installation of the components. The results show that this system has very high test accuracy.
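The flood fill step mentioned above can be sketched as a standard iterative 4-connected fill over a label grid; the grid values and the function signature are illustrative, not the paper's implementation:

```python
from collections import deque

def flood_fill(grid, start, new_val):
    """Iterative 4-connected flood fill on a 2-D grid of labels, in place."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    old = grid[r0][c0]
    if old == new_val:
        return grid                       # nothing to do; avoids infinite loop
    q = deque([start])
    grid[r0][c0] = new_val
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == old:
                grid[nr][nc] = new_val
                q.append((nr, nc))
    return grid
```

In a component-inspection setting, a fill like this isolates a connected region (e.g., a mounted part) so its area and shape can be compared against the expected installation.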
A time series modeling approach in risk appraisal of violent and sexual recidivism.
Bani-Yaghoub, Majid; Fedoroff, J Paul; Curry, Susan; Amundsen, David E
2010-10-01
For over half a century, various clinical and actuarial methods have been employed to assess the likelihood of violent recidivism. Yet there is a need for new methods that can improve the accuracy of recidivism predictions. This study proposes a new time series modeling approach that generates high levels of predictive accuracy over short and long periods of time. The proposed approach outperformed two widely used actuarial instruments (i.e., the Violence Risk Appraisal Guide and the Sex Offender Risk Appraisal Guide). Furthermore, analysis of temporal risk variations based on specific time series models can add valuable information into risk assessment and management of violent offenders.
Angel, Lucie; Bastin, Christine; Genon, Sarah; Salmon, Eric; Fay, Séverine; Balteau, Evelyne; Maquet, Pierre; Luxen, André; Isingrini, Michel; Collette, Fabienne
2016-01-15
The current experiment aimed to explore age differences in brain activity associated with successful memory retrieval in older adults with different levels of executive functioning, at different levels of task demand. Memory performance and fMRI activity during a recognition task were compared between a young group and two older groups characterized by a low (old-low group) vs. high (old-high group) level of executive functioning. Participants first encoded pictures, presented once (Hard condition) or twice (Easy condition), and then completed a recognition memory task. Old-low adults had poorer memory performance than the two other groups, which did not differ, at both levels of task demand. In the Easy condition, even though older adults demonstrated reduced activity compared to young adults in several regions, they also showed additional activations in the right superior frontal gyrus and right parietal lobule (positively correlated with memory accuracy) for the old-high group, and in the right precuneus (negatively correlated with memory accuracy), right anterior cingulate gyrus, and right supramarginal gyrus for the old-low group. In the Hard condition, some regions were also more activated in the young group than in the older groups. Conversely, old-high participants demonstrated more activity than either the young or the old-low group in the right frontal gyrus, associated with more accurate memory performance, and in the left frontal gyrus. In sum, the present study clearly showed that age differences in the neural correlates of retrieval success were modulated by task difficulty, as suggested by the CRUNCH model, but also by interindividual variability, in particular regarding executive functioning. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Vlasov, V. M.; Novikov, A. N.; Novikov, I. A.; Shevtsova, A. G.
2018-03-01
In highly developed urban agglomerations, one of the main problems is the inability of the road network to cope with a high level of motorization. The introduction of intelligent transport systems helps solve this problem, but the main question in their implementation remains open: to what extent a given method of improving the transport network will be effective, and whether it can absorb vehicle growth, especially over the long term. The main goal of this work was to develop an approach to forecasting the increase in traffic flow intensity over a long-term period using the population and the level of motorization. The developed approach makes it possible to determine the projected population and, taking the level of motorization into account, to determine the growth factor of traffic flow intensity, which allows the intensity to be calculated for a long-term period with high accuracy. The main methods for predicting the characteristics of the traffic stream are analyzed, and the basic values and parameters necessary for their use are established. An urban settlement is analyzed and its characteristic level of motorization is determined. A new approach to predicting traffic flow intensity has been developed, which makes it possible to predict changes in the transport situation in the long term with high accuracy. Calculations of the intensity increase based on the developed forecasting method are made, and the errors in the data obtained are determined. The main recommendations on using the developed forecasting approach for the long-term functioning of the road network are formulated.
Using MaxCompiler for the high level synthesis of trigger algorithms
NASA Astrophysics Data System (ADS)
Summers, S.; Rose, A.; Sanders, P.
2017-02-01
Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java-based tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman filter track-fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves low resource usage and has a latency of 187.5 ns per iteration.
Grip form and graphomotor control in preschool children.
Burton, A W; Dancisak, M J
2000-01-01
The purpose of this study was to examine the utility of the grip scale presented by Schneck and Henderson, the effect of grip form on drawing accuracy, and the effect of implement diameter on grip form and drawing accuracy. Sixty boys and girls who were 3, 4, and 5 years of age performed 20 trials of a precision drawing task, 4 trials each with five implements of varying diameters (4.7, 7.9, 11.1, 14.3, and 17.5 mm). First, all 1,200 grips could be coded according to Schneck and Henderson's 10-grip whole-configuration assessment system, but the interrater reliability was lower than expected (.67 proportion of perfect agreement). Second, using Schneck's five-level scoring system, the level of grip significantly affected drawing accuracy, with the highest grip level used most often with the highest accuracy scores and the lowest observed grip level used most often with the lowest accuracy scores. Third, increasing implement diameter led to significantly lower level grips but did not significantly affect accuracy. Therapists are recommended to use Schneck and Henderson's 10-grip scale only for documenting persons' grips and changes in their grips; if comparisons between individuals are desired, then Schneck's five-level scale, which affords greater generalizability, should be used. Further, children with graphomotor performance deficits are not likely to benefit from grip manipulations, because such strategies were shown to improve only performance that was already good.
Matsuba, Shinji; Tabuchi, Hitoshi; Ohsugi, Hideharu; Enno, Hiroki; Ishitobi, Naofumi; Masumoto, Hiroki; Kiuchi, Yoshiaki
2018-05-09
To predict exudative age-related macular degeneration (AMD), we combined a deep convolutional neural network (DCNN), a machine-learning algorithm, with Optos, an ultra-wide-field fundus imaging system. First, to evaluate the diagnostic accuracy of the DCNN, 364 photographic images (AMD: 137) were amplified, and the area under the curve (AUC), sensitivity, and specificity were examined. Furthermore, to compare the diagnostic abilities of the DCNN and six ophthalmologists, we prepared a set of 84 images comprising 50% normal and 50% wet-AMD data, and calculated the correct answer rate, specificity, sensitivity, and response times. The DCNN exhibited 100% sensitivity and 97.31% specificity for wet-AMD images, with an average AUC of 99.76%. Moreover, comparing the diagnostic abilities of the DCNN versus the six ophthalmologists, the average accuracy of the DCNN was 100%. On the other hand, the accuracy of the ophthalmologists, determined only from Optos images without a fundus examination, was 81.9%. A combination of the DCNN with Optos images is not better than a medical examination; however, it can identify exudative AMD with a high level of accuracy. Our system is considered useful for screening and telemedicine.
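The AUC reported above summarises how well the DCNN's scores rank wet-AMD images above normal ones. For a small score set it can be computed directly via the Mann-Whitney formulation (the scores below are illustrative, not the study's outputs):

```python
def roc_auc(pos_scores, neg_scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outscores a randomly chosen negative (ties half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 99.76% therefore means the network almost always scores a wet-AMD image above a normal one, independent of any single decision threshold.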
Van de Voorde, Tim; Vlaeminck, Jeroen; Canters, Frank
2008-01-01
Urban growth and its related environmental problems call for sustainable urban management policies to safeguard the quality of urban environments. Vegetation plays an important part in this as it provides ecological, social, health and economic benefits to a city's inhabitants. Remotely sensed data are of great value to monitor urban green and despite the clear advantages of contemporary high resolution images, the benefits of medium resolution data should not be discarded. The objective of this research was to estimate fractional vegetation cover from a Landsat ETM+ image with sub-pixel classification, and to compare accuracies obtained with multiple stepwise regression analysis, linear spectral unmixing and multi-layer perceptrons (MLP) at the level of meaningful urban spatial entities. Despite the small, but nevertheless statistically significant differences at pixel level between the alternative approaches, the spatial pattern of vegetation cover and estimation errors is clearly distinctive at neighbourhood level. At this spatially aggregated level, a simple regression model appears to attain sufficient accuracy. For mapping at a spatially more detailed level, the MLP seems to be the most appropriate choice. Brightness normalisation only appeared to affect the linear models, especially the linear spectral unmixing. PMID:27879914
Rastgarpour, Maryam; Shanbehzadeh, Jamshid
2014-01-01
Researchers have recently applied an integrative approach to automate medical image segmentation, combining the benefits of available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area that has received comparatively little attention from this approach, yet it has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm that uses an integrative approach to deal with this problem. It can evolve directly from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM). The controlling parameters of level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of the image. These improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm offers valuable benefits, including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation.
Robinson, Charlotte S; Sharp, Patrick
2012-05-01
Blood glucose monitoring systems (BGMS) are used in the hospital environment to manage blood glucose levels in patients at the bedside. The International Organization for Standardization (ISO) 15197:2003 standard is currently used by regulatory bodies as a minimum requirement for the performance of BGMS, specific to self-testing. There are calls for the tightening of accuracy requirements and implementation of a standard specifically for point-of-care (POC) BGMS. The accuracy of six commonly used BGMS was assessed in a clinical setting, with 108 patients' finger stick capillary samples. Using the accuracy criteria from the existing standard and a range of tightened accuracy criteria, system performance was compared. Other contributors to system performance have been measured, including hematocrit sensitivity and meter error rates encountered in the clinical setting. Five of the six BGMS evaluated met current accuracy criteria within the ISO 15197 standard. Only the Optium Xceed system had >95% of all readings within a tightened criteria of ±12.5% from the reference at glucose levels ≥72 mg/dl (4 mmol/liter) and ±9 mg/dl (0.5 mmol/liter) at glucose levels <72 mg/dl (4 mmol/liter). The Nova StatStrip Xpress had the greatest number of error messages observed; Optium Xceed the least. OneTouch Ultra2, Nova StatStrip Xpress, Accu-Chek Performa, and Contour TS products were all significantly influenced by blood hematocrit levels. From evidence obtained during this clinical evaluation, the Optium Xceed system is most likely to meet future anticipated accuracy standards for POC BGMS. In this clinical study, the results demonstrated the Optium Xceed product to have the highest level of accuracy, to have the lowest occurrence of error messages, and to be least influenced by blood hematocrit levels. © 2012 Diabetes Technology Society.
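The tightened accuracy criteria described above reduce to a simple acceptance test per paired reading. A sketch assuming exactly the thresholds quoted in the abstract (±12.5% of reference at glucose ≥72 mg/dl, ±9 mg/dl below):

```python
def within_tightened_criteria(meter_mgdl, reference_mgdl):
    """True if a meter reading meets the tightened accuracy criteria
    discussed above (thresholds taken from the abstract; units mg/dl)."""
    if reference_mgdl >= 72:
        return abs(meter_mgdl - reference_mgdl) <= 0.125 * reference_mgdl
    return abs(meter_mgdl - reference_mgdl) <= 9
```

A system would pass this tightened standard when more than 95% of its paired readings satisfy the test.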
van der Merwe, Debbie; Van Dyk, Jacob; Healy, Brendan; Zubizarreta, Eduardo; Izewska, Joanna; Mijnheer, Ben; Meghzifene, Ahmed
2017-01-01
Radiotherapy technology continues to advance and the expectation of improved outcomes requires greater accuracy in various radiotherapy steps. Different factors affect the overall accuracy of dose delivery. Institutional comprehensive quality assurance (QA) programs should ensure that uncertainties are maintained at acceptable levels. The International Atomic Energy Agency has recently developed a report summarizing the accuracy achievable and the suggested action levels, for each step in the radiotherapy process. Overview of the report: The report seeks to promote awareness and encourage quantification of uncertainties in order to promote safer and more effective patient treatments. The radiotherapy process and the radiobiological and clinical frameworks that define the need for accuracy are depicted. Factors that influence uncertainty are described for a range of techniques, technologies and systems. Methodologies for determining and combining uncertainties are presented, and strategies for reducing uncertainties through QA programs are suggested. The role of quality audits in providing international benchmarking of achievable accuracy and realistic action levels is also discussed. The report concludes with nine general recommendations: (1) Radiotherapy should be applied as accurately as reasonably achievable, technical and biological factors being taken into account. (2) For consistency in prescribing, reporting and recording, recommendations of the International Commission on Radiation Units and Measurements should be implemented. (3) Each institution should determine uncertainties for their treatment procedures. Sample data are tabulated for typical clinical scenarios with estimates of the levels of accuracy that are practically achievable and suggested action levels. (4) Independent dosimetry audits should be performed regularly. (5) Comprehensive quality assurance programs should be in place. 
(6) Professional staff should be appropriately educated and adequate staffing levels should be maintained. (7) For reporting purposes, uncertainties should be presented. (8) Manufacturers should provide training on all equipment. (9) Research should aid in improving the accuracy of radiotherapy. Some example research projects are suggested.
NASA Technical Reports Server (NTRS)
Welch, R. M.; Sengupta, S. K.; Chen, D. W.
1988-01-01
Stratocumulus, cumulus, and cirrus clouds were identified on the basis of cloud textural features which were derived from a single high-resolution Landsat MSS NIR channel using a stepwise linear discriminant analysis. It is shown that, using this method, it is possible to distinguish high cirrus clouds from low clouds with high accuracy on the basis of spatial brightness patterns. The largest probability of misclassification is associated with confusion between the stratocumulus breakup regions and the fair-weather cumulus.
Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo
2017-01-01
Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. 
However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch or airborne conidia on the berries that were detected by qPCR. An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach improved especially the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.
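The "integral image representation" mentioned above is what makes selective texture-feature extraction cheap: after a single pass over an image band, the sum over any rectangular window costs four table lookups. A generic sketch of that idea (not the authors' pipeline):

```python
def integral_image(img):
    """Build a summed-area table with a zero top row and left column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def region_sum(ii, top, left, bottom, right):
    """Sum over the inclusive rectangle in O(1) via four lookups."""
    return (ii[bottom + 1][right + 1] - ii[top][right + 1]
            - ii[bottom + 1][left] + ii[top][left])
```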
Haptic perception accuracy depending on self-produced movement.
Park, Chulwook; Kim, Seonjin
2014-01-01
This study examined whether self-produced movement influences haptic perception ability (experiment 1), as well as the factors associated with the level of that influence (experiment 2), in racket sports. For experiment 1, the haptic perception accuracy of five male table tennis experts and five male novices was examined under two conditions (no movement vs. movement). For experiment 2, the haptic afferent subsystems of five male table tennis experts and five male novices were investigated in the self-produced-movement condition only. Inferential statistics (ANOVA, t-test) and custom-made devices (shock and vibration sensor, Qualisys Track Manager) were used to measure haptic perception accuracy (experiments 1 and 2) and its association with expertise. The results show that expert players achieve higher accuracy with less variability (racket vibration and angle) than novices, especially during self-produced movement. The important finding is that the skill-related differences in accuracy were magnified during self-produced movement. The functional variability of the haptic afferent subsystems offers a reference for explaining the origin of this difference between experts and novices. The two factors investigated in this study (accuracy under self-produced movement and the variability of haptic features) would be useful criteria for educators in racket sports, and they suggest a broader hypothesis for further research into haptic accuracy and its relation to variability.
ERIC Educational Resources Information Center
King, Brian; Radley, Keith C.; Jenson, William R.; O'Neill, Robert E.
2017-01-01
The present study tested the efficacy of the On-Task in a Box program for increasing on-task behavior and academic accuracy of highly off-task students. Six students in 2nd and 3rd grades were identified by their classroom teacher as highly off-task. Following identification, the students participated in the On-Task in a Box intervention. Results…
Researches on High Accuracy Prediction Methods of Earth Orientation Parameters
NASA Astrophysics Data System (ADS)
Xu, X. Q.
2015-09-01
The Earth's rotation reflects coupling processes among the solid Earth, atmosphere, oceans, mantle, and core on multiple spatial and temporal scales. It is described by the Earth orientation parameters (EOP), mainly comprising the two polar motion components PM_X and PM_Y and the variation in the length of day ΔLOD. The EOP are crucial in the transformation between the terrestrial and celestial reference systems, and have important applications in many areas such as deep space exploration, satellite precise orbit determination, and astrogeodynamics. However, the EOP products obtained by space geodetic technologies are generally delayed by several days to two weeks, and the growing demands of modern space navigation make high-accuracy EOP prediction a worthy topic. This thesis addresses the following three aspects with the aim of improving EOP forecast accuracy. (1) We analyze the relation between the length of the basic data series and the EOP forecast accuracy, and compare the prediction accuracy of the linear autoregressive (AR) model and the nonlinear artificial neural network (ANN) method, each combined with least squares (LS) extrapolation. The results show that high-precision EOP forecasts can be realized by selecting the basic data series length according to the required prediction span: for short-term prediction the basic series should be shorter, while for long-term prediction it should be longer. The analysis also shows that the LS+AR model is more suitable for short-term forecasts, while the LS+ANN model shows advantages in medium- and long-term forecasts. (2) We develop for the first time a new method which combines the autoregressive model and the Kalman filter (AR+Kalman) in short-term EOP prediction.
The equations of observation and state are established using the EOP series and the autoregressive coefficients, respectively, and are used to improve/re-evaluate the AR model. Compared to the single AR model, the AR+Kalman method performs better in the prediction of UT1-UTC and ΔLOD, and the improvement in the prediction of polar motion is significant. (3) Following the successful Earth Orientation Parameter Prediction Comparison Campaign (EOP PCC), the Earth Orientation Parameter Combination of Prediction Pilot Project (EOPC PPP) was launched in 2010. As one of the participants from China, we update and submit short- and medium-term (1 to 90 days) EOP predictions every day. According to the current comparative statistics, our prediction accuracy is at a medium international level. We will carry out further research to improve EOP forecast accuracy and raise our standing in EOP forecasting.
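The LS+AR combination described in point (1) can be sketched as a linear least-squares trend plus an autoregressive model of the detrended residuals. The toy version below uses AR(1) for brevity (an assumption-laden illustration of the scheme, not the thesis code, which uses higher-order AR terms):

```python
def ls_ar_forecast(series, steps):
    """LS+AR sketch: fit a linear trend by least squares, model the
    residuals with an AR(1) term, and extrapolate both components."""
    n = len(series)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(series) / n
    b = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, series))
         / sum((ti - tbar) ** 2 for ti in t))
    a = ybar - b * tbar
    resid = [yi - (a + b * ti) for ti, yi in zip(t, series)]
    # AR(1) coefficient from lag-1 regression through the origin
    num = sum(resid[i - 1] * resid[i] for i in range(1, n))
    den = sum(r * r for r in resid[:-1]) or 1.0
    phi = num / den
    r = resid[-1]
    forecast = []
    for k in range(1, steps + 1):
        r *= phi  # residual decays (or oscillates) under the AR term
        forecast.append(a + b * (n - 1 + k) + r)
    return forecast
```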
Han, Bingqing; Ge, Menglei; Zhao, Haijian; Yan, Ying; Zeng, Jie; Zhang, Tianjiao; Zhou, Weiyan; Zhang, Jiangtao; Wang, Jing; Zhang, Chuanbao
2017-11-27
Serum calcium level is an important clinical index that reflects pathophysiological states. However, detection accuracy in laboratory tests is not ideal; as such, a high-accuracy method is needed. We developed a reference method for measuring serum calcium levels by isotope dilution inductively coupled plasma mass spectrometry (ID ICP-MS), using 42Ca as the enriched isotope. Serum was digested with 69% ultrapure nitric acid and diluted to a suitable concentration. The 44Ca/42Ca ratio was detected in H2 mode; spike concentration was calibrated by reverse IDMS using standard reference material (SRM) 3109a, and sample concentration was measured by a bracketing procedure. We compared the performance of ID ICP-MS with those of three other reference methods in China using the same serum and aqueous samples. The relative expanded uncertainty of the sample concentration was 0.414% (k=2). The ranges of repeatability (within-run imprecision), intermediate imprecision (between-run imprecision), and intra-laboratory imprecision were 0.12%-0.19%, 0.07%-0.09%, and 0.16%-0.17%, respectively, for two of the serum samples. SRM909bI, SRM909bII, SRM909c, and GBW09152 were found to be within the certified value interval, with mean relative bias values of 0.29%, -0.02%, 0.10%, and -0.19%, respectively. The range of recovery was 99.87%-100.37%. Results obtained by ID ICP-MS showed better accuracy than, and were highly correlated with, those of the other reference methods. ID ICP-MS is a simple and accurate candidate reference method for serum calcium measurement and can be used to establish and improve the serum calcium reference system in China.
NASA Astrophysics Data System (ADS)
Jizhi, Liu; Xingbi, Chen
2009-12-01
A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, requiring no high-end computing hardware, and being easy to operate.
NASA Astrophysics Data System (ADS)
Diesing, Markus; Green, Sophie L.; Stephens, David; Lark, R. Murray; Stewart, Heather A.; Dove, Dayton
2014-08-01
Marine spatial planning and conservation need underpinning with sufficiently detailed and accurate seabed substrate and habitat maps. Although multibeam echosounders enable us to map the seabed with high resolution and spatial accuracy, there is still a lack of fit-for-purpose seabed maps. This is due to the high costs involved in carrying out systematic seabed mapping programmes and the fact that the development of validated, repeatable, quantitative and objective methods of swath acoustic data interpretation is still in its infancy. We compared a wide spectrum of approaches including manual interpretation, geostatistics, object-based image analysis and machine-learning to gain further insights into the accuracy and comparability of acoustic data interpretation approaches based on multibeam echosounder data (bathymetry, backscatter and derivatives) and seabed samples with the aim to derive seabed substrate maps. Sample data were split into a training and validation data set to allow us to carry out an accuracy assessment. Overall thematic classification accuracy ranged from 67% to 76% and Cohen's kappa varied between 0.34 and 0.52. However, these differences were not statistically significant at the 5% level. Misclassifications were mainly associated with uncommon classes, which were rarely sampled. Map outputs were between 68% and 87% identical. To improve classification accuracy in seabed mapping, we suggest that more studies on the effects of factors affecting the classification performance as well as comparative studies testing the performance of different approaches need to be carried out with a view to developing guidelines for selecting an appropriate method for a given dataset. In the meantime, classification accuracy might be improved by combining different techniques to hybrid approaches and multi-method ensembles.
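The overall thematic accuracy and Cohen's kappa values reported above come from a standard error (confusion) matrix built from the validation samples; a minimal sketch:

```python
def accuracy_and_kappa(matrix):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix (rows = mapped class, columns = reference class)."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    p_o = sum(matrix[i][i] for i in range(n)) / total  # observed agreement
    # Chance agreement from the row and column marginals
    p_e = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
              for i in range(n)) / total ** 2
    return p_o, (p_o - p_e) / (1 - p_e)
```

Kappa discounts the agreement expected by chance, which is why it can sit well below the raw percentage accuracy (0.34 to 0.52 versus 67% to 76% above).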
Telemedicine and Diabetic Retinopathy: Review of Published Screening Programs
Tozer, Kevin; Woodward, Maria A.; Newman-Casey, Paula A.
2016-01-01
Background Diabetic Retinopathy (DR) is a leading cause of blindness worldwide even though successful treatments exist. Improving screening and treatment could avoid many cases of vision loss. However, due to the increasing prevalence of diabetes, traditional in-person screening for DR for every diabetic patient is not feasible. Telemedicine is one viable solution to provide high-quality and efficient screening to large numbers of diabetic patients. Purpose To provide a narrative review of large DR telemedicine screening programs. Methods Articles were identified through a comprehensive search of the English-language literature published between 2000 and 2014. Telemedicine screening programs were included for review if they had published data on at least 150 patients and had available validation studies supporting their model. Screening programs were then categorized according to their American Telemedicine Association Validation Level. Results Seven programs from the US and abroad were identified and included in the review. Three programs were Category 1 programs (Ophdiat, EyePacs, and Digiscope), two were Category 2 programs (Eye Check, NHS Diabetic Eye Screening Program), and two were Category 3 programs (Joslin Vision Network, Alberta Screening Program). No program was identified that claimed Category 4 status. Programs ranged from community- or city-level programs to large nationwide programs including millions of individuals. The programs demonstrated a high level of clinical accuracy in screening for DR. There was no consensus among the programs regarding the need for dilation, the need for stereoscopic images, or the level of training for approved image graders. Conclusion Telemedicine programs have been clinically validated and successfully implemented across the globe. They can provide a high level of clinical accuracy for screening for DR while improving patient access in a cost-effective and scalable manner. PMID:27430019
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta, and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. 
Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307
Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro
2003-04-15
Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using a specially designed hardware. Four custom arithmetic-processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. 
In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003
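The "nearby region" work that dominates execution time, and that MD-Engine II accelerates in hardware, is a direct particle-particle Coulomb sum within a cutoff radius. A toy software sketch in reduced units (illustrative only, not the board's fixed-function arithmetic):

```python
import math

def direct_coulomb(charges, positions, cutoff):
    """Direct pairwise Coulomb energy for particles within a cutoff,
    in reduced units (Coulomb constant = 1). This is the O(n^2)
    nearby-region work that FMM restricts and hardware accelerates."""
    energy = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])
            if r < cutoff:
                energy += charges[i] * charges[j] / r
    return energy
```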
A new vehicle emission inventory for China with high spatial and temporal resolution
NASA Astrophysics Data System (ADS)
Zheng, B.; Huo, H.; Zhang, Q.; Yao, Z. L.; Wang, X. T.; Yang, X. F.; Liu, H.; He, K. B.
2013-12-01
This study is the first in a series of papers that aim to develop high-resolution emission databases for different anthropogenic sources in China. Here we focus on on-road transportation. Because of the increasing impact of on-road transportation on regional air quality, developing an accurate and high-resolution vehicle emission inventory is important for both the research community and air quality management. This work proposes a new inventory methodology to improve the spatial and temporal accuracy and resolution of vehicle emissions in China. We calculate, for the first time, the monthly vehicle emissions (CO, NMHC, NOx, and PM2.5) for 2008 in 2364 counties (an administrative unit one level lower than city) by developing a set of approaches to estimate vehicle stock and monthly emission factors at county-level, and technology distribution at provincial level. We then introduce allocation weights for the vehicle kilometers traveled to assign the county-level emissions onto 0.05° × 0.05° grids based on the China Digital Road-network Map (CDRM). The new methodology overcomes the common shortcomings of previous inventory methods, including neglecting the geographical differences between key parameters and using surrogates that are weakly related to vehicle activities to allocate vehicle emissions. The new method has great advantages over previous methods in depicting the spatial distribution characteristics of vehicle activities and emissions. This work provides a better understanding of the spatial representation of vehicle emissions in China and can benefit both air quality modeling and management with improved spatial accuracy.
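The county-to-grid allocation step can be sketched as proportional weighting: each county's emission total is split across its grid cells in proportion to an activity weight such as road-network length from the CDRM. A minimal illustration with hypothetical cell names and weights:

```python
def allocate_to_grid(county_emission, cell_weights):
    """Distribute a county-level emission total onto grid cells in
    proportion to allocation weights (e.g., vehicle kilometers traveled
    per cell); a sketch of the weighting idea, not the CDRM processing."""
    total = sum(cell_weights.values())
    return {cell: county_emission * w / total
            for cell, w in cell_weights.items()}
```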
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Kuvshinov, Alexey
2018-05-01
3-D interpretation of electromagnetic (EM) data of different origin and scale becomes a common practice worldwide. However, 3-D EM numerical simulations (modeling)—a key part of any 3-D EM data analysis—with realistic levels of complexity, accuracy and spatial detail still remains challenging from the computational point of view. We present a novel, efficient 3-D numerical solver based on a volume integral equation (IE) method. The efficiency is achieved by using a high-order polynomial (HOP) basis instead of the zero-order (piecewise constant) basis that is invoked in all routinely used IE-based solvers. We demonstrate that usage of the HOP basis allows us to decrease substantially the number of unknowns (preserving the same accuracy), with corresponding speed increase and memory saving.
Demodulation algorithm for optical fiber F-P sensor.
Yang, Huadong; Tong, Xinglin; Cui, Zhang; Deng, Chengwei; Guo, Qian; Hu, Pan
2017-09-10
The demodulation algorithm is critical to improving the measurement accuracy of a sensing system. In this paper, a variable-step-size hill-climbing search method is applied for the first time to the demodulation of an optical fiber Fabry-Perot (F-P) sensor. Compared with the traditional discrete gap transformation demodulation algorithm, the computation is greatly reduced by varying the step size of each climb, which allows nanoscale resolution, high measurement accuracy, high demodulation rates, and a large dynamic demodulation range. An optical fiber F-P pressure sensor based on a micro-electro-mechanical system (MEMS) was fabricated to carry out the experiment. The results show that the resolution of the algorithm reaches the nanoscale, and the sensor's sensitivity is about 2.5 nm/kPa, close to the theoretical value; the sensor also shows good reproducibility.
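A generic variable-step-size hill-climbing search of the kind described above works as follows: take the coarse step while the objective improves, and shrink the step when neither direction improves. This is an illustrative sketch of the search strategy, not the paper's F-P demodulation code:

```python
def hill_climb(f, x0, step, min_step, shrink=0.5):
    """Maximize f by hill climbing with a shrinking step size.
    Large steps give speed; shrinking them gives fine resolution."""
    x = x0
    while step >= min_step:
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            step *= shrink  # no improvement either way: refine the step
    return x
```

In the F-P setting, f would be a cavity-length matching function whose peak locates the sensor's gap; here it is just a toy objective.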
Bayesian network modelling of upper gastrointestinal bleeding
NASA Astrophysics Data System (ADS)
Aisha, Nazziwa; Shohaimi, Shamarina; Adam, Mohd Bakri
2013-09-01
Bayesian networks are graphical probabilistic models that represent causal and other relationships between domain variables. In the context of medical decision making, these models have been explored to help in medical diagnosis and prognosis. In this paper, we discuss the Bayesian network formalism in building medical support systems and we learn a tree augmented naive Bayes Network (TAN) from gastrointestinal bleeding data. The accuracy of the TAN in classifying the source of gastrointestinal bleeding into upper or lower source is obtained. The TAN achieves a high classification accuracy of 86% and an area under curve of 92%. A sensitivity analysis of the model shows relatively high levels of entropy reduction for color of the stool, history of gastrointestinal bleeding, consistency and the ratio of blood urea nitrogen to creatinine. The TAN facilitates the identification of the source of GIB and requires further validation.
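The "entropy reduction" used in the sensitivity analysis above is the mutual information between a predictor and the target: the drop in target entropy once the predictor's value is known. A sketch with toy count data (not the authors' gastrointestinal bleeding dataset):

```python
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of a distribution given as counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

def entropy_reduction(joint):
    """Mutual information from a joint count table:
    joint[feature_value][class] (hypothetical toy counts)."""
    n_classes = len(joint[0])
    class_totals = [sum(row[j] for row in joint) for j in range(n_classes)]
    total = sum(class_totals)
    h_prior = entropy(class_totals)  # target entropy before observing feature
    h_cond = sum(sum(row) / total * entropy(row) for row in joint)
    return h_prior - h_cond
```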
Estimation and filtering techniques for high-accuracy GPS applications
NASA Technical Reports Server (NTRS)
Lichten, S. M.
1989-01-01
Techniques for determination of very precise orbits for satellites of the Global Positioning System (GPS) are currently being studied and demonstrated. These techniques can be used to make cm-accurate measurements of station locations relative to the geocenter, monitor earth orientation over timescales of hours, and provide tropospheric and clock delay calibrations during observations made with deep space radio antennas at sites where the GPS receivers have been collocated. For high-earth orbiters, meter-level knowledge of position will be available from GPS, while at low altitudes, sub-decimeter accuracy will be possible. Estimation of satellite orbits and other parameters such as ground station positions is carried out with a multi-satellite batch sequential pseudo-epoch state process noise filter. Both square-root information filtering (SRIF) and UD-factorized covariance filtering formulations are implemented in the software.
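The batch sequential filter described above is built on the standard Kalman predict/update cycle with process noise; the SRIF and UD-factorized formulations are numerically stabler factorizations of the same recursion. Below is a plain covariance-form sketch on a toy one-parameter problem, a stand-in for illustration only, not the flight software.

```python
import numpy as np

def kalman_step(x, P, z, H, R, F, Q):
    """One predict/update cycle of a linear Kalman filter with process
    noise Q -- a simplified covariance-form stand-in for the filter above."""
    x = F @ x                       # propagate state
    P = F @ P @ F.T + Q             # propagate covariance, add process noise
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)         # measurement update
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy usage: estimate a fixed station coordinate from 50 noisy measurements
rng = np.random.default_rng(0)
x, P = np.array([0.0]), np.array([[100.0]])
F, Q = np.eye(1), np.array([[1e-8]])
H, R = np.array([[1.0]]), np.array([[0.01]])  # noise variance 0.01 (std 0.1)
for z in 5.0 + 0.1 * rng.standard_normal(50):
    x, P = kalman_step(x, P, np.array([z]), H, R, F, Q)
```

The covariance form shown here can lose positive definiteness in ill-conditioned problems, which is why the SRIF and UD factorizations mentioned in the abstract are preferred for high-accuracy orbit determination.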
Reliability Assessment for COTS Components in Space Flight Applications
NASA Technical Reports Server (NTRS)
Krishnan, G. S.; Mazzuchi, Thomas A.
2001-01-01
Systems built for space flight applications usually demand a very high degree of performance and a very high level of accuracy. Hence, design engineers are often prone to selecting state-of-the-art technologies for inclusion in their system designs. Shrinking budgets also necessitate the use of COTS (Commercial Off-The-Shelf) components, which are construed as being less expensive. The performance and accuracy requirements for space flight applications are much more stringent than those for commercial applications, and far fewer systems are designed and developed for space applications than are produced for commercial ones. With a given set of requirements, are these COTS components reliable? This paper presents a model for assessing the reliability of COTS components in space applications and the associated effect on system reliability. We illustrate the method with a real application.
NASA Technical Reports Server (NTRS)
Thomann, G. C.
1975-01-01
The complex dielectric constant of sea water is a function of salinity at 21 cm wavelength, so sea water salinity can be determined from a measurement of emissivity at 21 cm along with a measurement of thermodynamic temperature. Three aircraft experiments and one helicopter experiment using two different 21 cm radiometers were conducted under different salinity and temperature conditions. Single or multiple ground truth measurements were used to calibrate the data in each experiment. It is inferred from these experiments that accuracies of 1 to 2 parts per thousand (‰) are possible, with a single surface calibration point needed only every two hours, if the following conditions are met: water temperatures above 20 C, salinities above 10 ‰, and level plane flight. More frequent calibration, constraining the aircraft's orientation to match its orientation during calibration, and two-point calibration (at a high and a low salinity level) rather than single-point calibration may give even better accuracies in some instances.
Underwater photogrammetric theoretical equations and technique
NASA Astrophysics Data System (ADS)
Fan, Ya-bing; Huang, Guiping; Qin, Gui-qin; Chen, Zheng
2011-12-01
In order to achieve a high level of measurement accuracy in underwater close-range photogrammetry, this article studies three varieties of model equations corresponding to the way imaging occurs through the water surface. First, the paper carefully analyzes two of the theoretical equations, finds serious limitations in their practical application, and studies the third model equation in depth. Second, a special measurement project was designed accordingly. Finally, a rigid antenna was measured by underwater photogrammetry. The experimental results show that the precision of 3D coordinate measurement is 0.94 mm, which validates the availability and operability of this third equation in practical application. It can satisfy the measurement requirements for refraction correction, improving the accuracy of underwater close-range photogrammetry, with strong anti-jamming capability and stability.
NASA Technical Reports Server (NTRS)
Davis, D. D.; Weiss, M.; Clements, A.; Allan, D. W.
1982-01-01
The National Bureau of Standards/Global Positioning System (NBS/GPS) receiver is discussed. It is designed around the concept of obtaining high accuracy, low cost time and frequency comparisons between remote frequency standards and clocks, with the intent of aiding international time and frequency coordination. Preliminary tests of this comparison technique between Boulder, CO and Washington, D.C. indicate the ability to do time transfer accurate to better than 10 ns, and frequency measurements to better than 1 part in 10^14. The hardware and software of the receiver are detailed. The receiver is fully automatic with a built-in 0.1 ns resolution time interval counter; data processing is handled by a microprocessor. Satellite signal stabilities are routinely at the 5 ns level for 15 s averages, and the internal receiver stabilities are at the 1 ns level.
NASA Astrophysics Data System (ADS)
Kubiček, K.; Mokler, P. H.; Mäckel, V.; Ullrich, J.; López-Urrutia, J. R. Crespo
2014-09-01
For the hydrogenlike Ar17+ ion, the 1s Lamb shift was absolutely determined with a 1.4% accuracy based on Lyman-α wavelength measurements that have negligible uncertainties from nuclear size effects. The result agrees with state-of-the-art quantum electrodynamics (QED) calculations, and demonstrates the suitability of Lyman-α transitions in highly charged ions as x-ray energy standards, accurate at the five parts-per-million level. For the heliumlike Ar16+ ion, the transition energy of the 1s2p 1P1 → 1s2 1S0 line was also absolutely determined, at an even higher level of accuracy. Additionally, we present relative measurements of transitions in S15+, S14+, and Fe24+ ions. The data for the heliumlike S14+, Ar16+, and Fe24+ ions stringently confirm advanced bound-state QED predictions, including screened QED terms that had recently been contested.
NASA Astrophysics Data System (ADS)
Groenenboom, G. C.; Wormer, P. E. S.; van der Avoird, A.; Mas, E. M.; Bukowski, R.; Szalewicz, K.
2000-10-01
Nearly exact six-dimensional quantum calculations of the vibration-rotation-tunneling (VRT) levels of the water dimer for values of the rotational quantum numbers J and K ≤ 2 show that the SAPT-5s water pair potential presented in the preceding paper (paper I) gives a good representation of the experimental high-resolution far-infrared spectrum of the water dimer. After analyzing the sensitivity of the transition frequencies with respect to the linear parameters in the potential, we could further improve this potential by using only one of the experimentally determined tunneling splittings of the ground state in (H2O)2. The accuracy of the resulting water pair potential, SAPT-5st, is established by comparison with the spectroscopic data of both (H2O)2 and (D2O)2: ground and excited state tunneling splittings and rotational constants, as well as the frequencies of the intermolecular vibrations.
Climate Benchmark Missions: CLARREO
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A.; Young, David F.
2010-01-01
CLARREO (Climate Absolute Radiance and Refractivity Observatory) is one of the four Tier 1 missions recommended by the recent NRC decadal survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to rigorously observe climate change on decade time scales and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4). A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO mission accomplishes this critical objective through highly accurate and SI traceable decadal change observations sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. The same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. The CLARREO breakthrough in decadal climate change observations is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. These accuracy levels are determined both by the projected decadal changes as well as by the background natural variability that such signals must be detected against. The accuracy for decadal change traceability to SI standards includes uncertainties of calibration, sampling, and analysis methods. Unlike most other missions, all of the CLARREO requirements are judged not by instantaneous accuracy, but instead by accuracy in large time/space scale average decadal changes. 
Given the focus on decadal climate change, the NRC Decadal Survey concluded that the single most critical issue for decadal change observations was their lack of accuracy and low confidence in observing the small but critical climate change signals. CLARREO is the recommended attack on this challenge, and builds on the last decade of climate observation advances in the Earth Observing System as well as metrological advances at NIST (National Institute of Standards and Technology) and other standards laboratories.
"If It Feels Right, Do It": Intuitive Decision Making in a Sample of High-Level Sport Coaches.
Collins, Dave; Collins, Loel; Carson, Howie J
2016-01-01
Comprehensive understanding and application of decision making is important for the professional practice and status of sports coaches. Accordingly, building on a strong base of work exploring the use of professional judgment and decision making (PJDM) in sport, we report a preliminary investigation into the use of intuition by high-level coaches. Two contrasting groups of high-level coaches, from adventure sports (n = 10) and rugby union (n = 8), were interviewed on their experiences of using intuitive and deliberative decision making styles, the sources of these skills, and the interaction between the two. Participants reported similarly high levels of usage to other professions. Interaction between the two styles was apparent to varying degrees, while experience was seen as an important precursor to greater intuitive practice and employment. Initially intuitive and then deliberate decision making was a particular feature, offering participants an immediate check on the accuracy and validity of the decision. Integration of these data with the extant literature and implications for practice are discussed.
Field comparison of several commercially available radon detectors.
Field, R W; Kross, B C
1990-01-01
To determine the accuracy and precision of commercially available radon detectors in a field setting, 15 detectors from six companies were exposed to radon and compared to a reference radon level. The detectors from companies that had already passed National Radon Measurement Proficiency Program testing had better precision and accuracy than those detectors awaiting proficiency testing. Charcoal adsorption detectors and diffusion barrier charcoal adsorption detectors performed very well, and the latter displayed excellent time-averaging ability. In contrast, charcoal liquid scintillation detectors exhibited acceptable accuracy but poor precision, and bare alpha registration detectors showed both poor accuracy and poor precision. The mean radon level reported by the bare alpha registration detectors was 68 percent lower than the radon reference level. PMID:2368851
Malinsky, Michelle Duval; Jacoby, Cliffton B; Reagen, William K
2011-01-10
We report herein a simple protein precipitation extraction-liquid chromatography tandem mass spectrometry (LC/MS/MS) method, validation, and application for the analysis of perfluorinated carboxylic acids (C7-C12), perfluorinated sulfonic acids (C4, C6, and C8), and perfluorooctane sulfonamide (FOSA) in fish fillet tissue. The method combines a rapid homogenization and protein precipitation tissue extraction procedure with stable-isotope internal standard (IS) calibration. Method validation in bluegill (Lepomis macrochirus) fillet tissue evaluated the following: (1) method accuracy and precision in both extracted matrix-matched calibration and solvent (unextracted) calibration, (2) quantitation of mixed branched and linear isomers of perfluorooctanoate (PFOA) and perfluorooctanesulfonate (PFOS) with linear isomer calibration, (3) quantitation of low level (ppb) perfluorinated compounds (PFCs) in the presence of high level (ppm) PFOS, and (4) specificity against matrix interferences. Both calibration techniques produced method accuracy within 100±13% with a precision (%RSD) ≤18% for all target analytes. Method accuracy and precision results for fillet samples from nine different fish species taken from the Mississippi River in 2008 and 2009 are also presented. Copyright © 2010 Elsevier B.V. All rights reserved.
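Stable-isotope internal standard (IS) calibration quantitates an analyte from the ratio of its peak area to the IS peak area, via a relative response factor fitted from standards. The sketch below shows the arithmetic under that assumption; the standard concentrations and peak areas are invented for illustration and are not the study's data.

```python
import numpy as np

def fit_is_calibration(conc_std, area_analyte, area_is, conc_is):
    """Fit response ratio = rrf * concentration ratio (through the origin),
    then return a function that quantitates unknowns from peak-area ratios."""
    x = np.asarray(conc_std) / conc_is                   # concentration ratios
    y = np.asarray(area_analyte) / np.asarray(area_is)   # response ratios
    rrf = float(x @ y / (x @ x))                         # least-squares slope, no intercept
    def quantitate(a_analyte, a_is):
        return (a_analyte / a_is) / rrf * conc_is
    return rrf, quantitate

# hypothetical calibration standards (ng/mL) with the IS spiked at 20 ng/mL
rrf, quant = fit_is_calibration(
    conc_std=[1, 5, 10, 50],
    area_analyte=[120, 600, 1200, 6000],
    area_is=[1000, 1000, 1000, 1000],
    conc_is=20,
)
```

Because analyte and isotope-labeled IS co-extract and co-elute, matrix losses largely cancel in the area ratio, which is what lets solvent calibration perform comparably to matrix-matched calibration.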
Bisgin, Halil; Bera, Tanmay; Ding, Hongjian; Semey, Howard G; Wu, Leihong; Liu, Zhichao; Barnes, Amy E; Langley, Darryl A; Pava-Ripoll, Monica; Vyas, Himansu J; Tong, Weida; Xu, Joshua
2018-04-25
Insect pests, such as pantry beetles, are often associated with food contamination and public health risks. Machine learning has the potential to provide a more accurate and efficient solution for detecting their presence in food products, which is currently done manually. In our previous research, we demonstrated the feasibility of implementing Artificial Neural Network (ANN) based pattern recognition techniques for species identification in the context of food safety. In this study, we present a Support Vector Machine (SVM) model which improved the average accuracy up to 85%. By comparison, the ANN method yielded ~80% accuracy after extensive parameter optimization. Both methods showed excellent genus-level identification, but SVM showed slightly better accuracy for most species. Highly accurate species-level identification remains a challenge, especially in distinguishing between species from the same genus, and may require improvements in both imaging and machine learning techniques. In summary, our work illustrates a new SVM-based technique and provides a good comparison with the ANN model in our context. We believe such insights will pave a better way forward for the application of machine learning to species identification and food safety.
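As a hedged illustration of the SVM approach (not the paper's image pipeline), a linear SVM can be trained with Pegasos-style stochastic subgradient descent on the hinge loss. The two-cluster "species" feature vectors below are synthetic stand-ins for the image features used in the study.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=100, seed=0):
    """Linear SVM via Pegasos-style stochastic subgradient descent on the
    hinge loss. Labels y must be in {-1, +1}. A sketch, not the study's model."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # constant column absorbs the bias
    rng = np.random.default_rng(seed)
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)       # decaying learning rate
            margin = y[i] * (Xb[i] @ w)
            w *= 1.0 - eta * lam        # regularization shrinkage
            if margin < 1:              # hinge-loss subgradient step
                w += eta * y[i] * Xb[i]
    return w

# synthetic two-"species" feature clusters (hypothetical stand-ins for image features)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w = train_linear_svm(X, y)
Xb = np.hstack([X, np.ones((len(X), 1))])
acc = float(np.mean(np.sign(Xb @ w) == y))
```

A kernel SVM, as typically used for image-derived features, replaces the dot product `Xb[i] @ w` with a kernel evaluation; the margin-maximizing objective is the same.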
NASA Technical Reports Server (NTRS)
Comber, Brian; Glazer, Stuart
2012-01-01
The James Webb Space Telescope (JWST) is an upcoming flagship observatory mission scheduled to be launched in 2018. Three of the four science instruments are passively cooled to their operational temperature range of 36K to 40K, and the fourth instrument is actively cooled to its operational temperature of approximately 6K. The requirement for multiple thermal zones results in the instruments being thermally connected to five external radiators via individual high purity aluminum heat straps. Thermal-vacuum and thermal balance testing of the flight instruments at the Integrated Science Instrument Module (ISIM) element level will take place within a newly constructed shroud cooled by gaseous helium inside Goddard Space Flight Center's (GSFC) Space Environment Simulator (SES). The flight external radiators are not available during ISIM-level thermal vacuum/thermal balance testing, so they will be replaced in test with stable and adjustable thermal boundaries with physical interfaces identical to the flight radiators. Those boundaries are provided by specially designed test hardware which also measures the heat flow within each of the five heat straps to an accuracy of better than 2 mW, which is less than 5% of the minimum predicted heat flow values. Measurement of the heat loads to this accuracy is essential to ISIM thermal model correlation, since thermal models are more accurately correlated when temperature data are supplemented by accurate knowledge of heat flows. It also provides direct verification by test of several high-level thermal requirements. Devices that measure heat flow in this manner have historically been referred to as "Q-meters".
Perhaps the most important feature of the design of the JWST Q-meters is that it does not depend on the absolute accuracy of its temperature sensors, but rather on knowledge of precise heater power required to maintain a constant temperature difference between sensors on two stages, for which a table is empirically developed during a calibration campaign in a small chamber at GSFC. This paper provides a brief review of Q-meter design, and discusses the Q-meter calibration procedure including calibration chamber modifications and accommodations, handling of differing conditions between calibration and usage, the calibration process itself, and the results of the tests used to determine if the calibration is successful.
Significance of pregnancy test false negative results due to elevated levels of β-core fragment hCG.
Johnson, Sarah; Eapen, Saji; Smith, Peter; Warren, Graham; Zinaman, Michael
2017-01-01
Very high levels of β-core fragment human chorionic gonadotrophin (βcf-hCG) are reported to potentially cause false negative results in point-of-care (POC)/over-the-counter (OTC) pregnancy tests. To investigate this further, women's daily early morning urine samples, collected prior to conception and during pregnancy, were analysed for intact, free β-, and βcf-hCG. The proportion of βcf-hCG was found to be related to that of hCG produced and in circulation. Therefore, best practice for accuracy testing of POC/OTC pregnancy tests would be to test devices against clinical samples containing high levels of βcf-hCG as well as standards spiked with biologically relevant ratios.
NASA Astrophysics Data System (ADS)
Liu, Haijian; Wu, Changshan
2018-06-01
Crown-level tree species classification is a challenging task due to the spectral similarity among different tree species. Shadow, underlying objects, and other materials within a crown may decrease the purity of extracted crown spectra and further reduce classification accuracy. To address this problem, an innovative pixel-weighting approach was developed for tree species classification at the crown level. The method utilized high density discrete LiDAR data for individual tree delineation and Airborne Imaging Spectrometer for Applications (AISA) hyperspectral imagery for pure crown-scale spectra extraction. Specifically, three steps were included: 1) individual tree identification using LiDAR data, 2) pixel-weighted representative crown spectra calculation using hyperspectral imagery, with pixel-based illuminated-leaf fractions estimated using a linear spectral mixture analysis (LSMA) employed as weighting factors, and 3) representative spectra based tree species classification performed by applying a support vector machine (SVM) approach. Analysis of results suggests that the developed pixel-weighting approach (OA = 82.12%, Kc = 0.74) performed better than treetop-based (OA = 70.86%, Kc = 0.58) and pixel-majority methods (OA = 72.26%, Kc = 0.62) in terms of classification accuracy. McNemar tests indicated the differences in accuracy between pixel-weighting and treetop-based approaches as well as that between pixel-weighting and pixel-majority approaches were statistically significant.
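Step 2 above reduces a crown to a single spectrum by taking a weighted mean of its pixels' spectra, with each pixel's illuminated-leaf fraction (from LSMA) as the weight. A minimal sketch follows; the three-pixel, two-band crown is hypothetical toy data.

```python
import numpy as np

def crown_spectrum_weighted(pixel_spectra, illuminated_leaf_fraction):
    """Representative crown spectrum as a weighted mean of its pixels'
    spectra, each pixel weighted by its illuminated-leaf fraction."""
    w = np.asarray(illuminated_leaf_fraction, dtype=float)
    S = np.asarray(pixel_spectra, dtype=float)   # shape: (n_pixels, n_bands)
    return (w[:, None] * S).sum(axis=0) / w.sum()

# hypothetical crown of three pixels and two bands: the shadowed pixel
# (fraction 0.1) contributes little to the representative spectrum
spectra = [[0.10, 0.40], [0.12, 0.42], [0.02, 0.05]]
fractions = [0.9, 0.8, 0.1]
rep = crown_spectrum_weighted(spectra, fractions)
```

Compared with a treetop pixel (one sample, sensitive to misregistration) or a pixel-majority vote, the weighted mean down-weights shadow and understory contamination while still using every pixel in the crown.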
Edmonds, Lisa A; Donovan, Neila J
2012-04-01
There is a pressing need for psychometrically sound naming materials for Spanish/English bilingual adults. To address this need, in this study the authors examined the psychometric properties of An Object and Action Naming Battery (An O&A Battery; Druks & Masterson, 2000) in bilingual speakers. Ninety-one Spanish/English bilinguals named O&A Battery items in English and Spanish. Responses underwent a Rasch analysis. Using correlation and regression analyses, the authors evaluated the effect of psycholinguistic (e.g., imageability) and participant (e.g., proficiency ratings) variables on accuracy. Rasch analysis determined unidimensionality across English and Spanish nouns and verbs and robust item-level psychometric properties, evidence for content validity. Few items failed to fit the model; there were no ceiling or floor effects after uninformative and misfit items were removed, and items reflected a range of difficulty. Reliability coefficients were high, and the number of statistically different ability levels provided indices of sensitivity. Regression analyses revealed significant correlations between psycholinguistic variables and accuracy, providing preliminary construct validity. The participant variables that contributed most to accuracy were proficiency ratings and time of language use. Results suggest adequate content and construct validity of the O&A items retained in the analysis for Spanish/English bilingual adults and support future efforts to evaluate naming in older bilinguals and persons with bilingual aphasia.
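The Rasch model underlying the analysis gives the probability of a correct naming response as a logistic function of the gap between person ability and item difficulty, both expressed in logits. A minimal sketch (the logit values in the example are hypothetical):

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability that a person of ability theta names an
    item of difficulty b correctly (theta and b on the same logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# an able speaker vs. an easy and a hard item (hypothetical logit values)
p_easy = rasch_p(1.5, -1.0)  # high-ability person, easy item
p_hard = rasch_p(1.5, 3.0)   # same person, hard item
```

Fitting theta and b jointly to the response matrix is what yields the item difficulty range, fit statistics, and person-ability strata reported in the abstract.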