Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study.
Eggenberger, Noëmi; Preisig, Basil C; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M
2016-01-01
Co-speech gestures are omnipresent and, by facilitating language comprehension, a crucial element of human interaction. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension, measured as accuracy in a decision task. Twenty aphasic patients and 30 healthy controls watched videos in which speech was combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. In aphasic patients, the incongruent condition resulted in a significant decrease in accuracy, while the congruent condition led to a significant increase in accuracy relative to baseline. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Co-speech gestures play an important role for aphasic patients because they modulate comprehension: incongruent gestures evoke significant interference and impair patients' comprehension, whereas congruent gestures enhance comprehension, which might be valuable for clinical and therapeutic purposes.
Evaluating Rater Accuracy in Rater-Mediated Assessments Using an Unfolding Model
ERIC Educational Resources Information Center
Wang, Jue; Engelhard, George, Jr.; Wolfe, Edward W.
2016-01-01
The number of performance assessments continues to increase around the world, and it is important to explore new methods for evaluating the quality of ratings obtained from raters. This study describes an unfolding model for examining rater accuracy. Accuracy is defined as the difference between observed and expert ratings. Dichotomous accuracy…
Unconscious Reward Cues Increase Invested Effort, but Do Not Change Speed-Accuracy Tradeoffs
ERIC Educational Resources Information Center
Bijleveld, Erik; Custers, Ruud; Aarts, Henk
2010-01-01
While both conscious and unconscious reward cues enhance effort to work on a task, previous research also suggests that conscious rewards may additionally affect speed-accuracy tradeoffs. Based on this idea, two experiments explored whether reward cues that are presented above (supraliminal) or below (subliminal) the threshold of conscious…
Blower, Sally; Go, Myong-Hyun
2011-07-19
Mathematical models are useful tools for understanding and predicting epidemics. A recent innovative modeling study by Stehle and colleagues addressed the issue of how complex models need to be to ensure accuracy. The authors collected data on face-to-face contacts during a two-day conference. They then constructed a series of dynamic social contact networks, each of which was used to model an epidemic generated by a fast-spreading airborne pathogen. Intriguingly, Stehle and colleagues found that increasing model complexity did not always increase accuracy. Specifically, the most detailed contact network and a simplified version of this network generated very similar results. These results are extremely interesting and require further exploration to determine their generalizability.
Le Roux, Ronan
2015-04-01
The paper deals with the introduction of nanotechnology in biochips. Based on interviews and theoretical reflections, it explores blind spots left by technology assessment and ethical investigations. These have focused on possible consequences of the increased diffusability of a diagnostic device, neglecting both the context of research and increased accuracy, despite the latter being a more essential feature of nanobiochip projects. Also, rather than one of many parallel aspects (technical, legal, and social) in innovation processes, ethics is considered here as a ubiquitous system of choices between sometimes antagonistic values. Thus, the paper investigates what is at stake when accuracy is balanced against other practical values in different contexts. A dramatic nanotechnological increase in the accuracy of biochips can raise ethical issues, since it is at odds with other values such as diffusability and reliability. But those issues will not be as revolutionary as is often claimed: neither in diagnostics, because accuracy of measurement is not accuracy of diagnosis; nor in research, because a boost in measurement accuracy is not sufficient to overcome significance-chasing malpractices. The conclusion extends to methodological recommendations.
Solianik, Rima; Satas, Andrius; Mickeviciene, Dalia; Cekanauskaite, Agne; Valanciene, Dovile; Majauskiene, Daiva; Skurvydas, Albertas
2018-06-01
This study aimed to explore the effect of a prolonged speed-accuracy motor task on indicators of psychological, cognitive, psychomotor and motor function. Ten young men aged 21.1 ± 1.0 years performed a fast- and accurate-reaching movement task and a control task. Both tasks were performed for 2 h. Despite decreased motivation and increased perception of effort, as well as a subjective feeling of fatigue, speed-accuracy motor task performance improved throughout the period of task execution. After the motor task, increased working memory function and prefrontal cortex oxygenation at rest and during conflict detection, and decreased efficiency of incorrect response inhibition and visuomotor tracking, were observed. The speed-accuracy motor task increased the amplitude of motor-evoked potentials, while grip strength was not affected. These findings demonstrate that, to sustain performance of a 2-h speed-accuracy task under conditions of self-reported fatigue, task-relevant functions are maintained or even improved, whereas less critical functions are impaired.
Analysis of Movement, Orientation and Rotation-Based Sensing for Phone Placement Recognition
Durmaz Incel, Ozlem
2015-01-01
Phone placement, i.e., where the phone is carried/stored, is an important source of information for context-aware applications. Extracting information from the integrated smart phone sensors, such as motion, light and proximity, is a common technique for phone placement detection. In this paper, the efficiency of an accelerometer-only solution is explored, and it is investigated whether the phone position can be detected with high accuracy by analyzing movement, orientation and rotation changes. The impact of these changes on performance is analyzed both individually and in combination to explore which features are more efficient, whether they should be fused and, if yes, how they should be fused. Using three different datasets, collected from 35 people in eight different positions, the performance of different classification algorithms is explored. It is shown that while utilizing only motion information can achieve accuracies around 70%, this ratio increases up to 85% when information from orientation and rotation changes is also utilized. The performance of the accelerometer-only solution is compared to solutions where linear acceleration, gyroscope and magnetic field sensors are used, and it is shown that the accelerometer-only solution performs as well as those utilizing other sensing information. Hence, it is not necessary to use extra sensing information, which may increase battery power consumption. Additionally, the impact of the performed activities on position recognition is explored, and it is shown that the accelerometer-only solution can achieve 80% recognition accuracy with stationary activities, where movement data are very limited. Finally, other phone placement problems, such as in-pocket and on-body detection, are also investigated, and higher accuracies, ranging from 88% to 93%, are reported with an accelerometer-only solution.
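As a rough illustration of the kind of pipeline the abstract describes, the sketch below derives movement, orientation, and rotation features from a single tri-axial accelerometer window and feeds them to an off-the-shelf classifier. The window format, the specific features, and the random forest are assumptions for illustration, not the paper's exact setup.

```python
# Hedged sketch: accelerometer-only phone-placement features + classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc):
    """acc: (n_samples, 3) array holding one window of tri-axial accelerometer data."""
    mag = np.linalg.norm(acc, axis=1)
    movement = [mag.mean(), mag.std()]                  # movement intensity
    g = acc.mean(axis=0)
    orientation = list(g / (np.linalg.norm(g) + 1e-9))  # dominant device orientation
    tilt = np.arccos(np.clip(acc[:, 2] / (mag + 1e-9), -1.0, 1.0))
    rotation = [np.abs(np.diff(tilt)).mean()]           # rate of orientation change
    return movement + orientation + rotation

# windows: list of (n_samples, 3) arrays; labels: placements such as "pocket", "bag"
# X = np.array([window_features(w) for w in windows])
# clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
```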
Investigations of fluid-strain interaction using Plate Boundary Observatory borehole data
NASA Astrophysics Data System (ADS)
Boyd, Jeffrey Michael
Software has a great impact on the energy efficiency of any computing system--it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores ways in which software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the log-likelihood ratio test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition. We discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can yield great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise." We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform. We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as the FFT or DCT does not compress the signal and uses more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is inversely proportional to grouping size, so it is reduced as grouping size goes up. For features whose size depends on the grouping size, such as the FFT, energy increases with the logarithm of grouping size, so energy consumption increases slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption, and that the energy consumed by the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
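The log-likelihood ratio transition test lends itself to a compact illustration. The sketch below tests whether two adjacent feature windows are better explained by two Gaussians than by one; the Gaussian window model and the thresholding policy are assumptions for illustration, not the thesis's exact formulation.

```python
# Hedged sketch: log-likelihood ratio test for an activity transition.
import numpy as np

def gauss_loglik(x, mu, var):
    """Log likelihood of samples x under a Gaussian N(mu, var)."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var)

def llr_transition(w1, w2, eps=1e-6):
    """LLR that windows w1 and w2 come from two Gaussians rather than one."""
    joint = np.concatenate([w1, w2])
    l_one = gauss_loglik(joint, joint.mean(), joint.var() + eps)
    l_two = (gauss_loglik(w1, w1.mean(), w1.var() + eps)
             + gauss_loglik(w2, w2.mean(), w2.var() + eps))
    return l_two - l_one   # large values suggest a transition between windows

# A transition is flagged when the LLR exceeds a tuned threshold; only then is
# the more expensive activity classifier re-run, saving energy between changes.
```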
Stroeymeyt, Nathalie; Giurfa, Martin; Franks, Nigel R
2010-09-29
Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of individuals' previous experience and/or knowledge on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. Colonies allowed to explore a high-quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than naïve emigrating colonies. This resulted in increased speed in single-choice emigrations and higher colony cohesion in binary-choice emigrations. Additionally, colonies allowed to explore both high- and low-quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than naïve emigrating colonies. These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to improve simultaneously all aspects of the decision-making process--i.e. speed, accuracy and cohesion--and partly circumvent the speed-accuracy trade-off classically observed during emigrations. These findings should be taken into account in future studies on speed-accuracy trade-offs.
Mind the gap: Increased inter-letter spacing as a means of improving reading performance.
Dotan, Shahar; Katzir, Tami
2018-06-05
The effects of text display, specifically within-word spacing, on children's reading at different developmental levels have barely been investigated. This study explored the influence of manipulating inter-letter spacing on the reading performance (accuracy and rate) of beginner Hebrew readers compared with older readers, and of low-achieving readers compared with age-matched high-achieving readers. A computer-based isolated word reading task was performed by 132 first and third graders. Words were displayed under two spacing conditions: standard spacing (100%) and increased spacing (150%). Words were balanced for length and frequency across conditions. Results indicated that increased spacing contributed to reading accuracy without affecting reading rate. Interestingly, all first graders benefitted from the spaced condition. This effect was found only in long words but not in short words. Among third graders, only low-achieving readers gained in accuracy from the spaced condition. The theoretical and clinical implications of the findings are discussed.
Exploring a Three-Level Model of Calibration Accuracy
ERIC Educational Resources Information Center
Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.
2014-01-01
We compared 5 different statistics (i.e., G index, gamma, d', sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy, in order to examine the relationship among them and to explore whether one statistic provided a best-fitting general measure of accuracy. College…
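Since the five statistics are central to the comparison, the sketch below computes each from an invented 2x2 table of judged-correct by actually-correct counts, using the standard definitions (G index as chance-corrected agreement, gamma reducing to Yule's Q for a 2x2 table, d' from signal detection theory). It is illustrative only, not the authors' code.

```python
# Hedged sketch: five accuracy statistics from a 2x2 calibration table.
from scipy.stats import norm

def calibration_stats(hits, misses, false_alarms, correct_rejections):
    n = hits + misses + false_alarms + correct_rejections
    sensitivity = hits / (hits + misses)
    specificity = correct_rejections / (correct_rejections + false_alarms)
    g_index = 2.0 * (hits + correct_rejections) / n - 1.0
    gamma = ((hits * correct_rejections - misses * false_alarms)
             / (hits * correct_rejections + misses * false_alarms))
    d_prime = norm.ppf(sensitivity) - norm.ppf(1.0 - specificity)
    return dict(sensitivity=sensitivity, specificity=specificity,
                g_index=g_index, gamma=gamma, d_prime=d_prime)

print(calibration_stats(40, 10, 15, 35))  # invented counts for illustration
```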
Zavala, Mary Wassel; Yule, Arthur; Kwan, Lorna; Lambrechts, Sylvia; Maliski, Sally L; Litwin, Mark S
2016-11-01
To examine the accuracy of patient-reported prostate-specific antigen (PSA) levels among indigent, uninsured men in a state-funded prostate cancer treatment program that provides case management, care coordination, and health education. This program evaluation included 114 men with matched self- and lab-reported PSA levels at program enrollment and at another time point within 18 months. Self- and lab-reported PSA levels were abstracted to classify each self-report as "accurate" or "inaccurate" and to evaluate accuracy change over time, before and after nursing interventions. Chi-square tests compared patients with accurate versus inaccurate PSA values. Nonlinear multivariate analyses explored trends in self-reported accuracy over time. Program enrollees receive prostate cancer education from a Nurse Case Manager (NCM), including the significance of PSA levels. Men self-report PSA results to their NCM following lab draws and appointments. The NCM provides ongoing education about PSA levels. Of the sample, 46% (n = 53) accurately reported PSA levels. Accuracy of PSA self-reports improved with increasing time since program enrollment. Compared with men at public facilities, those treated at private facilities showed increasing accuracy in self-reported PSA (p = .038). A targeted nursing intervention may increase specific knowledge of PSA levels. Additionally, the provider/treatment setting significantly impacts a patient's disease education and knowledge.
Mansour, Jamal K; Beaudry, Jennifer L; Bertrand, Michelle I; Kalmet, Natalie; Melsom, Elisabeth I; Lindsay, Roderick C L
2012-12-01
Prior research indicates that disguise negatively affects lineup identifications, but the mechanisms by which disguise works have not been explored, and different disguises have not been compared. In two experiments (Ns = 87 and 91) we manipulated degree of coverage by two different types of disguise: a stocking mask or sunglasses and toque (i.e., knitted hat). Participants viewed mock-crime videos followed by simultaneous or sequential lineups. Disguise and lineup type did not interact. In support of the view that disguise prevents encoding, identification accuracy generally decreased with degree of disguise. For the stocking disguise, however, full and 2/3 coverage led to approximately the same rate of correct identifications--which suggests that disrupting encoding of specific features may be as detrimental as disrupting a whole face. Accuracy was most affected by sunglasses and we discuss the role metacognitions may have played. Lineup selections decreased more slowly than accuracy as coverage by disguise increased, indicating witnesses are insensitive to the effect of encoding conditions on accuracy. We also explored the impact of disguise and lineup type on witnesses' confidence in their lineup decisions, though the results were not straightforward.
Emotion recognition from multichannel EEG signals using K-nearest neighbor classification.
Li, Mi; Xu, Hongpei; Liu, Xingwang; Lu, Shengfu
2018-04-27
Many studies have addressed emotion recognition based on multi-channel electroencephalogram (EEG) signals. This paper explores how the frequency band and the number of channels influence the accuracy of EEG-based emotion recognition. We classified emotional states in the valence and arousal dimensions using different combinations of EEG channels. Firstly, the DEAP default preprocessed data were normalized. Next, EEG signals were divided into four frequency bands using the discrete wavelet transform, and entropy and energy were calculated as features for a K-nearest neighbor classifier. The classification accuracies using 10, 14, 18 and 32 EEG channels based on the gamma frequency band were 89.54%, 92.28%, 93.72% and 95.70% in the valence dimension and 89.81%, 92.24%, 93.69% and 95.69% in the arousal dimension. As the number of channels increases, the classification accuracy of emotional states also increases; the classification accuracy of the gamma frequency band is greater than that of the beta frequency band, followed by the alpha and theta frequency bands. This paper provides a reference for choosing frequency bands and channels for emotion recognition based on EEG.
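A minimal sketch of the described pipeline follows, assuming the 128 Hz preprocessed DEAP signals, a db4 wavelet, and k = 5 for the classifier; all three are illustrative assumptions rather than the paper's stated choices.

```python
# Hedged sketch: DWT band features + K-nearest neighbor emotion classifier.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def band_features(channel):
    """Energy and entropy per sub-band from a 4-level DWT of one EEG channel."""
    coeffs = pywt.wavedec(channel, 'db4', level=4)   # [A4, D4, D3, D2, D1]
    feats = []
    for c in coeffs[1:]:                             # D4..D1 span roughly theta..gamma at 128 Hz
        energy = np.sum(c ** 2)
        p = c ** 2 / (energy + 1e-12)
        entropy = -np.sum(p * np.log2(p + 1e-12))
        feats += [energy, entropy]
    return feats

# X: (n_trials, n_channels, n_samples) normalized EEG; y: valence or arousal labels
# feats = np.array([[f for ch in trial for f in band_features(ch)] for trial in X])
# clf = KNeighborsClassifier(n_neighbors=5).fit(feats, y)
```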
Reference-based phasing using the Haplotype Reference Consortium panel.
Loh, Po-Ru; Danecek, Petr; Palamara, Pier Francesco; Fuchsberger, Christian; A Reshef, Yakir; K Finucane, Hilary; Schoenherr, Sebastian; Forer, Lukas; McCarthy, Shane; Abecasis, Goncalo R; Durbin, Richard; L Price, Alkes
2016-11-01
Haplotype phasing is a fundamental problem in medical and population genetics. Phasing is generally performed via statistical phasing in a genotyped cohort, an approach that can yield high accuracy in very large cohorts but attains lower accuracy in smaller cohorts. Here we instead explore the paradigm of reference-based phasing. We introduce a new phasing algorithm, Eagle2, that attains high accuracy across a broad range of cohort sizes by efficiently leveraging information from large external reference panels (such as the Haplotype Reference Consortium; HRC) using a new data structure based on the positional Burrows-Wheeler transform. We demonstrate that Eagle2 attains a ∼20× speedup and ∼10% increase in accuracy compared to reference-based phasing using SHAPEIT2. On European-ancestry samples, Eagle2 with the HRC panel achieves >2× the accuracy of 1000 Genomes-based phasing. Eagle2 is open source and freely available for HRC-based phasing via the Sanger Imputation Service and the Michigan Imputation Server.
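The positional Burrows-Wheeler transform the abstract mentions has a compact core. The sketch below is the textbook PBWT prefix-array update (Durbin, 2014), shown only to illustrate the data structure Eagle2 leverages; it is not Eagle2's implementation.

```python
# Hedged sketch: PBWT positional prefix arrays over binary haplotypes.
def pbwt_prefix_arrays(haps):
    """haps: M binary haplotypes (equal-length lists) over N sites.
    Yields, for each site k, haplotype indices sorted by reversed prefixes
    ending at site k, so haplotypes sharing long matches become adjacent."""
    M, N = len(haps), len(haps[0])
    a = list(range(M))                    # initial ordering
    for k in range(N):
        zeros = [i for i in a if haps[i][k] == 0]
        ones = [i for i in a if haps[i][k] == 1]
        a = zeros + ones                  # stable sort on the allele at site k
        yield a

# for order in pbwt_prefix_arrays([[0, 1, 0], [0, 1, 1], [1, 0, 1]]):
#     print(order)
```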
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies.
David, Maria Pamela C; Concepcion, Gisela P; Padlan, Eduardo A
2010-02-08
All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprised of 103 amyloidogenic (AM) and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study is limited, and consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general.
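To make the evaluation concrete, here is a hedged sketch of a naive Bayesian classifier scored with leave-one-out cross validation; the amino acid composition featurization and the multinomial model are assumptions about the setup, not the authors' code.

```python
# Hedged sketch: composition features + naive Bayes with LOO cross validation.
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino acid count vector for one sequence string."""
    return [seq.count(aa) for aa in AMINO_ACIDS]

# seqs: list of immunoglobulin sequences; y: 1 = amyloidogenic, 0 = non-amyloidogenic
# X = np.array([composition(s) for s in seqs])
# loo_accuracy = cross_val_score(MultinomialNB(), X, y, cv=LeaveOneOut()).mean()
```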
Predictors of nutrition information comprehension in adulthood.
Miller, Lisa M Soederberg; Gibson, Tanja N; Applegate, Elizabeth A
2010-07-01
The goal of the present study was to examine relationships among several predictors of nutrition comprehension. We were particularly interested in exploring whether nutrition knowledge or motivation moderated the effects of attention on comprehension across a wide age range of adults. Ninety-three participants, ages 18-80, completed measures of nutrition knowledge and motivation and then read nutrition information (from which attention allocation was derived) and answered comprehension questions. In general, predictor variables were highly intercorrelated. However, knowledge, but not motivation, had direct effects on comprehension accuracy. In contrast, motivation influenced attention, which in turn influenced accuracy. Results also showed that comprehension accuracy decreased, and knowledge increased, with age. When knowledge was statistically controlled, age declines in comprehension increased. Knowledge is an important predictor of nutrition information comprehension, and its role increases in later life. Motivation is also important; however, its effects on comprehension differ from those of knowledge. Health educators and clinicians should consider cognitive skills such as knowledge, as well as the motivation and age of patients, when deciding how best to convey health information. The increased role of knowledge among older adults suggests that lifelong educational efforts may have important payoffs in later life.
Exploration of the Components of Children's Reading Comprehension Using Rauding Theory.
ERIC Educational Resources Information Center
Rupley, William H.; And Others
A study explored an application of rauding theory to the developmental components that contribute to elementary-age children's reading comprehension. The relationships among cognitive power, auditory accuracy level, pronunciation (word recognition) level, rauding (comprehension) accuracy level, rauding rate (reading rate) level, and rauding…
Masdrakis, Vasilios G; Legaki, Emilia-Maria; Vaidakis, Nikolaos; Ploumpidis, Dimitrios; Soldatos, Constantin R; Papageorgiou, Charalambos; Papadimitriou, George N; Oulis, Panagiotis
2015-07-01
Increased heartbeat perception accuracy (HBP-accuracy) may contribute to the pathogenesis of panic disorder (PD) without or with agoraphobia (PDA). Extant research suggests that HBP-accuracy is a rather stable individual characteristic, moreover predictive of worse long-term outcome in PD/PDA patients. However, it remains unexplored whether HBP-accuracy adversely affects patients' short-term outcome after structured cognitive behaviour therapy (CBT) for PD/PDA. We aimed to explore the potential association between HBP-accuracy and the short-term outcome of a structured brief CBT for the acute treatment of PDA. We assessed baseline HBP-accuracy using the "mental tracking" paradigm in 25 consecutive medication-free, CBT-naive PDA patients. Patients then underwent a structured, protocol-based, 8-session CBT delivered by the same therapist. Outcome measures included the number of panic attacks during the past week, the Agoraphobic Cognitions Questionnaire (ACQ), and the Mobility Inventory-Alone subscale (MI-alone). No association emerged between baseline HBP-accuracy and posttreatment changes in the number of panic attacks. Moreover, higher baseline HBP-accuracy was associated with significantly larger reductions in the scores of the ACQ and the MI-alone scales. Our results suggest that in PDA patients undergoing structured brief CBT for the acute treatment of their symptoms, higher baseline HBP-accuracy is not associated with worse short-term outcome concerning panic attacks. Furthermore, higher baseline HBP-accuracy may be associated with enhanced therapeutic gains in agoraphobic cognitions and behaviours.
Lifting degeneracy in holographic characterization of colloidal particles using multi-color imaging.
Ruffner, David B; Cheong, Fook Chiong; Blusewicz, Jaroslaw M; Philips, Laura A
2018-05-14
Micrometer-sized particles can be accurately characterized using holographic video microscopy and Lorenz-Mie fitting. In this work, we explore some of the limitations of holographic microscopy and introduce methods for increasing the accuracy of this technique through the use of multiple wavelengths of laser illumination. Holograms of large, high-index particles have nearly degenerate solutions that can confuse standard fitting algorithms. Using a model based on diffraction from a phase disk, we explain the source of these degeneracies. We introduce multiple-color holography as an effective approach to distinguish between degenerate solutions and provide improved accuracy for the holographic analysis of sub-visible colloidal particles.
Response Latency as a Predictor of the Accuracy of Children's Reports
ERIC Educational Resources Information Center
Ackerman, Rakefet; Koriat, Asher
2011-01-01
Researchers have explored various diagnostic cues to the accuracy of information provided by child eyewitnesses. Previous studies indicated that children's confidence in their reports predicts the relative accuracy of these reports, and that the confidence-accuracy relationship generally improves as children grow older. In this study, we examined…
Space Launch System Block 1B Preliminary Navigation System Design
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Park, Thomas; Anzalone, Evan; Smith, Austin; Strickland, Dennis; Patrick, Sean
2018-01-01
NASA is currently building the Space Launch System (SLS) Block 1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. In parallel, NASA is also designing the Block 1B launch vehicle. The Block 1B vehicle is an evolution of the Block 1 vehicle and extends the capability of the NASA launch vehicle. This evolution replaces the Interim Cryogenic Propulsion Stage (ICPS) with the Exploration Upper Stage (EUS). As the vehicle evolves to provide greater lift capability, increased robustness for manned missions, and the capability to execute more demanding missions, so must the SLS Integrated Navigation System evolve to support those missions. This paper describes the preliminary navigation system design for the SLS Block 1B vehicle. The evolution of the navigation hardware and algorithms from an inertial-only navigation system for Block 1 ascent flight to a tightly coupled GPS-aided inertial navigation system for Block 1B is described. The Block 1 GN&C system has been designed to meet a LEO insertion target with a specified accuracy. The Block 1B vehicle navigation system is designed to support the Block 1 LEO target accuracy as well as trans-lunar or trans-planetary injection accuracy. Additionally, the Block 1B vehicle is designed to support human exploration and thus is designed to minimize the probability of Loss of Crew (LOC) through high-quality inertial instruments and robust algorithm design, including Fault Detection, Isolation, and Recovery (FDIR) logic.
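The shift from inertial-only to GPS-aided navigation can be illustrated with a toy filter. The sketch below propagates a one-axis [position, velocity] state with accelerometer input and corrects it with GPS position fixes; a loosely coupled position-fix update stands in for the tightly coupled pseudorange processing the paper describes, and all rates and noise values are invented for illustration, not SLS design parameters.

```python
# Hedged sketch: 1-axis GPS-aided inertial navigation via a Kalman filter.
import numpy as np

dt = 0.02                                # 50 Hz IMU propagation step (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition for [position, velocity]
B = np.array([[0.5 * dt ** 2], [dt]])    # accelerometer input mapping
H = np.array([[1.0, 0.0]])               # GPS measures position only
Q = np.diag([1e-4, 1e-3])                # process noise covering IMU errors
R = np.array([[25.0]])                   # GPS position noise variance, m^2

x, P = np.zeros((2, 1)), np.eye(2)

def propagate(accel):
    """Inertial step: integrate one accelerometer sample."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def gps_update(z):
    """Measurement step: fuse one GPS position fix z (meters)."""
    global x, P
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
```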
Toth, Jeffrey P.; Daniels, Karen A.; Solinger, Lisa A.
2011-01-01
How do aging and prior knowledge affect memory and metamemory? We explored this question in the context of a dual-process approach to Judgments of Learning (JOLs) which require people to predict their ability to remember information at a later time. Young and older adults (n's = 36, mean ages = 20.2 & 73.1) studied the names of actors that were famous in the 1950s or 1990s, providing a JOL for each. Recognition memory for studied and unstudied actors was then assessed using a Recollect/Know/No-Memory (R/K/N) judgment task. Results showed that prior knowledge increased recollection in both age groups such that older adults recollected significantly more 1950s actors than younger adults. Also, for both age groups and both decades, actors judged R at test garnered significantly higher JOLs at study than actors judged K or N. However, while the young showed benefits of prior knowledge on relative JOL accuracy, older adults did not, showing lower levels of JOL accuracy for 1950s actors despite having higher recollection for, and knowledge about, those actors. Overall, the data suggest that prior knowledge can be a double-edged sword, increasing the availability of details that can support later recollection, but also increasing non-diagnostic feelings of familiarity that can reduce the accuracy of memory predictions.
SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.
Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru
2014-01-01
Recently, the support vector machine (SVM) has shown excellent performance on classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.
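A hedged sketch of the two-stage procedure follows: SVM-RFE ranks and keeps a feature subset, then C and gamma are tuned. An exhaustive grid search stands in here for the Taguchi orthogonal-array design, and the parameter grids are illustrative.

```python
# Hedged sketch: SVM-RFE feature selection followed by C/gamma tuning.
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV

# X, y: feature matrix and class labels (e.g., the UCI Dermatology dataset)
rfe = RFE(SVC(kernel="linear"), n_features_to_select=10)  # rank features, keep top 10
# X_sel = rfe.fit_transform(X, y)

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
    cv=5,
)
# grid.fit(X_sel, y)
# print(grid.best_params_, grid.best_score_)
```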
Stereoacuity versus fixation disparity as indicators for vergence accuracy under prismatic stress.
Kromeier, Miriam; Schmitt, Christina; Bach, Michael; Kommerell, Guntram
2003-01-01
Fixation disparity has been widely used as an indicator for vergence accuracy under prismatic stress. However, the targets used for measuring fixation disparity contain artificial features in that the fusional contours are thinned out. We considered that stereoacuity might be a preferable indicator of vergence accuracy, as stereo targets represent natural viewing conditions. We measured fixation disparity with a computer adaptation of Ogle's test and stereoacuity with the automatic Freiburg Stereoacuity Test. Eight subjects were examined under increasing base-in and base-out prisms. The response of fixation disparity to prismatic stress revealed the curve types described by Ogle and Crone. All eight subjects reached a stereoscopic threshold below 10 arcsec. In seven subjects the stereoscopic threshold increased before double vision occurred. Our data suggest that stereoacuity is suitable to assess the range of binocular vision under prismatic stress. As stereoacuity bears the advantage over fixation disparity in that it can be measured without introducing artificial viewing conditions, we suggest exploring whether stereoacuity under prismatic stress would be more meaningful in the work-up of asthenopic patients than is fixation disparity.
Exploring cognitive integration of basic science and its effect on diagnostic reasoning in novices.
Lisk, Kristina; Agur, Anne M R; Woods, Nicole N
2016-06-01
Integration of basic and clinical science knowledge is increasingly being recognized as important for practice in the health professions. The concept of 'cognitive integration' places emphasis on the value of basic science in providing critical connections to clinical signs and symptoms while accounting for the fact that clinicians may not spontaneously articulate their use of basic science knowledge in clinical reasoning. In this study we used a diagnostic justification test to explore the impact of integrated basic science instruction on novices' diagnostic reasoning process. Participants were allocated to an integrated basic science or clinical science training group. The integrated basic science group was taught the clinical features along with the underlying causal mechanisms of four musculoskeletal pathologies while the clinical science group was taught only the clinical features. Participants completed a diagnostic accuracy test immediately after initial learning, and one week later a diagnostic accuracy and justification test. The results showed that novices who learned the integrated causal mechanisms had superior diagnostic accuracy and better understanding of the relative importance of key clinical features. These findings further our understanding of cognitive integration by providing evidence of the specific changes in clinical reasoning when basic and clinical sciences are integrated during learning.
When the display matters: A multifaceted perspective on 3D geovisualizations
NASA Astrophysics Data System (ADS)
Juřík, Vojtěch; Herman, Lukáš; Šašinka, Čeněk; Stachoň, Zdeněk; Chmelík, Jiří
2017-04-01
This study explores the influence of stereoscopic (real) 3D and monoscopic (pseudo) 3D visualization on the human ability to reckon altitude information in non-interactive and interactive 3D geovisualizations. A two-phased experiment was carried out to compare the performance of two groups of participants, one of them using the real 3D and the other one the pseudo 3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in the processing of geographical data, were tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, where static 3D visual displays were presented; the second phase was designed as interactive and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands and the amount of the participant's motor activity performed during interaction with the geovisualization. The interface was created using a Motion Capture system, Wii Remote Controller, widescreen projection and the passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between groups flattened out due to the possibility of interaction, with no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.
Wedi, Nils P
2014-06-28
The steady path of doubling the global horizontal resolution approximately every 8 years in numerical weather prediction (NWP) at the European Centre for Medium-Range Weather Forecasts may be substantially altered with emerging novel computing architectures. It coincides with the need to appropriately address and determine forecast uncertainty with increasing resolution, in particular when convective-scale motions start to be resolved. Blunt increases in model resolution will quickly become unaffordable and may not lead to improved NWP forecasts. Consequently, there is a need to adjust proven numerical techniques accordingly. An informed decision on the modelling strategy for harnessing exascale, massively parallel computing power thus also requires a deeper understanding of the sensitivity to uncertainty--for each part of the model--and ultimately a deeper understanding of multi-scale interactions in the atmosphere and their numerical realization in ultra-high-resolution NWP and climate simulations. This paper explores opportunities for substantial increases in forecast efficiency by judicious adjustment of the formal accuracy or relative resolution in spectral and physical space. One path is to reduce the formal accuracy with which the spectral transforms are computed. The other explores the importance of the ratio of horizontal resolution in gridpoint space to wavenumbers in spectral space. This is relevant both for high-resolution simulations and for ensemble-based uncertainty estimation.
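The first path, reducing the formal accuracy of the spectral transforms, can be probed with a toy round-trip experiment comparing single- and double-precision FFTs; this illustrates only rounding error, not the paper's actual transform adjustments.

```python
# Hedged sketch: precision loss from single- vs double-precision FFT round trips.
import numpy as np
from scipy import fft

rng = np.random.default_rng(0)
field = rng.standard_normal(2 ** 20)          # stand-in for a model field

roundtrip64 = fft.ifft(fft.fft(field.astype(np.complex128))).real
roundtrip32 = fft.ifft(fft.fft(field.astype(np.complex64))).real.astype(np.float64)

print("float64 round-trip max error:", np.abs(roundtrip64 - field).max())
print("float32 round-trip max error:", np.abs(roundtrip32 - field).max())
```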
Relationship between accuracy and complexity when learning underarm precision throwing.
Valle, Maria Stella; Lombardo, Luciano; Cioni, Matteo; Casabona, Antonino
2018-06-12
Learning precision ball throwing has mostly been studied to explore the early rapid improvement of accuracy, with little attention to possible adaptive processes occurring later, when the rate of improvement is reduced. Here, we tried to demonstrate that the strategy for selecting angle, speed and height at ball release continues to be managed during the learning periods that follow performance stabilization. To this aim, we used a multivariate linear model with angle, speed and height as predictors of changes in accuracy. Participants performed underarm throws of a tennis ball to hit a target on the floor 3.42 m away. Two training sessions (S1, S2) and one retention test were executed. Performance accuracy increased over S1 and stabilized during S2, with a rate of change along the throwing axis slower than along the orthogonal axis. However, both axes contributed to the performance changes over the learning and consolidation time. A stable relationship between accuracy and the release parameters was observed only during S2, with a good fraction of the performance variance explained by the combination of speed and height. All the variations were maintained during the retention test. Overall, accuracy improvements and the reduction in throwing complexity at ball release followed separate timings over the course of learning and consolidation.
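The abstract's multivariate linear model can be sketched directly: release angle, speed, and height as predictors of per-throw error. The simulated data and coefficient values below are invented purely to make the example runnable.

```python
# Hedged sketch: release parameters as predictors of throwing error (OLS).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120                                   # simulated throws
angle = rng.normal(45.0, 5.0, n)          # release angle, degrees (invented)
speed = rng.normal(4.0, 0.3, n)           # release speed, m/s (invented)
height = rng.normal(1.0, 0.1, n)          # release height, m (invented)
error = (0.05 * np.abs(angle - 45.0) + 0.4 * np.abs(speed - 4.0)
         + rng.normal(0.0, 0.05, n))      # radial error to the target

X = sm.add_constant(np.column_stack([angle, speed, height]))
fit = sm.OLS(error, X).fit()
print(fit.params, fit.rsquared)           # variance in accuracy explained by release
```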
David M. Bell; Matthew J. Gregory; Heather M. Roberts; Raymond J. Davis; Janet L. Ohmann
2015-01-01
Accuracy assessments of remote sensing products are necessary for identifying map strengths and weaknesses in scientific and management applications. However, not all accuracy assessments are created equal. Motivated by a recent study published in Forest Ecology and Management (Volume 342, pages 8-20), we explored the potential limitations of accuracy assessments...
Using internet search engines and library catalogs to locate toxicology information.
Wukovitz, L D
2001-01-12
The increasing importance of the Internet demands that toxicologists become acquainted with its resources. To find information, researchers must be able to effectively use Internet search engines, directories, subject-oriented websites, and library catalogs. The article will explain these resources, explore their benefits and weaknesses, and identify skills that help the researcher to improve search results and critically evaluate sources for their relevancy, validity, accuracy, and timeliness.
Task-Based Variability in Children's Singing Accuracy
ERIC Educational Resources Information Center
Nichols, Bryan E.
2013-01-01
The purpose of this study was to explore task-based variability in children's singing accuracy performance. The research questions were: Does children's singing accuracy vary based on the nature of the singing assessment employed? Is there a hierarchy of difficulty and discrimination ability among singing assessment tasks? What is the…
Concept Mapping Improves Metacomprehension Accuracy among 7th Graders
ERIC Educational Resources Information Center
Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.
2012-01-01
Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…
Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polly, B.
2011-09-01
This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.
Stereotype Accuracy: Toward Appreciating Group Differences.
ERIC Educational Resources Information Center
Lee, Yueh-Ting, Ed.; And Others
The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
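A linear mixed model with participant random intercepts gives the flavor of the GLMM analysis described; the simulated columns and the linear (rather than generalized) link are assumptions made to keep the sketch short and runnable.

```python
# Hedged sketch: random-intercept model of reach-estimation accuracy.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_children, n_trials = 30, 10
motor = np.repeat(rng.normal(50.0, 10.0, n_children), n_trials)   # proficiency score
ri = np.repeat(rng.normal(0.0, 1.0, n_children), n_trials)        # child intercepts
accuracy = 0.02 * motor + ri + rng.normal(0.0, 0.5, n_children * n_trials)

df = pd.DataFrame({
    "accuracy": accuracy,
    "motor": motor,
    "participant": np.repeat(np.arange(n_children), n_trials),
})
fit = smf.mixedlm("accuracy ~ motor", df, groups=df["participant"]).fit()
print(fit.summary())   # fixed effect of proficiency plus random-intercept variance
```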
The Accuracy of Gender Stereotypes Regarding Occupations.
ERIC Educational Resources Information Center
Beyer, Sylvia; Finnegan, Andrea
Given the salience of biological sex, it is not surprising that gender stereotypes are pervasive. To explore the prevalence of such stereotypes, the accuracy of gender stereotyping regarding occupations is presented in this paper. The paper opens with an overview of gender stereotype measures that use self-perceptions as benchmarks of accuracy,…
Orbit Determination Issues for Libration Point Orbits
NASA Technical Reports Server (NTRS)
Beckman, Mark; Bauer, Frank (Technical Monitor)
2002-01-01
Libration point mission designers require knowledge of orbital accuracy for a variety of analyses including station keeping control strategies, transfer trajectory design, and formation and constellation control. Past publications have detailed orbit determination (OD) results from individual libration point missions. This paper collects both published and unpublished results from four previous libration point missions (ISEE-3 (International Sun-Earth Explorer-3), SOHO (Solar and Heliospheric Observatory), ACE (Advanced Composition Explorer) and MAP (Microwave Anisotropy Probe)) supported by Goddard Space Flight Center's Guidance, Navigation & Control Center. The results of those missions are presented along with OD issues specific to each mission. All past missions have been limited to ground-based tracking through NASA ground sites using standard range and Doppler measurement types. Advanced technology is enabling other OD options, including onboard navigation using attitude sensors and the use of the Very Long Baseline Interferometry (VLBI) measurement Delta Differenced One-Way Range (DDOR). Both options potentially enable missions to reduce coherent dedicated tracking passes while maintaining orbital accuracy. With the increased projected loading of the DSN (Deep Space Network), missions must find alternatives to the standard OD scenario.
Designing Delta-DOR acquisition strategies to determine highly elliptical earth orbits
NASA Technical Reports Server (NTRS)
Frauenholz, R. B.
1986-01-01
Delta-DOR acquisition strategies are designed for use in determining highly elliptical earth orbits. The requirements for a possible flight demonstration are evaluated for the Charge Composition Explorer spacecraft of the Active Magnetospheric Particle Tracer Explorers. The best-performing strategy uses data spanning the view periods of two orthogonal baselines near the same orbit periapse. The rapidly changing viewing geometry yields both angular position and velocity information, but each observation may require a different reference quasar. The Delta-DOR data noise is highly dependent on acquisition geometry, varying several orders of magnitude across the baseline view periods. Strategies are selected to minimize the measurement noise predicted by a theoretical model. Although the CCE transponder is limited to S-band and a small spanned bandwidth, the addition of Delta-DOR to coherent Doppler and range improves the one-sigma apogee position accuracy by more than an order of magnitude. Additional Delta-DOR accuracy improvements possible using dual-frequency (S/X) calibration, increased spanned bandwidth, and water-vapor radiometry are presented for comparison. With these benefits, the residual Delta-DOR data noise is primarily due to quasar position uncertainties.
Diagnostic Accuracy of Tests for Polyuria in Lithium-Treated Patients.
Kinahan, James Conor; NiChorcorain, Aoife; Cunningham, Sean; Freyne, Aideen; Cooney, Colm; Barry, Siobhan; Kelly, Brendan D
2015-08-01
In lithium-treated patients, polyuria increases the risk of dehydration and lithium toxicity. If detected early, it is reversible. Despite its prevalence and associated morbidity in clinical practice, it remains underrecognized and therefore undertreated. The 24-hour urine collection is limited by its inconvenience and impracticality. This study explores the diagnostic accuracy of alternative tests such as questionnaires on subjective polyuria, polydipsia, and nocturia (dichotomous and ordinal responses), early morning urine sample osmolality (EMUO), and fluid intake record (FIR). This is a cross-sectional study of 179 lithium-treated patients attending a general adult and an old age psychiatry service. Participants completed the tests after completing an accurate 24-hour urine collection. The diagnostic accuracy of the individual tests was explored using the appropriate statistical techniques. Seventy-nine participants completed all of the tests. Polydipsia severity, EMUO, and FIR significantly differentiated the participants with polyuria (area under the receiver operating characteristic curve of 0.646, 0.760, and 0.846, respectively). Of the tests investigated, the FIR made the largest significant change in the probability that a patient experiences polyuria (<2000 mL/24 hours: interval likelihood ratio 0.18; >3500 mL/24 hours: interval likelihood ratio 14). Symptomatic questioning, EMUO, and an FIR could be used in clinical practice to inform the prescriber of the probability that a lithium-treated patient is experiencing polyuria.
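To make the reported interval likelihood ratios concrete, here is a minimal Python sketch of the underlying arithmetic; the counts and pre-test probability are invented for illustration and are not the study's data.

```python
# Hypothetical sketch: interval likelihood ratio (LR) for a fluid intake record
# (FIR) as a test for polyuria. All counts below are invented for illustration.

def interval_likelihood_ratio(diseased_in_interval, diseased_total,
                              healthy_in_interval, healthy_total):
    """LR for a result interval: P(result | polyuria) / P(result | no polyuria)."""
    return (diseased_in_interval / diseased_total) / (healthy_in_interval / healthy_total)

# Example: of 20 patients with polyuria, 14 drank >3500 mL/24 h; of 59 without
# polyuria, 3 did (hypothetical numbers chosen to give an LR near 14).
lr_high = interval_likelihood_ratio(14, 20, 3, 59)
print(f"LR for FIR > 3500 mL/24 h: {lr_high:.1f}")

# Bayes' theorem in odds form: post-test odds = pre-test odds * LR.
pretest_prob = 0.25
pretest_odds = pretest_prob / (1 - pretest_prob)
posttest_odds = pretest_odds * lr_high
print(f"Post-test probability: {posttest_odds / (1 + posttest_odds):.2f}")
```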
Do recommender systems benefit users? a modeling approach
NASA Astrophysics Data System (ADS)
Yeung, Chi Ho
2016-04-01
Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
Wade, Ryckie G; Itte, Vinay; Rankine, James J; Ridgway, John P; Bourke, Grainne
2018-03-01
Identification of root avulsions is of critical importance in traumatic brachial plexus injuries because it alters the reconstruction and prognosis. Pre-operative magnetic resonance imaging is gaining popularity, but there are limited and conflicting data on its diagnostic accuracy for root avulsion. This cohort study describes consecutive patients requiring brachial plexus exploration following trauma between 2008 and 2016. The index test was magnetic resonance imaging at 1.5 Tesla and the reference test was operative exploration of the supraclavicular plexus. Complete data from 29 males were available. The diagnostic accuracy of magnetic resonance imaging for root avulsion(s) of C5-T1 was 79%. The diagnostic accuracy of a pseudomeningocoele as a surrogate marker of root avulsion(s) of C5-T1 was 68%. We conclude that pseudomeningocoeles were not a reliable sign of root avulsion and magnetic resonance imaging has modest diagnostic accuracy for root avulsions in the context of adult traumatic brachial plexus injuries. Level of evidence: III.
NASA Technical Reports Server (NTRS)
Gramling, C. J.; Long, A. C.; Lee, T.; Ottenstein, N. A.; Samii, M. V.
1991-01-01
A Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) is currently being developed by NASA to provide a high accuracy autonomous navigation capability for users of TDRSS and its successor, the Advanced TDRSS (ATDRSS). The fully autonomous user onboard navigation system will support orbit determination, time determination, and frequency determination, based on observation of a continuously available, unscheduled navigation beacon signal. A TONS experiment will be performed in conjunction with the Explorer Platform (EP) Extreme Ultraviolet Explorer (EUVE) mission to flight qualify TONS Block 1. An overview is presented of TONS and a preliminary analysis of the navigation accuracy anticipated for the TONS experiment. Descriptions of the TONS experiment and the associated navigation objectives, as well as a description of the onboard navigation algorithms, are provided. The accuracy of the selected algorithms is evaluated based on the processing of realistic simulated TDRSS one-way forward-link Doppler measurements. The analysis process is discussed and the associated navigation accuracy results are presented.
Creativity in gifted identification: increasing accuracy and diversity.
Luria, Sarah R; O'Brien, Rebecca L; Kaufman, James C
2016-08-01
Many federal definitions and popular theories of giftedness specify creativity as a core component. Nevertheless, states rely primarily on measures of intelligence for giftedness identification. As minority and culturally diverse students continue to be underrepresented in gifted programs, it is reasonable to ask if increasing the prominence of creativity in gifted identification may help increase balance and equity. In this paper, we explore both layperson and psychometric conceptions of bias and suggest that adding creativity measures to the identification process alleviates both perceptions and the presence of bias. We recognize, however, the logistic and measurement-related challenges to including creativity assessments. © 2016 New York Academy of Sciences.
Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin
2015-10-25
I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
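As a rough illustration of this kind of evaluation, the sketch below compresses a synthetic 1-D signal under a few wavelet configurations with PyWavelets and compares reconstruction error; the signal, retained-coefficient fraction, and configurations are stand-ins, not the study's turbulent-flow data or analysis routines.

```python
# Minimal sketch: lossy wavelet compression under different configurations,
# comparing a quantity (here RMSE) on original vs. reconstructed data.
import numpy as np
import pywt

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 4096)) + 0.3 * rng.standard_normal(4096)

for wavelet, level in [("haar", 4), ("db4", 4), ("bior4.4", 6)]:
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate([c.ravel() for c in coeffs])
    cutoff = np.quantile(np.abs(flat), 0.90)           # keep the largest ~10%
    thresholded = [pywt.threshold(c, cutoff, mode="hard") for c in coeffs]
    recon = pywt.waverec(thresholded, wavelet)[: len(signal)]
    rmse = np.sqrt(np.mean((signal - recon) ** 2))
    print(f"{wavelet:8s} level={level}: RMSE={rmse:.4f}")
```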
Ekins, Kylie; Morphet, Julia
2015-11-01
The Australasian Triage Scale aims to ensure that the triage category allocated, reflects the urgency with which the patient needs medical assistance. This is dependent on triage nurse accuracy in decision making. The Australasian Triage Scale also aims to facilitate triage decision consistency between individuals and organisations. Various studies have explored the accuracy and consistency of triage decisions throughout Australia, yet no studies have specifically focussed on triage decision making in rural health services. Further, no standard has been identified by which accuracy or consistency should be measured. Australian emergency departments are measured against a set of standard performance indicators, including time from triage to patient review, and patient length of stay. There are currently no performance indicators for triage consistency. An online questionnaire was developed to collect demographic data and measure triage accuracy and consistency. The questionnaire utilised previously validated triage scenarios.(1) Triage decision accuracy was measured, and consistency was compared by health site type using Fleiss' kappa. Forty-six triage nurses participated in this study. The accuracy of participants' triage decision-making decreased with each less urgent triage category. Post-graduate qualifications had no bearing on triage accuracy. There was no significant difference in the consistency of decision-making between paediatric and adult scenarios. Overall inter-rater agreement using Fleiss' kappa coefficient, was 0.4. This represents a fair-to-good level of inter-rater agreement. A standard definition of accuracy and consistency in triage nurse decision making is required. Inaccurate triage decisions can result in increased morbidity and mortality. It is recommended that emergency department performance indicator thresholds be utilised as a benchmark for national triage consistency. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scale. A localization algorithm based on a spring model (LASM) method is proposed to reduce the computational complexity, while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles to move from the randomly set positions to the original positions, and the node positions correspondingly. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale size. The time consumption has also been proven to remain almost constant, since the calculation steps are almost unrelated to the network scale size.
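The following is a minimal sketch of the spring-model idea described above, not the authors' implementation: a blind node is pulled toward positions consistent with measured ranges to neighbors via virtual Hookean springs and relaxed iteratively. Geometry, noise level, and gains are illustrative assumptions.

```python
# Sketch: spring-model localization of one blind node from noisy ranges to
# fixed anchors. Rest length of each virtual spring = measured range.
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_blind = np.array([4.0, 6.0])
ranges = np.linalg.norm(anchors - true_blind, axis=1)
ranges += 0.05 * rng.standard_normal(len(ranges))      # range-measurement noise

est = np.array([5.0, 5.0])                             # arbitrary initial guess
k, dt = 0.5, 0.2                                       # spring constant, step size
for _ in range(200):
    force = np.zeros(2)
    for anchor, r in zip(anchors, ranges):
        d = est - anchor
        dist = np.linalg.norm(d)
        # Hooke's law along the anchor-node direction; rest length = range.
        force += -k * (dist - r) * d / dist
    est += dt * force                                  # overdamped relaxation

print("estimated position:", est, "true:", true_blind)
```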
Forster, Marie-Therese; Hoecker, Alexander Claudius; Kang, Jun-Suk; Quick, Johanna; Seifert, Volker; Hattingen, Elke; Hilker, Rüdiger; Weise, Lutz Martin
2015-06-01
Tractography based on diffusion tensor imaging has become a popular tool for delineating white matter tracts for neurosurgical procedures. The objective was to explore whether navigated transcranial magnetic stimulation (nTMS) might increase the accuracy of fiber tracking. Tractography was performed according to both anatomic delineation of the motor cortex (n = 14) and nTMS results (n = 9). After implantation of the definitive electrode, stimulation via the electrode was performed, defining a stimulation threshold for eliciting motor evoked potentials of arm and leg muscles. This threshold was correlated with the shortest distance between the active electrode contact and both fiber tracks. Results were evaluated by correlation to motor evoked potential monitoring during deep brain stimulation, a surgical procedure causing hardly any brain shift. Distances to fiber tracks clearly correlated with motor evoked potential thresholds. Tracks based on nTMS had a higher predictive value than tracks based on anatomic motor cortex definition (P < .001 and P = .005, respectively). However, target site, hemisphere, and active electrode contact did not influence this correlation. The implementation of tractography based on nTMS increases the accuracy of fiber tracking. Moreover, this combination of methods has the potential to become a supplemental tool for guiding electrode implantation.
Nendaz, Mathieu R; Gut, Anne M; Perrier, Arnaud; Louis-Simonet, Martine; Blondon-Choa, Katherine; Herrmann, François R; Junod, Alain F; Vu, Nu V
2006-01-01
BACKGROUND: Clinical experience, features of the data collection process, or both, affect diagnostic accuracy, but their respective roles are unclear. OBJECTIVE AND DESIGN: Prospective, observational study to determine the respective contributions of clinical experience and data collection features to diagnostic accuracy. METHODS: Six internists, 6 second-year internal medicine residents, and 6 senior medical students worked up the same 7 cases with a standardized patient. Each encounter was audiotaped and immediately assessed by the subjects, who indicated the reasons underlying their data collection. We analyzed the encounters according to diagnostic accuracy, information collected, organ systems explored, diagnoses evaluated, and final decisions made, and we determined predictors of diagnostic accuracy by logistic regression models. RESULTS: Several features significantly predicted diagnostic accuracy after correction for clinical experience: early exploration of the correct diagnosis (odds ratio [OR] 24.35) or of relevant diagnostic hypotheses (OR 2.22) to frame clinical data collection, a larger number of diagnostic hypotheses evaluated (OR 1.08), and collection of relevant clinical data (OR 1.19). CONCLUSION: Some features of data collection and interpretation are related to diagnostic accuracy beyond clinical experience and should be explicitly included in clinical training and modeled by clinical teachers. Thoroughness in data collection should not be considered a privileged way to diagnostic success. PMID:17105525
Danner, Omar K; Hendren, Sandra; Santiago, Ethel; Nye, Brittany; Abraham, Prasad
2017-04-01
Enhancing the efficiency of diagnosis and treatment of severe sepsis by using physiologically-based, predictive analytical strategies has not been fully explored. We hypothesized that assessment of the heart-rate-to-systolic ratio significantly increases the timeliness and accuracy of sepsis prediction after emergency department (ED) presentation. We evaluated the records of 53,313 ED patients from a large, urban teaching hospital between January and June 2015. The HR-to-systolic ratio was compared to SIRS criteria for sepsis prediction. There were 884 patients with discharge diagnoses of sepsis, severe sepsis, and/or septic shock. Variations in three presenting variables, heart rate, systolic BP, and temperature, were determined to be primary early predictors of sepsis with a 74% (654/884) accuracy compared to 34% (304/884) using SIRS criteria (p < 0.0001) in confirmed septic patients. Physiologically-based predictive analytics improved the accuracy and expediency of sepsis identification via detection of variations in the HR-to-systolic ratio. This approach may lead to earlier sepsis workup and life-saving interventions. Copyright © 2017 Elsevier Inc. All rights reserved.
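The screening arithmetic reduces to a single ratio, sketched below in Python. The cutoff of 1.0 (heart rate exceeding systolic pressure, a "reverse shock index") is an assumption for illustration; the abstract does not report the threshold used.

```python
# Hypothetical sketch: flag patients whose heart-rate-to-systolic-BP ratio
# exceeds a threshold. Threshold of 1.0 is assumed, not taken from the paper.
def sepsis_flag(heart_rate_bpm: float, systolic_bp_mmhg: float,
                threshold: float = 1.0) -> bool:
    return heart_rate_bpm / systolic_bp_mmhg > threshold

print(sepsis_flag(118, 95))   # True  -> prioritize sepsis workup
print(sepsis_flag(80, 120))   # False
```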
ERIC Educational Resources Information Center
Perfect, Timothy J.; Weber, Nathan
2012-01-01
Explorations of memory accuracy control normally contrast forced-report with free-report performance across a set of items and show a trade-off between memory quantity and accuracy. However, this memory control framework has not been tested with lineup identifications that may involve rejection of all alternatives. A large-scale (N = 439) lineup…
Li, Yongkai; Yi, Ming; Zou, Xiufen
2014-01-01
To gain insights into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noises on cell fate are explored at the single cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indication of cell fates. The strong dependence between the impulse of Cln1/2 and cell fates is exhibited. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes the parallel shift of the separation ratio of Whi5P but that increasing extrinsic fluctuations leads to the mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noises around an approximate linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decreases. PMID:25042292
Accuracy of tracking forest machines with GPS
M.W. Veal; S.E. Taylor; T.P. McDonald; D.K. McLemore; M.R. Dunn
2001-01-01
This paper describes the results of a study that measured the accuracy of using GPS to track movement of forest machines. Two different commercially available GPS receivers (Trimble ProXR and GeoExplorer II) were used to track…
A Multimodal Communication Program for Aphasia during Inpatient Rehabilitation: A Case Study
Wallace, Sarah E.; Purdy, Mary; Skidmore, Elizabeth
2014-01-01
BACKGROUND: Communication is essential for successful rehabilitation, yet few aphasia treatments have been investigated during the acute stroke phase. Alternative modality use, including gesturing, writing, or drawing, has been shown to increase communicative effectiveness in people with chronic aphasia. Instruction in alternative modality use during acute stroke may increase patient communication and participation, therefore resulting in fewer adverse situations and improved rehabilitation outcomes. OBJECTIVE: The study purpose was to explore a multimodal communication program for aphasia (MCPA) implemented during acute stroke rehabilitation. MCPA aims to improve communication modality production, and to facilitate switching among modalities to resolve communication breakdowns. METHODS: Two adults with severe aphasia completed MCPA beginning at 2 and 3 weeks after onset of a single left-hemisphere stroke. Probes completed during each session allowed for evaluation of modality production and modality switching accuracy. RESULTS: Participants completed MCPA (10 and 14 treatment sessions, respectively), and their performance on probes suggested increased accuracy in the production of various alternate communication modalities. However, increased switching to an alternate modality was noted for only one participant. CONCLUSIONS: Further investigation of multimodal treatment during inpatient rehabilitation is warranted. In particular, comparisons between multimodal and standard treatments would help determine appropriate interventions for this setting. PMID:25227547
Thinking about muscles: the neuromuscular effects of attentional focus on accuracy and fatigue.
Lohse, Keith R; Sherwood, David E
2012-07-01
Although the effects of attention on movement execution are well documented behaviorally, much less research has been done on the neurophysiological changes that underlie attentional focus effects. This study presents two experiments exploring effects of attention during an isometric plantar-flexion task using surface electromyography (sEMG). Participants' attention was directed either externally (towards the force plate they were pushing against) or internally (towards their own leg, specifically the agonist muscle). Experiment 1 tested the effects of attention on accuracy and efficiency of force produced at three target forces (30, 60, and 100% of the maximum voluntary contraction; MVC). An internal focus of attention reduced the accuracy of force being produced and increased cocontraction of the antagonist muscle. Error on a given trial was positively correlated with the magnitude of cocontraction on that trial. Experiment 2 tested the effects of attention on muscular fatigue at 30, 60 and 100%MVC. An internal focus of attention led to less efficient intermuscular coordination, especially early in the contraction. These results suggest that an internal focus of attention disrupts efficient motor control in force production resulting in increased cocontraction, which potentially explains other neuromechanical findings (e.g. reduced functional variability with an internal focus). Copyright © 2012 Elsevier B.V. All rights reserved.
Hu, Cheng; Kong, Shaoyang; Wang, Rui; Long, Teng; Fu, Xiaowei
2018-04-03
Migration is a key process in the population dynamics of numerous insect species, including many that are pests or vectors of disease. Identification of insect migrants is critically important to studies of insect migration. Radar is an effective means of monitoring nocturnal insect migrants. However, species identification of migrating insects is often unachievable with current radar technology. Special-purpose entomological radar can measure radar cross-sections (RCSs) from which the insect mass, wingbeat frequency and body length-to-width ratio (a measure of morphological form) can be estimated. These features may be valuable for species identification. This paper explores the identification of insect migrants based on mass, wingbeat frequency and length-to-width ratio; body length is also introduced to assess the benefit of adding another variable. A total of 23 species of migratory insects captured by a searchlight trap are used to develop a classification model based on a decision-tree support vector machine method. The results reveal that the identification accuracy exceeds 80% for all species if the mass, wingbeat frequency and length-to-width ratio are utilized, and the addition of body length is shown to further increase accuracy. It is also shown that improving the precision of the measurements leads to increased identification accuracy.
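A hedged sketch of the classification setup follows. The paper uses a decision-tree support vector machine; for brevity this uses scikit-learn's standard multiclass SVC on synthetic stand-ins for the four features (mass, wingbeat frequency, length-to-width ratio, body length).

```python
# Sketch: multiclass SVM on radar-derived insect features. Three synthetic
# "species" clusters stand in for the paper's 23 trapped species.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_per_species = 50
centers = [(100, 40, 2.5, 15), (300, 25, 3.5, 25), (50, 60, 2.0, 10)]
X = np.vstack([c + rng.standard_normal((n_per_species, 4)) * (5, 2, 0.2, 1)
               for c in centers])
y = np.repeat(np.arange(3), n_per_species)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```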
Approximating the Basset force by optimizing the method of van Hinsberg et al.
NASA Astrophysics Data System (ADS)
Casas, G.; Ferrer, A.; Oñate, E.
2018-01-01
In this work we put the method proposed by van Hinsberg et al. [29] to the test, highlighting its accuracy and efficiency in a sequence of benchmarks of increasing complexity. Furthermore, we explore the possibility of systematizing the way in which the method's free parameters are determined by generalizing the optimization problem that was considered originally. Finally, we provide a list of worked-out values, ready for implementation in large-scale particle-laden flow simulations.
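For readers unfamiliar with this family of methods, the following is a hedged reconstruction of the core idea in our own notation (the paper's exact formulation may differ): the Basset history force involves a singular kernel, and the kernel tail is approximated by a sum of exponentials so that the history integral can be updated recursively rather than recomputed at every time step.

```latex
F_B(t) \;\propto\; \int_0^{t} \frac{\dot{w}(\tau)}{\sqrt{t-\tau}}\,\mathrm{d}\tau ,
\qquad
\frac{1}{\sqrt{t}} \;\approx\; \sum_{i=1}^{m} a_i \, e^{-t/t_i} \quad (t > t_{\mathrm{win}}) ,
```

where w denotes the particle slip velocity, t_win is a short window over which the kernel is kept exact, and the coefficients (a_i, t_i) are the free parameters whose selection the paper systematizes via a generalized optimization problem.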
The Evolution of Deep Space Navigation: 1989-1999
NASA Technical Reports Server (NTRS)
Wood, Lincoln J.
2008-01-01
The exploration of the planets of the solar system using robotic vehicles has been underway since the early 1960s. During this time the navigational capabilities employed have increased greatly in accuracy, as required by the scientific objectives of the missions and as enabled by improvements in technology. This paper is the second in a chronological sequence dealing with the evolution of deep space navigation. The time interval covered extends from the 1989 launch of the Magellan spacecraft to Venus through a multiplicity of planetary exploration activities in 1999. The paper focuses on the observational techniques that have been used to obtain navigational information, propellant-efficient means for modifying spacecraft trajectories, and the computational methods that have been employed, tracing their evolution through a dozen planetary missions.
Laboratory Astrophysics: Enabling Scientific Discovery and Understanding
NASA Technical Reports Server (NTRS)
Kirby, K.
2006-01-01
NASA's Science Strategic Roadmap for Universe Exploration lays out a series of science objectives on a grand scale and discusses the various missions, over a wide range of wavelengths, which will enable discovery. Astronomical spectroscopy is arguably the most powerful tool we have for exploring the Universe. Experimental and theoretical studies in Laboratory Astrophysics convert "hard-won data into scientific understanding". However, the development of instruments with increasingly high spectroscopic resolution demands atomic and molecular data of unprecedented accuracy and completeness. How to meet these needs, in a time of severe budgetary constraints, poses a significant challenge both to NASA, the astronomical observers and model-builders, and the laboratory astrophysics community. I will discuss these issues, together with some recent examples of productive astronomy/lab astro collaborations.
Pous-Serrano, S; Frasson, M; Palasí Giménez, R; Sanchez-Jordá, G; Pamies-Guilabert, J; Llavador Ros, M; Nos Mateu, P; Garcia-Granero, E
2017-05-01
To assess the accuracy of magnetic resonance enterography in predicting the extension, location and characteristics of the small bowel segments affected by Crohn's disease. This is a prospective study including a consecutive series of 38 patients with Crohn's disease of the small bowel who underwent surgery at a specialized colorectal unit of a tertiary hospital. Preoperative magnetic resonance enterography was performed in all patients, following a homogeneous protocol, within the 3 months prior to surgery. A thorough exploration of the small bowel was performed during the surgical procedure; calibration spheres were used at the discretion of the surgeon. The accuracy of magnetic resonance enterography in detecting areas affected by Crohn's disease in the small bowel was assessed. The findings of magnetic resonance enterography were compared with surgical and pathological findings. Thirty-eight patients with 81 lesions were included in the study. During surgery, 12 lesions (14.8%) that were not described on magnetic resonance enterography were found. Seven of these were detected exclusively by the use of calibration spheres, passing unnoticed at surgical exploration. Magnetic resonance enterography had 90% accuracy in detecting the location of the stenosis (75.0% sensitivity, 95.7% specificity). Magnetic resonance enterography did not precisely diagnose the presence of an inflammatory phlegmon (accuracy 46.2%), but it was more accurate in detecting abscesses or fistulas (accuracy 89.9% and 98.6%, respectively). Magnetic resonance enterography is a useful tool in the preoperative assessment of patients with Crohn's disease. However, a thorough intra-operative exploration of the entire small bowel is still necessary. Colorectal Disease © 2017 The Association of Coloproctology of Great Britain and Ireland.
Local indicators of geocoding accuracy (LIGA): theory and application
Jacquez, Geoffrey M; Rommel, Robert
2009-01-01
Background: Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results: We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion: Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: For a given level of positional error, increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results, and in others it might invalidate the results. We therefore must understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795
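A minimal sketch of the perturbability computation follows: jitter geocoded points with a given mean positional error and measure how often a k-nearest-neighbor spatial weights specification changes. The point pattern, choice of k, and error levels are illustrative assumptions.

```python
# Sketch: sensitivity of k-nearest-neighbor spatial weights to positional error.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
points = rng.uniform(0, 100, size=(200, 2))          # synthetic geocoded points

def knn_sets(pts, k=5):
    _, idx = cKDTree(pts).query(pts, k=k + 1)        # first neighbor is self
    return [frozenset(row[1:]) for row in idx]

base = knn_sets(points)
for sigma in (0.5, 2.0, 5.0):                        # positional-error scales
    jittered = points + rng.normal(0, sigma, points.shape)
    perturbed = knn_sets(jittered)
    changed = np.mean([b != p for b, p in zip(base, perturbed)])
    print(f"sigma={sigma:4.1f}: fraction of neighborhoods changed = {changed:.2f}")
```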
Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm
NASA Technical Reports Server (NTRS)
Collins, Curtis L.; Robinson, Matthew L.
2013-01-01
The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm operational workspace. Error sources at each link in the arm kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.
Exploration of Force Myography and surface Electromyography in hand gesture classification.
Jiang, Xianta; Merhi, Lukas-Karim; Xiao, Zhen Gang; Menon, Carlo
2017-03-01
Whereas pressure sensors increasingly have received attention as a non-invasive interface for hand gesture recognition, their performance has not been comprehensively evaluated. This work examined the performance of hand gesture classification using Force Myography (FMG) and surface Electromyography (sEMG) technologies by performing 3 sets of 48 hand gestures using a prototyped FMG band and an array of commercial sEMG sensors worn both on the wrist and forearm simultaneously. The results show that the FMG band achieved classification accuracies as good as the high quality, commercially available, sEMG system on both wrist and forearm positions; specifically, by only using 8 Force Sensitive Resistors (FSRs), the FMG band achieved accuracies of 91.2% and 83.5% in classifying the 48 hand gestures in cross-validation and cross-trial evaluations, which were higher than those of sEMG (84.6% and 79.1%). By using all 16 FSRs on the band, our device achieved high accuracies of 96.7% and 89.4% in cross-validation and cross-trial evaluations. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
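A small sketch of the classification task is given below, using synthetic stand-in data for an 8-channel FSR band and a generic linear SVM; the study's actual features, classifier, and protocol are not reproduced here.

```python
# Sketch: gesture classification from multichannel FSR pressure patterns.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_gestures, n_reps, n_channels = 6, 30, 8
# Each gesture gets a characteristic pressure prototype plus noise (synthetic).
prototypes = rng.uniform(0, 1, size=(n_gestures, n_channels))
X = np.vstack([p + 0.1 * rng.standard_normal((n_reps, n_channels))
               for p in prototypes])
y = np.repeat(np.arange(n_gestures), n_reps)

scores = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5)
print(f"cross-validated accuracy over {n_gestures} gestures: {scores.mean():.2f}")
```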
A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.; Watson, Layne T.
1998-01-01
Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
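A compact sketch of the comparison follows, on an illustrative 1-D response with multiple local extrema: a quadratic least-squares polynomial versus a kriging-style Gaussian-process interpolator (here via scikit-learn, standing in for the geostatistical kriging the paper describes).

```python
# Sketch: quadratic least-squares fit vs. a kriging-style interpolator (GP).
import numpy as np
from numpy.polynomial import Polynomial
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x          # several local extrema over [0, 5]
x_train = np.linspace(0, 5, 12)
x_test = np.linspace(0, 5, 200)

quad = Polynomial.fit(x_train, f(x_train), deg=2)              # least squares
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))    # kriging analogue
gp.fit(x_train[:, None], f(x_train))

for name, pred in (("quadratic", quad(x_test)),
                   ("kriging/GP", gp.predict(x_test[:, None]))):
    rmse = np.sqrt(np.mean((pred - f(x_test)) ** 2))
    print(f"{name:10s} test RMSE: {rmse:.3f}")
```

The interpolator tracks the local extrema that the quadratic cannot represent, mirroring the paper's trade-off between flexibility and computational simplicity.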
Force Myography to Control Robotic Upper Extremity Prostheses: A Feasibility Study
Cho, Erina; Chen, Richard; Merhi, Lukas-Karim; Xiao, Zhen; Pousett, Brittany; Menon, Carlo
2016-01-01
Advancement in assistive technology has led to the commercial availability of multi-dexterous robotic prostheses for the upper extremity. The relatively low performance of the currently used techniques to detect the intention of the user to control such advanced robotic prostheses, however, limits their use. This article explores the use of force myography (FMG) as a potential alternative to the well-established surface electromyography. Specifically, the use of FMG to control different grips of a commercially available robotic hand, Bebionic3, is investigated. Four male transradially amputated subjects participated in the study, and a protocol was developed to assess the prediction accuracy of 11 grips. Different combinations of grips were examined, ranging from 6 up to 11 grips. The results indicate that it is possible to classify six primary grips important in activities of daily living using FMG with an accuracy of above 70% in the residual limb. Additional strategies to increase classification accuracy, such as using the available modes on the Bebionic3, allowed results to improve up to 88.83% and 89.00% for opposed thumb and non-opposed thumb modes, respectively. PMID:27014682
Evaluation of a novel flexible snake robot for endoluminal surgery.
Patel, Nisha; Seneci, Carlo A; Shang, Jianzhong; Leibrandt, Konrad; Yang, Guang-Zhong; Darzi, Ara; Teare, Julian
2015-11-01
Endoluminal therapeutic procedures such as endoscopic submucosal dissection are increasingly attractive given the shift in surgical paradigm towards minimally invasive surgery. This novel three-channel articulated robot was developed to overcome the limitations of the flexible endoscope, which poses a number of challenges to endoluminal surgery. The device enables enhanced movement in a restricted workspace, with improved range of motion and with the accuracy required for endoluminal surgery. The objective was to evaluate a novel flexible robot for therapeutic endoluminal surgery in bench-top studies in a research laboratory. Targeting and navigation tasks of the robot were performed to explore the range of motion and retroflexion capabilities. Complex endoluminal tasks such as endoscopic mucosal resection were also simulated. Successful completion, accuracy, and time to perform the bench-top tasks were the main outcome measures. The robot's ranges of movement, retroflexion, and navigation capabilities were demonstrated. The device showed significantly greater accuracy of targeting in a retroflexed position compared to a conventional endoscope. Limitations include the bench-top setting and small study sample. We were able to demonstrate a number of simulated endoscopy tasks such as navigation, targeting, snaring, and retroflexion. The improved accuracy of targeting whilst in a difficult configuration is extremely promising and may facilitate endoluminal surgery, which has been notoriously challenging with a conventional endoscope.
Zhou, Tao; Li, Zhaofu; Pan, Jianjun
2018-01-27
This paper focuses on evaluating the ability and contribution of using backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification and comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size of the combination of all texture features was 9 × 9, and the optimal window size was different for each individual texture feature. For the four different feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; and the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy of up to 91.55% and a kappa coefficient of up to 0.8935. Among all combinations of Sentinel-1A-derived features, the combination of the four features had the best classification result. Multi-sensor urban land cover mapping obtained higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy compared to the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient of up to 0.9889. When Sentinel-1A data was added to Hyperion images, the overall accuracy and kappa coefficient were increased by 4.01% and 0.0519, respectively.
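To illustrate the texture-feature step, the sketch below computes gray-level co-occurrence (GLCM) statistics for a single window with scikit-image; in the study, such per-window features (at the optimal 9 × 9 window) feed a random forest classifier. The image patch is synthetic, and the specific GLCM property set is an assumption.

```python
# Sketch: GLCM texture features for one 9x9 window (scikit-image >= 0.19 uses
# the "gray" spelling; older versions use greycomatrix/greycoprops).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(5)
patch = rng.uniform(0, 255, size=(9, 9)).astype(np.uint8)   # synthetic window

glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # per-window features like these feed a RandomForestClassifier
```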
He, Jun; Xu, Jiaqi; Wu, Xiao-Lin; Bauck, Stewart; Lee, Jungjae; Morota, Gota; Kachman, Stephen D; Spangler, Matthew L
2018-04-01
SNP chips are commonly used for genotyping animals in genomic selection but strategies for selecting low-density (LD) SNPs for imputation-mediated genomic selection have not been addressed adequately. The main purpose of the present study was to compare the performance of eight LD (6K) SNP panels, each selected by a different strategy exploiting a combination of three major factors: evenly-spaced SNPs, increased minor allele frequencies, and SNP-trait associations either for single traits independently or for all the three traits jointly. The imputation accuracies from 6K to 80K SNP genotypes were between 96.2 and 98.2%. Genomic prediction accuracies obtained using imputed 80K genotypes were between 0.817 and 0.821 for daughter pregnancy rate, between 0.838 and 0.844 for fat yield, and between 0.850 and 0.863 for milk yield. The two SNP panels optimized on the three major factors had the highest genomic prediction accuracy (0.821-0.863), and these accuracies were very close to those obtained using observed 80K genotypes (0.825-0.868). Further exploration of the underlying relationships showed that genomic prediction accuracies did not respond linearly to imputation accuracies, but were significantly affected by genotype (imputation) errors of SNPs in association with the traits to be predicted. SNPs optimal for map coverage and MAF were favorable for obtaining accurate imputation of genotypes whereas trait-associated SNPs improved genomic prediction accuracies. Thus, optimal LD SNP panels were the ones that combined both strengths. The present results have practical implications on the design of LD SNP chips for imputation-enabled genomic prediction.
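One of the selection strategies, evenly spaced SNPs with a preference for higher minor allele frequency, can be sketched as below; positions, MAFs, and panel size are synthetic, and the trait-association weighting used for other panels is omitted.

```python
# Sketch: build a low-density panel by splitting the genome into equal windows
# and keeping the highest-MAF SNP in each window. All data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
n_snps, panel_size = 8_000, 600
pos = np.sort(rng.uniform(0, 2.5e9, n_snps))   # SNP coordinates (bp)
maf = rng.uniform(0.0, 0.5, n_snps)            # minor allele frequencies

edges = np.linspace(pos[0], pos[-1] + 1, panel_size + 1)
selected = []
for lo, hi in zip(edges[:-1], edges[1:]):
    idx = np.flatnonzero((pos >= lo) & (pos < hi))
    if idx.size:                               # keep the best SNP in this window
        selected.append(idx[np.argmax(maf[idx])])

print(f"selected {len(selected)} SNPs for the low-density panel")
```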
The Accuracy of Computer-Assisted Feedback and Students' Responses to It
ERIC Educational Resources Information Center
Lavolette, Elizabeth; Polio, Charlene; Kahng, Jimin
2015-01-01
Various researchers in second language acquisition have argued for the effectiveness of immediate rather than delayed feedback. In writing, truly immediate feedback is impractical, but computer-assisted feedback provides a quick way of providing feedback that also reduces the teacher's workload. We explored the accuracy of feedback from…
Enhancing voluntary imitation through attention and motor imagery.
Bek, Judith; Poliakoff, Ellen; Marshall, Hannah; Trueman, Sophie; Gowen, Emma
2016-07-01
Action observation activates brain areas involved in performing the same action and has been shown to increase motor learning, with potential implications for neurorehabilitation. Recent work indicates that the effects of action observation on movement can be increased by motor imagery or by directing attention to observed actions. In voluntary imitation, activation of the motor system during action observation is already increased. We therefore explored whether imitation could be further enhanced by imagery or attention. Healthy participants observed and then immediately imitated videos of human hand movement sequences, while movement kinematics were recorded. Two blocks of trials were completed, and after the first block participants were instructed to imagine performing the observed movement (Imagery group, N = 18) or attend closely to the characteristics of the movement (Attention group, N = 15), or received no further instructions (Control group, N = 17). Kinematics of the imitated movements were modulated by instructions, with both Imagery and Attention groups being closer in duration, peak velocity and amplitude to the observed model compared with controls. These findings show that both attention and motor imagery can increase the accuracy of imitation and have implications for motor learning and rehabilitation. Future work is required to understand the mechanisms by which these two strategies influence imitation accuracy.
Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan
2013-01-01
Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and players' achievement motivation characteristics. Thirteen expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine if this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue-by-expertise or fatigue-by-gender interactions were found. Fatigue effects were also equivalent regardless of players' achievement goal indicators. Future research is required to explore the effects of fatigue on performance in tennis using ecologically valid designs that mimic more closely the demands of match play. Key Points: Groundstroke accuracy under moderate-intensity fatigue is equivalent to performance at rest. Groundstroke accuracy declines significantly in both expert (40.3% decline) and non-expert (49.6%) tennis players following high-intensity fatigue. Expert players are more consistent, hit more accurate shots and fewer out shots across all fatigue intensities. The effects of fatigue on groundstroke accuracy are the same regardless of gender and players' achievement goal indicators. PMID:24149809
ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.
Morota, Gota
2017-12-20
Deterministic formulas for the accuracy of genomic predictions highlight the relationships among prediction accuracy and potential factors influencing prediction accuracy prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/ . ShinyGPAS is a shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and it is hosted online as a freely available web-based resource with an intuitive graphical user interface.
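As an example of the deterministic formulas ShinyGPAS visualizes, one widely cited expression (due to Daetwyler and colleagues) relates expected accuracy to training-population size N, heritability h², and the effective number of chromosome segments Me; the sketch below evaluates it for illustrative parameter values.

```python
# Sketch: Daetwyler et al. deterministic accuracy, r = sqrt(N*h2 / (N*h2 + Me)).
# Parameter values below are illustrative, not taken from the paper.
import math

def daetwyler_accuracy(n_train: int, h2: float, me: float) -> float:
    return math.sqrt(n_train * h2 / (n_train * h2 + me))

for n in (1_000, 10_000, 100_000):
    print(n, round(daetwyler_accuracy(n, h2=0.3, me=5_000), 3))
```

Plotting such a curve against N or h² is exactly the kind of interactive exploration the Shiny application automates.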
Supervised Learning Applied to Air Traffic Trajectory Classification
NASA Technical Reports Server (NTRS)
Bosson, Christabelle S.; Nikoleris, Tasos
2018-01-01
Given the recent increase of interest in introducing new vehicle types and missions into the National Airspace System, a transition towards a more autonomous air traffic control system is required in order to enable and handle increased density and complexity. This paper presents an exploratory effort of the needed autonomous capabilities by exploring supervised learning techniques in the context of aircraft trajectories. In particular, it focuses on the application of machine learning algorithms and neural network models to a runway recognition trajectory-classification study. It investigates the applicability and effectiveness of various classifiers using datasets containing trajectory records for a month of air traffic. A feature importance and sensitivity analysis are conducted to challenge the chosen time-based datasets and the ten selected features. The study demonstrates that classification accuracy levels of 90% and above can be reached in less than 40 seconds of training for most machine learning classifiers when one track data point, described by the ten selected features at a particular time step, per trajectory is used as input. It also shows that neural network models can achieve similar accuracy levels but at higher training time costs.
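A minimal sketch of this classification setup follows: one track point per trajectory, described by a small feature vector, fed to a standard classifier. The features, the toy label rule, and the data are stand-ins for the paper's ten features and real traffic records.

```python
# Sketch: runway-label classification from single track points (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5_000
X = np.column_stack([
    rng.uniform(-50, 50, n),      # e.g., relative x position (assumed feature)
    rng.uniform(-50, 50, n),      # relative y position
    rng.uniform(0, 360, n),       # heading (deg)
    rng.uniform(0, 40_000, n),    # altitude (ft)
])
y = (X[:, 2] > 180).astype(int)   # toy "runway" label driven by heading

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```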
Enhancing lineup identification accuracy: two codes are better than one.
Melara, R D; DeWitt-Rickards, T S; O'Brien, T P
1989-10-01
Ways of improving identification accuracy were explored by comparing the conventional visual lineup with an auditory/visual lineup, one that paired color photographs with voice recordings. This bimodal lineup necessitated sequential presentation of lineup members; Experiment 1 showed that performance in sequential lineups was better than performance in traditional simultaneous lineups. In Experiments 2A and 2B unimodal and bimodal lineups were compared by using a multiple-lineup paradigm: Ss viewed 3 videotaped episodes depicting standard police procedures and were tested in 4 sequential lineups. Bimodal lineups were more diagnostic than either visual or auditory lineups alone. The bimodal lineup led to a 126% improvement in number of correct identifications over the conventional visual lineup, with no concomitant increase in number of false identifications. These results imply strongly that bimodal procedures should be adopted in real-world lineups. The nature of memorial processes underlying this bimodal advantage is discussed.
The Use of Visual Thinking Strategies and Art to Help Nurses Find Their Voices.
Moorman, Margaret
2017-08-01
Health care is increasingly complex, as nurses navigate working in teams and conveying critical information to others. Clear communication and accuracy are critical for nurses because they communicate to patients and other members of the health care team. Art, and more specifically, Visual Thinking Strategies (VTS), are ways for nurses to practice communication and clear articulation of ideas. VTS also allows nurses to explore finding their voices and working with others to provide safe and effective communication among the team, including patients and their families.
Stochastic localization of microswimmers by photon nudging.
Bregulla, Andreas P; Yang, Haw; Cichos, Frank
2014-07-22
Force-free trapping and steering of single photophoretically self-propelled Janus-type particles using a feedback mechanism is experimentally demonstrated. Realtime information on particle position and orientation is used to switch the self-propulsion mechanism of the particle optically. The orientational Brownian motion of the particle thereby provides the reorientation mechanism for the microswimmer. The particle size dependence of the photophoretic propulsion velocity reveals that photon nudging provides an increased position accuracy for decreasing particle radius. The explored steering mechanism is suitable for navigation in complex biological environments and in-depth studies of collective swimming effects.
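The feedback rule itself is simple enough to sketch: propel only while the swimmer's orientation points within an acceptance angle of the target, and let rotational Brownian motion reorient it otherwise. All parameters below are illustrative assumptions, not the experimental values.

```python
# Sketch: photon-nudging feedback steering of a self-propelled particle.
import numpy as np

rng = np.random.default_rng(8)
pos, target = np.array([0.0, 0.0]), np.array([10.0, 0.0])
theta = rng.uniform(0, 2 * np.pi)      # swimmer orientation
v, d_rot, dt = 1.0, 0.5, 0.05          # speed, rotational diffusion, time step

for _ in range(5_000):
    to_target = target - pos
    heading = np.array([np.cos(theta), np.sin(theta)])
    aligned = heading @ to_target / np.linalg.norm(to_target) > np.cos(np.pi / 6)
    if aligned:                        # laser on: propel along current heading
        pos += v * heading * dt
    theta += np.sqrt(2 * d_rot * dt) * rng.standard_normal()  # Brownian rotation
    if np.linalg.norm(target - pos) < 0.5:
        break

print("final distance to target:", round(np.linalg.norm(target - pos), 2))
```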
NASA Technical Reports Server (NTRS)
Rock, Stephen M.; LeMaster, Edward A.
2001-01-01
Pseudolites can extend the availability of GPS-type positioning systems to a wide range of applications not possible with satellite-only GPS. One such application is Mars exploration, where the centimeter-level accuracy and high repeatability of CDGPS would make it attractive for rover positioning during autonomous exploration, sample collection, and habitat construction if it were available. Pseudolites distributed on the surface would allow multiple rovers and/or astronauts to share a common navigational reference. This would help enable cooperation for complicated science tasks, reducing the need for instructions from Earth and increasing the likelihood of mission success. Conventional GPS pseudolite arrays require that the devices be pre-calibrated through a survey of their locations, typically to sub-centimeter accuracy. This is a problematic task for robots on the surface of another planet. By using the GPS signals that the pseudolites broadcast, however, it is possible to have the array self-survey its own relative locations, creating a Self-Calibrating Pseudolite Array (SCPA). This requires the use of GPS transceivers instead of standard pseudolites. Surveying can be done either at carrier- or code-phase levels. An overview of SCPA capabilities, system requirements, and self-calibration algorithms is presented in another work. The Aerospace Robotics Laboratory at Stanford has developed a fully operational prototype SCPA. The array is able to determine the range between any two transceivers with either code- or carrier-phase accuracy, and uses this inter-transceiver ranging to determine the array geometry. This paper presents results from field tests conducted at Stanford University demonstrating the accuracy of inter-transceiver ranging and its viability and utility for array localization, and shows how transceiver motion may be utilized to refine the array estimate by accurately determining carrier-phase integers and line biases. It also summarizes the overall system requirements and architecture, and describes the hardware and software used in the prototype system.
Methodology for Prototyping Increased Levels of Automation for Spacecraft Rendezvous Functions
NASA Technical Reports Server (NTRS)
Hart, Jeremy J.; Valasek, John
2007-01-01
The Crew Exploration Vehicle necessitates higher levels of automation than previous NASA vehicles, due to program requirements for automation, including Automated Rendezvous and Docking. Studies of spacecraft development often point to the locus of decision-making authority between humans and computers (i.e. automation) as a prime driver for cost, safety, and mission success. Therefore, a critical component in the Crew Exploration Vehicle development is the determination of the correct level of automation. To identify the appropriate levels of automation and autonomy to design into a human space flight vehicle, NASA has created the Function-specific Level of Autonomy and Automation Tool. This paper develops a methodology for prototyping increased levels of automation for spacecraft rendezvous functions. This methodology is used to evaluate the accuracy of the Function-specific Level of Autonomy and Automation Tool specified levels of automation, via prototyping. Spacecraft rendezvous planning tasks are selected and then prototyped in Matlab using Fuzzy Logic techniques and existing Space Shuttle rendezvous trajectory algorithms.
Pile, Victoria; Lau, Jennifer Y F; Topor, Marta; Hedderly, Tammy; Robinson, Sally
2018-05-18
Aberrant interoceptive accuracy could contribute to the co-occurrence of anxiety and premonitory urge in chronic tic disorders (CTD). If it can be manipulated through intervention, it would offer a transdiagnostic treatment target for tics and anxiety. Interoceptive accuracy was first assessed consistent with previous protocols and then re-assessed following an instruction attempting to experimentally enhance awareness. The CTD group demonstrated lower interoceptive accuracy than controls but, importantly, this group difference was no longer significant following instruction. In the CTD group, better interoceptive accuracy was associated with higher anxiety and lower quality of life, but not with premonitory urge. Aberrant interoceptive accuracy may represent an underlying trait in CTD that can be manipulated, and relates to anxiety and quality of life.
Making and Measuring a Model of a Salt Marsh
ERIC Educational Resources Information Center
Fogleman, Tara; Curran, Mary Carla
2007-01-01
Students are often confused by the difference between the terms "accuracy" and "precision." In the following activities, students explore the definitions of accuracy and precision while learning about salt marsh ecology and the methods used by scientists to assess salt marsh health. The activities also address the concept that the ocean supports a…
Grammatical Accuracy and Learner Autonomy in Advanced Writing
ERIC Educational Resources Information Center
Vickers, Caroline H.; Ene, Estela
2006-01-01
This paper aims to explore advanced ESL learners' ability to make improvements in grammatical accuracy by autonomously noticing and correcting their own grammatical errors. In the recent literature in SLA, it is suggested that classroom tasks can be used to foster autonomous language learning habits (cf. Dam 2001). Therefore, it is important to…
What Determines GCSE Marking Accuracy? An Exploration of Expertise among Maths and Physics Markers
ERIC Educational Resources Information Center
Suto, W. M. Irenka; Nadas, Rita
2008-01-01
Examination marking utilises a variety of cognitive processes, and from a psychological perspective, the demands that different questions place on markers will vary considerably. To what extent does marking accuracy vary among markers with differing backgrounds and experiences? More fundamentally, what makes some questions harder to mark…
Engel, Holger; Huang, Jung Ju; Tsao, Chung Kan; Lin, Chia-Yu; Chou, Pan-Yu; Brey, Eric M; Henry, Steven L; Cheng, Ming Huei
2011-11-01
This prospective study was designed to compare the accuracy rate between remote smartphone photographic assessments and in-person examinations for free flap monitoring. One hundred and three consecutive free flaps were monitored with in-person examinations and assessed remotely by three surgeons (Team A) via photographs transmitted over smartphone. Four other surgeons used the traditional in-person examinations as Team B. The response time to re-exploration was defined as the interval between when a flap was evaluated as compromised by the nurse/house officer and when the decision was made for re-exploration. The accuracy rate was 98.7% and 94.2% for in-person and smartphone photographic assessments, respectively. The response time of 8 ± 3 min in Team A was significantly shorter than the 180 ± 104 min in Team B (P = 0.01 by the Mann-Whitney test). The remote smartphone photography assessment has a comparable accuracy rate and shorter response time compared with in-person examination for free flap monitoring. Copyright © 2011 Wiley Periodicals, Inc.
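The group comparison reported above uses the Mann-Whitney test; a minimal sketch of that computation with SciPy follows, using made-up response times in place of the study's data.

```python
from scipy import stats

# Hypothetical response times (minutes) to the re-exploration decision,
# mirroring the study's comparison of smartphone vs. in-person monitoring.
team_a = [6, 8, 7, 10, 9, 5, 11]        # remote smartphone assessment
team_b = [150, 210, 95, 300, 180, 240]  # traditional in-person examination

u, p = stats.mannwhitneyu(team_a, team_b, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```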
NASA Astrophysics Data System (ADS)
Wu, Binlin; Smith, Jason; Zhang, Lin; Gao, Xin; Alfano, Robert R.
2018-02-01
Worldwide breast cancer incidence has increased by more than twenty percent in the past decade, and mortality from the disease has increased by fourteen percent over the same period. Optical diagnostic techniques such as Raman spectroscopy have been explored as a means to increase diagnostic accuracy in a more objective way while significantly decreasing diagnostic wait times. In this study, Raman spectroscopy with 532-nm excitation was used to incite resonance effects that enhance Stokes Raman scattering from unique biomolecular vibrational modes. Seventy-two Raman spectra (41 cancerous, 31 normal) were collected from nine breast tissue samples by performing a ten-spectra average using a 500-ms acquisition time at each acquisition location. The raw spectral data were subsequently prepared for analysis with background correction and normalization. The spectral data in the Raman shift range of 750–2000 cm⁻¹ were used for analysis, since the detector has its highest sensitivity in this range. The matrix decomposition technique nonnegative matrix factorization (NMF) was then performed on the processed data. Leave-one-out cross-validation using two selective feature components resulted in a sensitivity, specificity, and accuracy of 92.6%, 100%, and 96.0%, respectively. The performance of NMF was also compared to that of principal component analysis (PCA), and NMF was shown to be superior to PCA in this study. This study shows that coupling the resonance Raman spectroscopy technique with a subsequent NMF decomposition method shows potential for high characterization accuracy in breast cancer detection.
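A minimal sketch of the NMF-plus-leave-one-out-cross-validation workflow described above is given below, using scikit-learn on a random nonnegative stand-in for the 72 preprocessed spectra. The choice of logistic regression as the classifier over the two NMF component weights is an assumption; the abstract does not specify the classification rule.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

# Stand-in for the 72 preprocessed spectra over 750-2000 cm^-1
# (nonnegative after background correction and normalization).
rng = np.random.default_rng(0)
X = rng.random((72, 500))
y = np.array([1] * 41 + [0] * 31)  # 41 cancerous, 31 normal

preds = []
for train, test in LeaveOneOut().split(X):
    nmf = NMF(n_components=2, init="nndsvda", max_iter=500)
    H_train = nmf.fit_transform(X[train])   # per-spectrum component weights
    H_test = nmf.transform(X[test])
    clf = LogisticRegression().fit(H_train, y[train])
    preds.append(clf.predict(H_test)[0])

# Random data gives chance-level accuracy; real spectra are needed
# to approach the reported 96%.
print(f"LOO accuracy: {np.mean(np.array(preds) == y):.3f}")
```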
Borrego, Adrián; Latorre, Jorge; Alcañiz, Mariano; Llorens, Roberto
2018-06-01
The latest generation of head-mounted displays (HMDs) provides built-in head tracking, which enables estimating position in a room-size setting. This feature allows users to explore, navigate, and move within real-size virtual environments, such as kitchens, supermarket aisles, or streets. Previously, these actions were commonly facilitated by external peripherals and interaction metaphors. The objective of this study was to compare the Oculus Rift and the HTC Vive in terms of the working range of the head tracking and the working area, accuracy, and jitter in a room-size environment, and to determine their feasibility for serious games, rehabilitation, and health-related applications. The position of the HMDs was registered in a 10 × 10 grid covering an area of 25 m² at sitting (1.3 m) and standing (1.7 m) heights. Accuracy and jitter were estimated from positional data. The working range was estimated by moving the HMDs away from the cameras until no data were obtained. The HTC Vive provided a working area (24.87 m²) twice as large as that of the Oculus Rift. Both devices showed excellent and comparable performance at sitting height (accuracy up to 1 cm and jitter <0.35 mm), and the HTC Vive presented worse but still excellent accuracy and jitter at standing height (accuracy up to 1.5 cm and jitter <0.5 mm). The HTC Vive presented a larger working range (7 m) than did the Oculus Rift (4.25 m). Our results support the use of these devices for real navigation, exploration, exergaming, and motor rehabilitation in virtual reality environments.
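Accuracy and jitter were estimated from positional data; one common convention for doing so is sketched below (the paper's exact definitions are not given here, so this is an assumption): accuracy as the offset of the mean measured position from the known grid point, and jitter as the RMS scatter about that mean.

```python
import numpy as np

def accuracy_and_jitter(samples, true_position):
    """Estimate tracking accuracy and jitter from repeated HMD position
    samples (N x 3, metres) recorded at a single known grid point.
    Accuracy: distance from the mean measured position to ground truth.
    Jitter: RMS deviation of the samples about their own mean."""
    mean_pos = samples.mean(axis=0)
    accuracy = np.linalg.norm(mean_pos - true_position)
    jitter = np.sqrt(((samples - mean_pos) ** 2).sum(axis=1).mean())
    return accuracy, jitter

rng = np.random.default_rng(1)
truth = np.array([2.0, 1.3, 3.0])                        # grid point, sitting height
samples = truth + rng.normal(0, 0.0003, size=(500, 3))   # ~0.3 mm simulated noise
acc, jit = accuracy_and_jitter(samples, truth)
print(f"accuracy = {acc * 1000:.2f} mm, jitter = {jit * 1000:.2f} mm")
```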
NASA Astrophysics Data System (ADS)
Hayana Hasibuan, Eka; Mawengkang, Herman; Efendi, Syahril
2017-12-01
Particle Swarm Optimization (PSO) is used in this research to optimize the feature weights of the Voting Feature Interval 5 (VFI5) algorithm, yielding a combined PSO-VFI5 model. Optimizing the feature weights on the diabetes and dyspepsia data is considered important because these conditions closely affect people's lives: an inaccuracy in determining the most dominant feature weights in the data could contribute to death. With the PSO algorithm, the accuracy of fold 1 increased from 92.31% to 96.15%, a gain of 3.8 percentage points; the fold 2 accuracy of 92.52% obtained with VFI5 was also produced by the PSO algorithm, meaning the accuracy was unchanged; and the fold 3 accuracy increased from 85.19% to 96.29%, a gain of 11 percentage points. Over the three trials, accuracy increased by a total of 14 percentage points. In general, the Particle Swarm Optimization algorithm succeeded in increasing the accuracy in several folds; it can therefore be concluded that the PSO algorithm is well suited to optimizing the VFI5 classification algorithm.
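A minimal global-best PSO sketch for optimizing feature weights is shown below. The objective function stands in for the VFI5 validation accuracy that the study optimized; the swarm parameters and the toy objective are illustrative assumptions.

```python
import numpy as np

def pso_feature_weights(score_fn, n_features, n_particles=20, iters=50, seed=0):
    """Minimal global-best particle swarm optimizer over feature weights
    in [0, 1]. score_fn(w) should return the quantity to maximize, e.g.
    validation accuracy of a weighted VFI5-style classifier."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_particles, n_features))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([score_fn(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([score_fn(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

# Toy objective standing in for VFI5 validation accuracy
target = np.array([0.9, 0.1, 0.5, 0.7])
best_w, best_score = pso_feature_weights(lambda w: -np.sum((w - target) ** 2), 4)
print(best_w, best_score)
```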
Genotype imputation in a coalescent model with infinitely-many-sites mutation
Huang, Lucy; Buzbas, Erkan O.; Rosenberg, Noah A.
2012-01-01
Empirical studies have identified population-genetic factors as important determinants of the properties of genotype-imputation accuracy in imputation-based disease association studies. Here, we develop a simple coalescent model of three sequences that we use to explore the theoretical basis for the influence of these factors on genotype-imputation accuracy, under the assumption of infinitely-many-sites mutation. Employing a demographic model in which two populations diverged at a given time in the past, we derive the approximate expectation and variance of imputation accuracy in a study sequence sampled from one of the two populations, choosing between two reference sequences, one sampled from the same population as the study sequence and the other sampled from the other population. We show that under this model, imputation accuracy—as measured by the proportion of polymorphic sites that are imputed correctly in the study sequence—increases in expectation with the mutation rate, the proportion of the markers in a chromosomal region that are genotyped, and the time to divergence between the study and reference populations. Each of these effects derives largely from an increase in information available for determining the reference sequence that is genetically most similar to the sequence targeted for imputation. We analyze as a function of divergence time the expected gain in imputation accuracy in the target using a reference sequence from the same population as the target rather than from the other population. Together with a growing body of empirical investigations of genotype imputation in diverse human populations, our modeling framework lays a foundation for extending imputation techniques to novel populations that have not yet been extensively examined. PMID:23079542
The effect of input data transformations on object-based image analysis
LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.
2011-01-01
The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829
Parsimonious data: How a single Facebook like predicts voting behavior in multiparty systems.
Kristensen, Jakob Bæk; Albrechtsen, Thomas; Dahl-Nielsen, Emil; Jensen, Michael; Skovrind, Magnus; Bornakke, Tobias
2017-01-01
This study shows how liking politicians' public Facebook posts can be used as an accurate measure for predicting present-day voter intention in a multiparty system. We highlight that a few, but selective, digital traces produce prediction accuracies that are on par with or even greater than most current approaches based upon bigger and broader datasets. Combining the online and offline, we connect a subsample of surveyed respondents to their public Facebook activity and apply machine learning classifiers to explore the link between their political liking behaviour and actual voting intention. Through this work, we show that even a single selective Facebook like can reveal as much about political voter intention as hundreds of heterogeneous likes. Further, by including the entire political like history of the respondents, our model reaches prediction accuracies above previous multiparty studies (60-70%). The main contribution of this paper is to show how public like-activity on Facebook allows political profiling of individual users in a multiparty system with accuracies above previous studies. Besides increased accuracies, the paper shows how such parsimonious measures allow us to generalize our findings to the entire population of a country and even across national borders, to other political multiparty systems. The approach in this study relies on data that are publicly available, and the simple setup we propose can, with some limitations, be generalized to millions of users in other multiparty systems.
Task-Based Variability in Children's Singing Accuracy
ERIC Educational Resources Information Center
Nichols, Bryan E.
2016-01-01
The purpose of this study was to explore the effect of task demands on children's singing accuracy. A 2 × 4 factorial design was used to examine the performance of fourth-grade children (N = 120) in solo and doubled response conditions. Each child sang four task types: single pitch, interval, pattern, and the song "Jingle Bells." The…
Accuracy of Tracking Forest Machines with GPS
M.W. Veal; S.E. Taylor; T.P. McDonald; D.K. McLemore; M.R. Dunn
2001-01-01
This paper describes the results of a study that measured the accuracy of using GPS to track the movement of forest machines. Two different commercially available GPS receivers (Trimble ProXR and GeoExplorer II) were used to track wheeled skidders under three different canopy conditions at two different vehicle speeds. Dynamic GPS data were compared to position data...
ERIC Educational Resources Information Center
Doe, Sue R.; Gingerich, Karla J.; Richards, Tracy L.
2013-01-01
This study explored graduate teaching assistant (GTA) grading on 480 papers across two writing assignments as integrated into large Introductory Psychology courses. We measured GTA accuracy, consistency, and commenting (feedback) quality. Results indicate that GTA graders improved, although unevenly, in accuracy and consistency from Time 1 to 2…
ERIC Educational Resources Information Center
Tretter, Thomas R.; Jones, M. Gail; Minogue, James
2006-01-01
The use of unifying themes that span the various branches of science is recommended to enhance curricular coherence in science instruction. Conceptions of spatial scale are one such unifying theme. This research explored the accuracy of spatial scale conceptions of science phenomena across a spectrum of 215 participants: fifth grade, seventh…
ERIC Educational Resources Information Center
Saglam, Murat
2015-01-01
This study explored the relationship between accuracy of and confidence in performance of 114 prospective primary school teachers in answering diagnostic questions on potential difference in parallel electric circuits. The participants were required to indicate their confidence in their answers for each question. Bias and calibration indices were…
Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer
NASA Astrophysics Data System (ADS)
Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad
2017-04-01
Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges including feature redundancy, unbalanced data, and small sample sizes have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were optimum predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, Synthetic Minority Over-sampling technique was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors in affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
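A hedged sketch of the winning configuration reported above (SMOTE for class imbalance, PCA for feature redundancy, and a Random Forest classifier) follows, using scikit-learn together with the imbalanced-learn package; the synthetic feature matrix stands in for the radiomic features, and the specific hyperparameters are assumptions.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Imbalanced stand-in for a radiomic feature matrix (e.g., recurrence endpoint)
X, y = make_classification(n_samples=112, n_features=100, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),  # oversamples minority class in train folds only
    ("pca", PCA(n_components=10)),     # addresses feature redundancy
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
print(cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean())
```

Using the imbalanced-learn Pipeline (rather than applying SMOTE to the full dataset) matters: it confines oversampling to the training folds, avoiding optimistic bias in the cross-validated estimate.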
Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio
2008-01-01
Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.
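Classification quality here is summarized by the kappa coefficient; a short self-contained computation of Cohen's kappa from a confusion matrix is sketched below, with a hypothetical 3-class vegetation confusion matrix.

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a confusion matrix of map vs. reference labels."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    po = np.trace(confusion) / n                             # observed agreement
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical confusion matrix (rows: classified, columns: reference)
cm = [[50, 5, 2],
      [4, 45, 6],
      [1, 7, 40]]
print(f"kappa = {kappa(cm):.4f}")
```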
Interlaboratory comparison measurements of aspheres
NASA Astrophysics Data System (ADS)
Schachtschneider, R.; Fortmeier, I.; Stavridis, M.; Asfour, J.; Berger, G.; Bergmann, R. B.; Beutler, A.; Blümel, T.; Klawitter, H.; Kubo, K.; Liebl, J.; Löffler, F.; Meeß, R.; Pruss, C.; Ramm, D.; Sandner, M.; Schneider, G.; Wendel, M.; Widdershoven, I.; Schulz, M.; Elster, C.
2018-05-01
The need for high-quality aspheres is rapidly growing, necessitating increased accuracy in their measurement. A reliable uncertainty assessment of asphere form measurement techniques is difficult due to their complexity. In order to explore the accuracy of current asphere form measurement techniques, an interlaboratory comparison was carried out in which four aspheres were measured by eight laboratories using tactile measurements, optical point measurements, and optical areal measurements. Altogether, 12 different devices were employed. The measurement results were analysed after subtracting the design topography and subsequently a best-fit sphere from the measurements. The surface reduced in this way was compared to a reference topography that was obtained by taking the pointwise median across the ensemble of reduced topographies on a 1000 × 1000 Cartesian grid. The deviations of the reduced topographies from the reference topography were analysed in terms of several characteristics including peak-to-valley and root-mean-square deviations. Root-mean-square deviations of the reduced topographies from the reference topographies were found to be on the order of some tens of nanometres up to 89 nm, with most of the deviations being smaller than 20 nm. Our results give an indication of the accuracy that can currently be expected in form measurements of aspheres.
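The analysis pipeline described above (pointwise median across reduced topographies as the reference, then per-device deviation statistics) can be sketched in a few lines of NumPy, as below; the simulated 12-device stack and noise level are placeholders, and a coarser grid than the study's 1000 × 1000 is used to keep the example light.

```python
import numpy as np

# Simulated stack of reduced topographies (design topography and best-fit
# sphere already subtracted) from 12 devices on a common grid, in nanometres.
rng = np.random.default_rng(2)
topographies = rng.normal(0.0, 15.0, size=(12, 200, 200))

reference = np.median(topographies, axis=0)         # pointwise median reference
deviations = topographies - reference
rms = np.sqrt((deviations ** 2).mean(axis=(1, 2)))  # per-device RMS deviation
ptv = deviations.max(axis=(1, 2)) - deviations.min(axis=(1, 2))  # peak-to-valley
print("RMS (nm):", rms.round(1))
print("PV  (nm):", ptv.round(1))
```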
Volumetric calibration of a plenoptic camera.
Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S
2018-02-01
The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
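As a generic illustration of emulation (not the paper's emulator design), the sketch below fits a Gaussian-process surrogate to a small number of "simulation runs" over an 8-dimensional parameter space and returns predictions with uncertainties; the predictive standard deviation indicates where adding new simulations would most improve emulator fidelity. The toy observable is invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy emulator: predict a scalar observable over an 8-D parameter space
# from a small number of simulation runs (here 26, as in the abstract).
rng = np.random.default_rng(4)
X = rng.random((26, 8))                 # 26 models, 8 normalized parameters
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2  # invented stand-in for a simulated observable

gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=np.ones(8)),  # anisotropic kernel
    normalize_y=True,
).fit(X, y)

X_new = rng.random((5, 8))
mean, std = gp.predict(X_new, return_std=True)  # emulator prediction + uncertainty
print(mean.round(3), std.round(3))  # large std flags where new runs would help most
```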
Targeting an efficient target-to-target interval for P300 speller brain–computer interfaces
Sellers, Eric W.; Wang, Xingyu
2013-01-01
Longer target-to-target intervals (TTIs) produce greater P300 event-related potential amplitude, which can increase brain–computer interface (BCI) classification accuracy and decrease the number of flashes needed for accurate character classification. However, longer TTIs require more time for each trial, which decreases the information transfer rate of the BCI. In this paper, a P300 BCI using a 7 × 12 matrix explored new flash patterns (16-, 18- and 21-flash patterns) with different TTIs to assess the effects of TTI on P300 BCI performance. The new flash patterns were designed to minimize TTI, decrease repetition blindness, and examine the temporal relationship between each flash of a given stimulus by placing a minimum of one (16-flash pattern), two (18-flash pattern), or three (21-flash pattern) non-target flashes between successive target flashes. Online results showed that the 16-flash pattern yielded the lowest classification accuracy among the three patterns. The results also showed that the 18-flash pattern provides a significantly higher information transfer rate (ITR) than the 21-flash pattern; both patterns provide high ITR and high accuracy for all subjects. PMID:22350331
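The design constraint described above, a minimum number of non-target flashes between successive target flashes, is easy to check programmatically; a small sketch follows, with a toy flash pattern that is purely illustrative.

```python
def min_target_separation(flash_sequence, target):
    """Smallest number of non-target flashes between consecutive flashes
    that contain the target. flash_sequence is a list of flash groups,
    each a set of matrix cells; returns None if the target flashes < twice."""
    hits = [i for i, group in enumerate(flash_sequence) if target in group]
    if len(hits) < 2:
        return None
    return min(b - a - 1 for a, b in zip(hits, hits[1:]))

# Toy 4-cell "matrix" with a 6-flash pattern; cell 'A' is the target
pattern = [{"A", "B"}, {"C", "D"}, {"B", "C"}, {"A", "D"}, {"B", "D"}, {"C", "B"}]
print(min_target_separation(pattern, "A"))  # 2 non-target flashes between hits
```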
Ma, Zhenling; Wu, Xiaoliang; Yan, Li; Xu, Zhenliang
2017-01-26
With the development of space technology and the performance of remote sensors, high-resolution satellites are continuously launched by countries around the world. Owing to its high efficiency, large coverage, and freedom from ground-access restrictions, satellite imagery has become an important means of acquiring geospatial information. This paper explores geometric processing using satellite imagery without ground control points (GCPs). The outcome of spatial triangulation is introduced for geo-positioning as repeated observation. Results from combining block adjustment with non-oriented new images indicate the feasibility of geometric positioning with the repeated observation. GCPs are a must when high accuracy is demanded in conventional block adjustment; the accuracy of direct georeferencing with repeated observation without GCPs is superior to conventional forward intersection and even approaches that of conventional block adjustment with GCPs. The conclusion is drawn that taking existing oriented imagery as repeated observation enhances the effective utilization of previous spatial triangulation achievements, allowing repeated observation to improve accuracy by increasing the base-height ratio and the number of redundant observations. Georeferencing tests using data from multiple sensors and platforms with repeated observation will be carried out in follow-up research.
Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Brunett, Acacia
2015-04-26
The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact on statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
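One standard frequentist construction consistent with the approach described above is the exact (Clopper-Pearson) confidence interval for a binomial failure probability; a sketch follows. Whether this matches either of the paper's two proposed methods is an assumption.

```python
from scipy import stats

def failure_probability_ci(failures, n_sims, confidence=0.95):
    """Exact (Clopper-Pearson) confidence interval for the failure
    probability estimated from n_sims load-vs-capacity simulations."""
    alpha = 1 - confidence
    lo = (stats.beta.ppf(alpha / 2, failures, n_sims - failures + 1)
          if failures > 0 else 0.0)
    hi = (stats.beta.ppf(1 - alpha / 2, failures + 1, n_sims - failures)
          if failures < n_sims else 1.0)
    return lo, hi

# 3 failures (load >= capacity) in 1000 simulations
print(failure_probability_ci(3, 1000))    # interval width gauges statistical accuracy
print(failure_probability_ci(30, 10000))  # more simulations -> tighter interval
```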
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor); Beckman, Mark
1996-01-01
Several future, and some current, missions use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis was performed using EUVE state vectors that showed the EUVE four-day OBC propagated ephemerides varied in accuracy from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that will optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than one km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would minimize the OBC propagation error. This technique should greatly improve the accuracy of the OBC propagation on-board future spacecraft such as TRMM, WIRE, SWAS, and XTE without increasing complexity in the ground processing.
Identification of facilitators and barriers to residents' use of a clinical reasoning tool.
DiNardo, Deborah; Tilstra, Sarah; McNeil, Melissa; Follansbee, William; Zimmer, Shanta; Farris, Coreen; Barnato, Amber E
2018-03-28
While there is some experimental evidence to support the use of cognitive forcing strategies to reduce diagnostic error in residents, the potential usability of such strategies in the clinical setting has not been explored. We sought to test the effect of a clinical reasoning tool on diagnostic accuracy and to obtain feedback on its usability and acceptability. We conducted a randomized behavioral experiment testing the effect of this tool on diagnostic accuracy on written cases among post-graduate 3 (PGY-3) residents at a single internal medical residency program in 2014. Residents completed written clinical cases in a proctored setting with and without prompts to use the tool. The tool encouraged reflection on concordant and discordant aspects of each case. We used random effects regression to assess the effect of the tool on diagnostic accuracy of the independent case sets, controlling for case complexity. We then conducted audiotaped structured focus group debriefing sessions and reviewed the tapes for facilitators and barriers to use of the tool. Of 51 eligible PGY-3 residents, 34 (67%) participated in the study. The average diagnostic accuracy increased from 52% to 60% with the tool, a difference that just met the test for statistical significance in adjusted analyses (p=0.05). Residents reported that the tool was generally acceptable and understandable but did not recognize its utility for use with simple cases, suggesting the presence of overconfidence bias. A clinical reasoning tool improved residents' diagnostic accuracy on written cases. Overconfidence bias is a potential barrier to its use in the clinical setting.
Accuracy of aging ducks in the U.S. Fish and Wildlife Service Waterfowl Parts Collection Survey
Pearse, Aaron T.; Johnson, Douglas H.; Richkus, Kenneth D.; Rohwer, Frank C.; Cox, Robert R.; Padding, Paul I.
2014-01-01
The U.S. Fish and Wildlife Service conducts an annual Waterfowl Parts Collection Survey to estimate composition of harvested waterfowl by species, sex, and age (i.e., juv or ad). The survey relies on interpretation of duck wings by a group of experienced biologists at annual meetings (hereafter, flyway wingbees). Our objectives were to estimate accuracy of age assignment at flyway wingbees and to explore how accuracy rates may influence bias of age composition estimates. We used banded mallards (Anas platyrhynchos; n = 791), wood ducks (Aix sponsa; n = 242), and blue-winged teal (Anas discors; n = 39) harvested and donated by hunters as our source of birds used in accuracy assessments. We sent wings of donated birds to wingbees after the 2002–2003 and 2003–2004 hunting seasons and compared species, sex, and age determinations made at wingbees with our assessments based on internal and external examination of birds and corresponding banding records. Determinations of species and sex of mallards, wood ducks, and blue-winged teal were accurate (>99%). Accuracy of aging adult mallards increased with harvest date, whereas accuracy of aging juvenile male wood ducks and juvenile blue-winged teal decreased with harvest date. Accuracy rates were highest (96% and 95%) for adult and juvenile mallards, moderate for adult and juvenile wood ducks (92% and 92%), and lowest for adult and juvenile blue-winged teal (84% and 82%). We used these estimates to calculate bias for all possible age compositions (0–100% proportion juv) and determined the range of age compositions estimated with acceptable levels of bias. Comparing these ranges with age compositions estimated from Parts Collection Surveys conducted from 1961 to 2008 revealed that mallard and wood duck age compositions were estimated with insignificant levels of bias in all national surveys. However, 69% of age compositions for blue-winged teal were estimated with an unacceptable level of bias. The low preliminary accuracy rates of aging blue-winged teal based on our limited sample suggest a more extensive accuracy assessment study may be considered for interpreting age compositions of this species.
Reaction time and accuracy in individuals with aphasia during auditory vigilance tasks.
Laures, Jacqueline S
2005-11-01
Research indicates that attentional deficits exist in aphasic individuals. However, relatively little is known about auditory vigilance performance in individuals with aphasia. The current study explores reaction time (RT) and accuracy in 10 aphasic participants and 10 non-brain-damaged controls during linguistic and nonlinguistic auditory vigilance tasks. Findings indicate that the aphasic group was less accurate during both tasks than the control group, but was not slower in their accurate responses. Further examination of the data revealed variability in the aphasic participants' RT contributing to the lower accuracy scores.
Acquisition of Codas in Spanish as a First Language: The Role of Accuracy, Markedness and Frequency
ERIC Educational Resources Information Center
Polo, Nuria
2018-01-01
Studies on the acquisition of Spanish as a first language do not agree on the patterns and factors relevant for coda development. In order to shed light on the questions involved, a longitudinal study of coda development in Northern European Spanish was carried out to explore the relationship between accuracy, markedness and frequency. The study…
ERIC Educational Resources Information Center
Borgna, Georgianna; Convertino, Carol; Marschark, Marc; Morrison, Carolyn; Rizzolo, Kathleen
2011-01-01
Four experiments, each building on the results of the previous ones, explored the effects of several manipulations on learning and the accuracy of metacognitive judgments among deaf and hard-of-hearing (DHH) students. Experiment 1 examined learning and metacognitive accuracy from classroom lectures with or without prior "scaffolding" in the form…
Evidence on the Effectiveness of Comprehensive Error Correction in Second Language Writing
ERIC Educational Resources Information Center
Van Beuningen, Catherine G.; De Jong, Nivja H.; Kuiken, Folkert
2012-01-01
This study investigated the effect of direct and indirect comprehensive corrective feedback (CF) on second language (L2) learners' written accuracy (N = 268). The study set out to explore the value of CF as a revising tool as well as its capacity to support long-term accuracy development. In addition, we tested Truscott's (e.g., 2001, 2007) claims…
ERIC Educational Resources Information Center
Majetic, Cassie; Pellegrino, Catherine
2014-01-01
The skill set associated with lifelong scientific literacy often includes the ability to decode the content and accuracy of scientific research as presented in the media. However, students often find decoding science in the media difficult, due to limited content knowledge and shifting definitions of accuracy. Faculty have developed a variety of…
ERIC Educational Resources Information Center
Li, Zhi; Feng, Hui-Hsien; Saricaoglu, Aysel
2017-01-01
This classroom-based study employs a mixed-methods approach to exploring both short-term and long-term effects of Criterion feedback on ESL students' development of grammatical accuracy. The results of multilevel growth modeling indicate that Criterion feedback helps students in both intermediate-high and advanced-low levels reduce errors in eight…
Testing the exclusivity effect in location memory.
Clark, Daniel P A; Dunn, Andrew K; Baguley, Thom
2013-01-01
There is growing literature exploring the possibility of parallel retrieval of location memories, although this literature focuses primarily on the speed of retrieval with little attention to the accuracy of location memory recall. Baguley, Lansdale, Lines, and Parkin (2006) found that when a person has two or more memories for an object's location, their recall accuracy suggests that only one representation can be retrieved at a time (exclusivity). This finding is counterintuitive given evidence of non-exclusive recall in the wider memory literature. The current experiment explored the exclusivity effect further and aimed to promote an alternative outcome (i.e., independence or superadditivity) by encouraging the participants to combine multiple representations of space at encoding or retrieval. This was encouraged by using anchor (points of reference) labels that could be combined to form a single strongly associated combination. It was hypothesised that the ability to combine the anchor labels would allow the two representations to be retrieved concurrently, generating higher levels of recall accuracy. The results demonstrate further support for the exclusivity hypothesis, showing no significant improvement in recall accuracy when there are multiple representations of a target object's location as compared to a single representation.
Pan, Jianjun
2018-01-01
This paper focuses on evaluating the ability and contribution of using backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification and comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size of the combination of all texture features was 9 × 9, and the optimal window size was different for each individual texture feature. For the four different feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; and the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy up to 91.55% and a kappa coefficient up to 0.8935, respectively. Among all combinations of Sentinel-1A-derived features, the combination of the four features had the best classification result. Multi-sensor urban land cover mapping obtained higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy compared to the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient up to 0.9889. When Sentinel-1A data was added to Hyperion images, the overall accuracy and kappa coefficient were increased by 4.01% and 0.0519, respectively. PMID:29382073
CET exSim: mineral exploration experience via simulation
NASA Astrophysics Data System (ADS)
Wong, Jason C.; Holden, Eun-Jung; Kovesi, Peter; McCuaig, T. Campbell; Hronsky, Jon
2013-08-01
Undercover mineral exploration is a challenging task as it requires understanding of subsurface geology by relying heavily on remotely sensed (i.e. geophysical) data. Cost-effective exploration is essential in order to increase the chance of success using finite budgets. This requires effective decision-making in both the process of selecting the optimum data collection methods and in the process of achieving accuracy during subsequent interpretation. Traditionally, developing the skills, behaviour and practices of exploration decision-making requires many years of experience through working on exploration projects under various geological settings, commodities and levels of available resources. This implies long periods of sub-optimal exploration decision-making, before the necessary experience has been successfully obtained. To address this critical industry issue, our ongoing research focuses on the development of the unique and novel e-learning environment, exSim, which simulates exploration scenarios where users can test their strategies and learn the consequences of their choices. This simulator provides an engaging platform for self-learning and experimentation in exploration decision strategies, providing a means to build experience more effectively. The exSim environment also provides a unique platform on which numerous scenarios and situations (e.g. deposit styles) can be simulated, potentially allowing the user to become virtually familiarised with a broader scope of exploration practices. Harnessing the power of computer simulation, visualisation and an intuitive graphical user interface, the simulator provides a way to assess the user's exploration decisions and subsequent interpretations. In this paper, we present the prototype functionalities in exSim including: simulation of geophysical surveys, follow-up drill testing and interpretation assistive tools.
The Software Design for the Wide-Field Infrared Explorer Attitude Control System
NASA Technical Reports Server (NTRS)
Anderson, Mark O.; Barnes, Kenneth C.; Melhorn, Charles M.; Phillips, Tom
1998-01-01
The Wide-Field Infrared Explorer (WIRE), currently scheduled for launch in September 1998, is the fifth of five spacecraft in the NASA/Goddard Small Explorer (SMEX) series. This paper presents the design of WIRE's Attitude Control System flight software (ACS FSW). WIRE is a momentum-biased, three-axis stabilized stellar pointer which provides high-accuracy pointing and autonomous acquisition for eight to ten stellar targets per orbit. WIRE's short mission life and limited cryogen supply motivate requirements for Sun and Earth avoidance constraints which are designed to prevent catastrophic instrument damage and to minimize the heat load on the cryostat. The FSW implements autonomous fault detection and handling (FDH) to enforce these instrument constraints and to perform several other checks which ensure the safety of the spacecraft. The ACS FSW implements modules for sensor data processing, attitude determination, attitude control, guide star acquisition, actuator command generation, command/telemetry processing, and FDH. These software components are integrated with a hierarchical control mode managing module that dictates which software components are currently active. The lowest mode in the hierarchy is the 'safest' one, in the sense that it utilizes a minimal complement of sensors and actuators to keep the spacecraft in a stable configuration (power and pointing constraints are maintained). As higher modes in the hierarchy are achieved, the various software functions are activated by the mode manager, and an increasing level of attitude control accuracy is provided. If FDH detects a constraint violation or other anomaly, it triggers a safing transition to a lower control mode. The WIRE ACS FSW satisfies all target acquisition and pointing accuracy requirements, enforces all pointing constraints, provides the ground with a simple means for reconfiguring the system via table load, and meets all the demands of its real-time embedded environment (16 MHz Intel 80386 processor with 80387 coprocessor running under the VRTX operating system). The mode manager organizes and controls all the software modules used to accomplish these goals, and in particular, the FDH module is tightly coupled with the mode manager.
Forecasting daily patient volumes in the emergency department.
Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L
2008-02-01
Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
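A minimal version of the benchmark approach that the study endorses, multiple linear regression on calendar variables, is sketched below with pandas and statsmodels; the simulated arrival counts and the particular covariates (day of week, month) are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily ED arrival counts with weekly seasonality
rng = np.random.default_rng(3)
dates = pd.date_range("2005-01-01", "2007-03-31", freq="D")
base = 180 + 15 * (dates.dayofweek == 0) - 10 * (dates.dayofweek >= 5)
df = pd.DataFrame({"arrivals": rng.poisson(base),
                   "dow": dates.dayofweek.astype(str),
                   "month": dates.month.astype(str)})

# Benchmark-style multiple linear regression on calendar variables
model = smf.ols("arrivals ~ C(dow) + C(month)", data=df).fit()
df["forecast"] = model.predict(df)
mape = (abs(df.arrivals - df.forecast) / df.arrivals).mean() * 100
print(f"in-sample MAPE: {mape:.1f}%")
```

A fuller replication would also add site-specific special-day indicators (e.g., holidays) and model residual autocorrelation, which is what the study found made regression-based models consistently accurate.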
Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa
2018-01-01
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2*-weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2-weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2*-weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2*-weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity. However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2-weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked. Copyright © 2017 Elsevier Inc. All rights reserved.
Zhang, Yan-Cong; Lin, Kui
2015-01-01
Overlapping genes (OGs) represent one type of widespread genomic feature in bacterial genomes and have been used as rare genomic markers in phylogeny inference of closely related bacterial species. However, the inference may experience a decrease in performance for phylogenomic analysis of too closely or too distantly related genomes. Another drawback of OGs as phylogenetic markers is that they usually take little account of the effects of genomic rearrangement on the similarity estimation, such as intra-chromosome/genome translocations, horizontal gene transfer, and gene losses. To explore such effects on the accuracy of phylogeny reconstruction, we combine phylogenetic signals of OGs with collinear genomic regions, here called locally collinear blocks (LCBs). By putting these together, we refine our previous metric of pairwise similarity between two closely related bacterial genomes. As a case study, we used this new method to reconstruct the phylogenies of 88 Enterobacteriales genomes of the class Gammaproteobacteria. Our results demonstrated that the topological accuracy of the inferred phylogeny was improved when both OGs and LCBs were simultaneously considered, suggesting that combining these two phylogenetic markers may reduce, to some extent, the influence of gene loss on phylogeny inference. Such phylogenomic studies, we believe, will help us to explore a more effective approach to increasing the robustness of phylogeny reconstruction of closely related bacterial organisms. PMID:26715828
Comparison of two on-orbit attitude sensor alignment methods
NASA Technical Reports Server (NTRS)
Krack, Kenneth; Lambertson, Michael; Markley, F. Landis
1990-01-01
Compared here are two methods of on-orbit alignment of vector attitude sensors. The first method uses the angular difference between simultaneous measurements from two or more sensors. These angles are compared to the angular differences between the respective reference positions of the sensed objects. The alignments of the sensors are adjusted to minimize the difference between the two sets of angles. In the second method, the sensor alignment is part of a state vector that includes the attitude. The alignments are adjusted along with the attitude to minimize all observation residuals. It is shown that the latter method can result in much less alignment uncertainty when gyroscopes are used for attitude propagation during the alignment estimation. The additional information for this increased accuracy comes from knowledge of relative attitude obtained from the spacecraft gyroscopes. The theoretical calculations of this difference in accuracy are presented. Also presented are numerical estimates of the alignment uncertainties of the fixed-head star trackers on the Extreme Ultraviolet Explorer spacecraft using both methods.
Age differences in accuracy and choosing in eyewitness identification and face recognition.
Searcy, J H; Bartlett, J C; Memon, A
1999-05-01
Studies of aging and face recognition show age-related increases in false recognitions of new faces. To explore implications of this false alarm effect, we had young and senior adults perform (1) three eyewitness identification tasks, using both target present and target absent lineups, and (2) an old/new recognition task in which a study list of faces was followed by a test including old and new faces, along with conjunctions of old faces. Compared with the young, seniors had lower accuracy and higher choosing rates on the lineups, and they also falsely recognized more new faces on the recognition test. However, after screening for perceptual processing deficits, there was no age difference in false recognition of conjunctions, or in discriminating old faces from conjunctions. We conclude that the false alarm effect generalizes to lineup identification, but does not extend to conjunction faces. The findings are consistent with age-related deficits in recollection of context and relative age invariance in perceptual integrative processes underlying the experience of familiarity.
ERIC Educational Resources Information Center
Hintze, John M.; Ryan, Amanda L.; Stoner, Gary
2003-01-01
The purpose of this study was to (a) examine the concurrent validity of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) with the Comprehensive Test of Phonological Processing (CTOPP), and (b) explore the diagnostic accuracy of the DIBELS in predicting CTOPP performance using suggested and alternative cut-scores. Eighty-six students…
ERIC Educational Resources Information Center
Farrokhi, Farahman; Sattarpour, Simin
2012-01-01
The present article reports the findings of a study that explored(1) whether direct written corrective feedback (CF) can help high-proficient L2 learners, who has already achieved a rather high level of accuracy in English, improve in the accurate use of two functions of English articles (the use of "a" for first mention and…
ERIC Educational Resources Information Center
Schiff, Rachel; Katzir, Tami; Shoshan, Noa
2013-01-01
The present study examined the effects of orthographic transparency on reading ability of children with dyslexia in two Hebrew scripts. The study explored the reading accuracy and speed of vowelized and unvowelized Hebrew words of fourth-grade children with dyslexia. A comparison was made to typically developing readers of two age groups: a group…
Investigating effects of communications modulation technique on targeting performance
NASA Astrophysics Data System (ADS)
Blasch, Erik; Eusebio, Gerald; Huling, Edward
2006-05-01
One of the key challenges facing the global war on terrorism (GWOT) and urban operations is the increased need for rapid and diverse information from distributed sources. For users to get adequate information on target types and movements, they would need reliable data. In order to facilitate reliable computational intelligence, we seek to explore the communication modulation tradeoffs affecting information distribution and accumulation. In this analysis, we explore the modulation techniques of Orthogonal Frequency Division Multiplexing (OFDM), Direct Sequence Spread Spectrum (DSSS), and statistical time-division multiple access (TDMA) as a function of the bit error rate and jitter that affect targeting performance. In the analysis, we simulate a Link 16 with a simple bandpass frequency shift keying (FSK) technique using different Signal-to-Noise ratios. The communications transfer delay and accuracy tradeoffs are assessed as to the effects incurred in targeting performance.
Positioning for the Chang'E-3 lander and rover using Earth-based Observations
NASA Astrophysics Data System (ADS)
Li, P.; Huang, Y.; Hu, X.; Shengqi, C.
2016-12-01
China's first lunar lander, Chang'E-3, performed a soft landing on the Moon on 14 December 2013. Precise positioning of the lander and rover was the most important precondition and guarantee for a successful lunar surface exploration. In this study, first, the tracking system, measurement models, and positioning method are discussed in detail. Second, the location of the CE-3 lander was determined: 44.1206°N, -19.5124°E, -2632 m (altitude relative to a reference lunar radius of 1737.4 km), and the analysis showed that the VLBI (Very Long Baseline Interferometry) data were able to significantly improve the positioning accuracy. Furthermore, the positioning error was evaluated in various ways; the result was better than 50 m. Finally, the relative positioning of the rover and lander using Earth-based observations was studied and compared with the optical positioning method using photographs taken by the lander and rover. The method applied in this study was not limited by the visible range of the lander, and the relative positioning accuracy did not decrease as the distance between the lander and rover increased. The results indicated that under the current tracking and measuring conditions, the relative positioning accuracy was about 100 m using same-beam VLBI group delay data with nanosecond-level noise. Furthermore, using same-beam VLBI phase delay data with picosecond-level noise, it was possible to significantly improve the relative positioning accuracy to the order of 1 m.
Survey methods for assessing land cover map accuracy
Nusser, S.M.; Klaas, E.E.
2003-01-01
The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.
Parsimonious data: How a single Facebook like predicts voting behavior in multiparty systems
Albrechtsen, Thomas; Dahl-Nielsen, Emil; Jensen, Michael; Skovrind, Magnus
2017-01-01
This study shows how liking politicians’ public Facebook posts can be used as an accurate measure for predicting present-day voter intention in a multiparty system. We highlight that a few carefully selected digital traces produce prediction accuracies on par with or even greater than those of most current approaches based upon bigger and broader datasets. Combining the online and offline, we connect a subsample of surveyed respondents to their public Facebook activity and apply machine learning classifiers to explore the link between their political liking behaviour and actual voting intention. Through this work, we show that even a single selective Facebook like can reveal as much about political voter intention as hundreds of heterogeneous likes. Further, by including the entire political like history of the respondents, our model reaches prediction accuracies above previous multiparty studies (60–70%). The main contribution of this paper is to show how public like-activity on Facebook allows political profiling of individual users in a multiparty system with accuracies above previous studies. Besides increased accuracies, the paper shows how such parsimonious measures allow us to generalize our findings to the entire population of a country and even across national borders, to other political multiparty systems. The approach in this study relies on data that are publicly available, and the simple setup we propose can, with some limitations, be generalized to millions of users in other multiparty systems. PMID:28931023
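A minimal sketch of the kind of classifier the study describes, run on purely synthetic like data; the user counts, party structure, and the choice of logistic regression are illustrative stand-ins, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in for the study's data: rows are respondents,
# columns are politicians' public posts, entries are binary likes.
n_users, n_posts, n_parties = 1000, 200, 5
party = rng.integers(n_parties, size=n_users)           # surveyed vote intention
affinity = rng.random((n_parties, n_posts)) ** 3        # sparse, party-skewed liking
likes = (rng.random((n_users, n_posts)) < affinity[party]).astype(int)

# A linear classifier over the like matrix; the paper's point is that even
# very few, well-chosen likes carry most of the predictive signal.
clf = LogisticRegression(max_iter=1000)
print("full like history:", cross_val_score(clf, likes, party, cv=5).mean())
print("one like column  :", cross_val_score(
    clf, likes[:, :1], party, cv=5).mean())  # a single like (illustrative pick)
```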
NASA Astrophysics Data System (ADS)
Hammann, Mark Gregory
The fusion of electro-optical (EO) multi-spectral satellite imagery with Synthetic Aperture Radar (SAR) data was explored with the working hypothesis that the addition of multi-band SAR will increase the land-cover (LC) classification accuracy compared to EO alone. Three satellite sources for SAR imagery were used: X-band from TerraSAR-X, C-band from RADARSAT-2, and L-band from PALSAR. Images from the RapidEye satellites were the source of the EO imagery. Imagery from the GeoEye-1 and WorldView-2 satellites aided the selection of ground truth. Three study areas were chosen: Wad Medani, Sudan; Campinas, Brazil; and Fresno-Kings Counties, USA. EO imagery was radiometrically calibrated, atmospherically compensated, orthorectified, co-registered, and clipped to a common area of interest (AOI). SAR imagery was radiometrically calibrated and geometrically corrected for terrain and incidence angle by converting to ground range and Sigma Naught (σ0). The original SAR HH data were included in the fused image stack after despeckling with a 3x3 Enhanced Lee filter. The variance and Gray-Level Co-occurrence Matrix (GLCM) texture measures of contrast, entropy, and correlation were derived from the non-despeckled SAR HH bands. Data fusion was done with layer stacking, and all data were resampled to a common spatial resolution. The Support Vector Machine (SVM) decision rule was used for the supervised classifications. Similar LC classes were identified and tested for each study area. For Wad Medani, nine classes were tested: low- and medium-intensity urban, sparse forest, water, barren ground, and four agriculture classes (fallow, bare agricultural ground, green crops, and orchards). For Campinas, Brazil, five generic classes were tested: urban, agriculture, forest, water, and barren ground. For the Fresno-Kings Counties location, 11 classes were studied: three generic classes (urban, water, barren land) and eight specific crops. In all cases the addition of SAR to EO resulted in higher overall classification accuracies. In many cases using more than a single SAR band also improved the classification accuracy. There was no single best SAR band for all cases; for specific study areas or LC classes, different SAR bands were better. For Wad Medani, the overall accuracy increased nearly 25% over EO by using all three SAR bands and GLCM texture. For Campinas, the improvement over EO was 4.3%; the large areas of vegetation were classified by EO with good accuracy. At Fresno-Kings Counties, EO+SAR fusion improved the overall classification accuracy by 7%. For times or regions where EO is not available due to extended cloud cover, classification with SAR is often the only option; note that SAR alone typically results in lower classification accuracies than when using EO or EO-SAR fusion. Fusion of EO and SAR was especially important for improving the separability of orchards from other crops and for separating urban areas with buildings from bare soil; those classes are difficult to accurately separate with EO. The outcome of this dissertation contributes to the understanding of the benefits of combining data from EO imagery with different SAR bands and SAR-derived texture data to identify different LC classes. In times of increased public and private budget constraints and industry consolidation, this dissertation provides insight as to which band packages could be most useful for increased accuracy in LC classification.
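A hedged sketch of the layer-stacking-plus-SVM comparison described above, using synthetic per-pixel samples; the band counts, class structure, and resulting accuracy gap are invented for illustration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-pixel samples: 5 EO bands, plus a SAR sigma-naught band
# and 3 GLCM texture measures, assumed already co-registered and resampled.
n = 3000
labels = rng.integers(0, 6, size=n)                    # 6 land-cover classes
eo = rng.normal(labels[:, None], 1.0, size=(n, 5))     # EO separates classes partly
sar = rng.normal((labels[:, None] % 3), 0.7, (n, 4))   # SAR adds complementary info

def svm_accuracy(features):
    Xtr, Xte, ytr, yte = train_test_split(features, labels, random_state=0)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(Xtr, ytr)
    return accuracy_score(yte, model.predict(Xte))

print("EO only  :", svm_accuracy(eo))
print("EO + SAR :", svm_accuracy(np.hstack([eo, sar])))  # layer stacking
```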
Devakumar, Delan; Grijalva-Eternod, Carlos S; Roberts, Sebastian; Chaube, Shiva Shankar; Saville, Naomi M; Manandhar, Dharma S; Costello, Anthony; Osrin, David; Wells, Jonathan C K
2015-01-01
Background. Body composition is important as a marker of both current and future health. Bioelectrical impedance analysis (BIA) is a simple and accurate method for estimating body composition, but requires population-specific calibration equations. Objectives. (1) To generate population-specific calibration equations to predict lean mass (LM) from BIA in Nepalese children aged 7-9 years. (2) To explore methodological changes that may extend the range and improve accuracy. Methods. BIA measurements were obtained from 102 Nepalese children (52 girls) using the Tanita BC-418. Isotope dilution with deuterium oxide was used to measure total body water and to estimate LM. Prediction equations for estimating LM from BIA data were developed using linear regression, and estimates were compared with those obtained from the Tanita system. We assessed the effects of flexing the arms of children to extend the range of coverage towards lower weights. We also estimated the potential error if the number of children included in the study was reduced. Findings. Prediction equations were generated, incorporating height, impedance index, weight, and sex as predictors (R² = 93%). The Tanita system tended to underestimate LM, with a mean error of 2.2% but errors extending up to 25.8%. Flexing the arms to 90° increased the lower weight range, but produced a small error that was not significant when applied to children <16 kg (p = 0.42). Reducing the number of children increased the error at the tails of the weight distribution. Conclusions. Population-specific isotope calibration of BIA for Nepalese children has high accuracy. Arm position is important and can be used to extend the range of low weight covered. Smaller samples reduce resource requirements but lead to large errors at the tails of the weight distribution.
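A minimal sketch of the calibration-equation fit described above, on made-up anthropometric data; the coefficients and noise are invented, and the impedance index (height²/Z) is the usual BIA predictor rather than the study's exact published equation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data standing in for the 102 children: height (cm),
# impedance Z (ohm), weight (kg), sex (0/1), and isotope-dilution lean mass (kg).
n = 102
height = rng.normal(125, 8, n)
z = rng.normal(700, 60, n)
weight = rng.normal(22, 4, n)
sex = rng.integers(0, 2, n)
zi = height**2 / z                       # impedance index, the key BIA predictor
lean = 0.6 * zi + 0.05 * height + 0.2 * weight + 0.5 * sex + rng.normal(0, 0.8, n)

# Linear regression of criterion lean mass on the BIA predictors.
X = np.column_stack([height, zi, weight, sex])
model = LinearRegression().fit(X, lean)
print("R^2 =", round(model.score(X, lean), 3))
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```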
Increasing Deception Detection Accuracy with Strategic Questioning
ERIC Educational Resources Information Center
Levine, Timothy R.; Shaw, Allison; Shulman, Hillary C.
2010-01-01
One explanation for the finding of slightly above-chance accuracy in detecting deception experiments is limited variance in sender transparency. The current study sought to increase accuracy by increasing variance in sender transparency with strategic interrogative questioning. Participants (total N = 128) observed cheaters and noncheaters who…
Remote Sensing of Clouds for Solar Forecasting Applications
NASA Astrophysics Data System (ADS)
Mejia, Felipe
A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based Sky Imager (USI) is presented. The Radiance Red-Blue Ratio (RRBR) method is motivated from the analysis of simulated images of various τc produced by a Radiative Transfer Model (RTM). From these images the basic parameters affecting the radiance and RBR of a pixel are identified as the solar zenith angle (SZA), τc, solar pixel angle/scattering angle (SPA), and pixel zenith angle/view angle (PZA). The effects of these parameters are described, and the functions for radiance, Iλ(τc, SZA, SPA, PZA), and the red-blue ratio, RBR(τc, SZA, SPA, PZA), are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc, where RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured radiance Iλmeas(SPA, PZA), in addition to RBRmeas(SPA, PZA), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement program (ARM) site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min [MH96a] method for overcast skies. τc values ranged from 0-80, with values over 80 being capped and registered as 80. A τc RMSE of 2.5 between the Min method [MH96b] and the USI is observed. The MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. The procedure developed here provides a foundation to test and develop other cloud detection algorithms. Using the RRBR τc estimate as an input, we then explore the potential of using tomographic techniques for 3-D cloud reconstruction. The Algebraic Reconstruction Technique (ART) is applied to optical depth maps from sky images to reconstruct 3-D cloud extinction coefficients. Reconstruction accuracy is explored for different products, including surface irradiance, extinction coefficients (k), and Liquid Water Path, as a function of the number of available sky imagers (SIs) and setup distance. Increasing the number of cameras improves the accuracy of the 3-D reconstruction: for surface irradiance, the error decreases significantly up to four imagers, at which point the improvements become marginal, while the k error continues to decrease with more cameras. The ideal distance between imagers was also explored: for a cloud height of 1 km, increasing the distance up to 3 km (the domain length) improved the 3-D reconstruction of surface irradiance, while the k error continued to decrease with increasing distance. An iterative reconstruction technique was also used to improve the results of the ART by minimizing the error between input images and reconstructed simulations. For the best case of a nine-imager deployment, the ART and iterative method resulted in 53.4% and 33.6% mean average error (MAE) for the extinction coefficients, respectively. The tomographic methods were then tested on real-world test cases in the University of California San Diego's (UCSD) solar testbed. Five UCSD Sky Imagers (USIs) were installed across the testbed based on the best-performing distances in simulations. Topographic obstruction is explored as a source of error by analyzing the increase in error with obstruction of the horizon in the field of view: as more of the horizon is obstructed, the error increases.
If a field of view of at least 70° is available to the camera, the accuracy is within 2% of the full field of view. Errors caused by stray light are also explored by removing the circumsolar region from images and comparing the cloud reconstruction to that from a full image. When less than 30% of the circumsolar region was removed, image and GHI errors were within 0.2% of the full image, while errors in k increased by 1%. Removing more than 30° around the sun resulted in inaccurate cloud reconstruction. Using four of the five USIs, a 3-D cloud was reconstructed and compared to the fifth camera: the image of the fifth camera (excluded from the reconstruction) was simulated and found to have a 22.9% error compared to the ground truth.
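For readers unfamiliar with the Algebraic Reconstruction Technique used above, here is a minimal Kaczmarz-style sketch on a toy geometry; the ray matrix and 5x5 "cloud" are invented and stand in for real per-imager optical depth maps.

```python
import numpy as np

def art(A, b, n_iter=50, relax=0.5):
    """Algebraic Reconstruction Technique (Kaczmarz sweeps): iteratively
    projects the estimate onto each ray equation a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for a, bi, n2 in zip(A, b, row_norm2):
            if n2 > 0:
                x += relax * (bi - a @ x) / n2 * a
        np.clip(x, 0, None, out=x)   # extinction coefficients are non-negative
    return x

# Tiny illustration: a sparse 2-D "cloud" of extinction coefficients observed
# along a few rays (A encodes per-ray path lengths through grid cells).
rng = np.random.default_rng(0)
x_true = rng.random(25) * (rng.random(25) > 0.6)    # sparse 5x5 cloud
A = (rng.random((40, 25)) < 0.2).astype(float)      # 40 rays, ~20% cell hits
b = A @ x_true                                      # per-ray optical depths
x_rec = art(A, b)
print("MAE:", np.abs(x_rec - x_true).mean())
```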
Competency-based assessment in surgeon-performed head and neck ultrasonography: A validity study.
Todsen, Tobias; Melchiors, Jacob; Charabi, Birgitte; Henriksen, Birthe; Ringsted, Charlotte; Konge, Lars; von Buchwald, Christian
2018-06-01
Head and neck ultrasonography (HNUS) is increasingly used as a point-of-care diagnostic tool by otolaryngologists. However, ultrasonography (US) is a very operator-dependent imaging modality. Hence, this study aimed to explore the diagnostic accuracy of surgeon-performed HNUS and to establish validity evidence for an objective structured assessment of ultrasound skills (OSAUS) used for competency-based assessment. A prospective experimental study. Six otolaryngologists and 11 US novices were included in a standardized test setup in which they had to perform focused HNUS of eight patients suspected of having different head and neck lesions. Their diagnostic accuracy was calculated based on the US reports, and two blinded raters assessed the video-recorded US performance using the OSAUS scale. The otolaryngologists obtained a high diagnostic accuracy of 88% (range, 63%-100%) compared to the US novices' 38% (range, 0%-63%); P < 0.001. The OSAUS score demonstrated good inter-case reliability (0.85) and inter-rater reliability (0.76), and significant discrimination between otolaryngologists and US novices; P < 0.001. A strong correlation between the OSAUS score and diagnostic accuracy was found (Spearman's ρ = 0.85; P < 0.001), and a pass/fail score was established at 2.8. Strong validity evidence supported the use of the OSAUS scale to assess HNUS competence, with good reliability, significant discrimination between US competence levels, and a strong correlation of assessment score to diagnostic accuracy. An OSAUS pass/fail score was established and could be used for competency-based assessment in surgeon-performed HNUS. Laryngoscope, 128:1346-1352, 2018.
Tang, Yunwei; Jing, Linhai; Li, Hui; Liu, Qingjie; Yan, Qi; Li, Xiuxia
2016-11-22
This study explores the ability of WorldView-2 (WV-2) imagery for bamboo mapping in a mountainous region in Sichuan Province, China. A large part of the study area is covered by shadows in the image, and only a few of the derived sample points were useful. In order to identify bamboos based on sparse training data, the sample size was expanded according to the reflectance of multispectral bands selected using principal component analysis (PCA). Then, class separability based on the training data was calculated using a feature space optimization method to select the features for classification. Four regular object-based classification methods were applied based on both sets of training data. The results show that the k-nearest neighbor (k-NN) method produced the greatest accuracy. A geostatistically weighted k-NN classifier, accounting for the spatial correlation between classes, was then applied to further increase the accuracy. It achieved 82.65% and 93.10% producer's and user's accuracies, respectively, for the bamboo class. The canopy densities were estimated to explain the result. This study demonstrates that the WV-2 image can be used to identify small patches of understory bamboos given limited known samples, and the resulting bamboo distribution facilitates the assessment of giant panda habitats.
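A simplified stand-in for the geostatistically weighted k-NN idea: the sketch below weights each neighbor's vote by inverse spectral distance times a spatial proximity term. The exponential decay weight and all data are assumptions; the paper's geostatistical weighting is more elaborate.

```python
import numpy as np

def weighted_knn(train_X, train_y, train_xy, query_X, query_xy, k=5, a=0.5):
    """k-NN where each neighbor's vote is weighted by inverse spectral
    distance times a spatial proximity term (a toy proxy for accounting
    for spatial correlation between classes)."""
    preds = []
    for xq, pq in zip(query_X, query_xy):
        d_spec = np.linalg.norm(train_X - xq, axis=1)
        idx = np.argsort(d_spec)[:k]                     # k spectral neighbors
        d_geo = np.linalg.norm(train_xy[idx] - pq, axis=1)
        w = 1.0 / (d_spec[idx] + 1e-9) * np.exp(-a * d_geo)
        votes = np.bincount(train_y[idx], weights=w)
        preds.append(votes.argmax())
    return np.array(preds)

# Toy use: 2 spectral bands, 2 classes, map coordinates in km.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)); y = (X[:, 0] > 0).astype(int)
xy = rng.random((100, 2)) * 10
print(weighted_knn(X, y, xy, X[:5], xy[:5]))
```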
Volumetric calibration of a plenoptic camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert
Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
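A minimal sketch of the polynomial-mapping idea behind volumetric dewarping, assuming a fabricated distortion and dot-card data; the degree-3 mapping and residual metric are illustrative choices, not the authors' implementation.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_terms(pts, degree=3):
    """All monomials of (x, y, z) up to the given degree."""
    cols = [np.ones(len(pts))]
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement(range(3), d):
            cols.append(np.prod(pts[:, list(combo)], axis=1))
    return np.column_stack(cols)

# Hypothetical calibration data: dot positions reconstructed under thin-lens
# assumptions vs. their known positions on a dot card at several depths.
rng = np.random.default_rng(0)
true_pts = rng.uniform(-1, 1, size=(300, 3))
distort = lambda p: p + 0.05 * p**3 - 0.02 * p[:, [1, 2, 0]]**2  # fake distortion
recon_pts = distort(true_pts) + 1e-3 * rng.normal(size=true_pts.shape)

# Least-squares fit of the polynomial mapping recon -> true (volumetric dewarping).
T = poly_terms(recon_pts)
coef, *_ = np.linalg.lstsq(T, true_pts, rcond=None)
resid = T @ coef - true_pts
print("RMS residual:", np.sqrt((resid**2).mean()))
```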
Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping
NASA Astrophysics Data System (ADS)
Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta
2012-10-01
A mobile mapping system (MMS) is the answer of the geoinformation community to the exponentially growing demand for various geospatial data with increasingly higher accuracies and captured by multiple sensors. As the mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges to have an external sensor calibration procedure which is portable, fast, and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust, and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual-information-based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests are done under various lighting conditions which prove the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.
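A minimal sketch of mutual-information image registration as used for aligning the ranging pole, computed from a joint histogram; the 1-D shift search and synthetic images are illustrative stand-ins for the full pose estimation.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images from their joint histogram;
    higher MI indicates better alignment in multimodal registration."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum()

# Illustrative alignment search: slide a window over a scene and keep the
# shift with maximal MI (a 1-D stand-in for pose search on the pole).
rng = np.random.default_rng(0)
scene = rng.random((100, 120))
template = scene[:, 20:100]
scores = [mutual_information(scene[:, s:s + 80], template) for s in range(40)]
print("best shift:", int(np.argmax(scores)))  # expected: 20
```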
Researches on the Orbit Determination and Positioning of the Chinese Lunar Exploration Program
NASA Astrophysics Data System (ADS)
Li, P. J.
2015-07-01
This dissertation studies the precise orbit determination (POD) and positioning of the Chinese lunar exploration spacecraft, emphasizing the variety of VLBI (very long baseline interferometry) technologies applied to deep-space exploration and their contributions to the methods and accuracies of precise orbit determination and positioning. In summary, the main contents are as follows: In this work, using the real-time data measured by the CE-2 (Chang'E-2) detector, the accuracy of orbit determination is analyzed for a domestic lunar probe under present conditions, and the role played by the VLBI tracking data is reassessed through precision orbit determination experiments for CE-2. The experiments of short-arc orbit determination for the lunar probe show that the combination of the ranging and VLBI data with an arc of 15 minutes is able to improve the accuracy by 1-1.5 orders of magnitude, compared to the cases using only the ranging data with an arc of 3 hours. The orbital accuracy is assessed through orbital overlapping analysis, and the results show that the VLBI data contribute to CE-2's long-arc POD, especially in the along-track and orbital normal directions. For CE-2's 100 km × 100 km lunar orbit, the position errors are better than 30 meters, and for CE-2's 15 km × 100 km orbit, the position errors are better than 45 meters. The observational data with delta differential one-way ranging (ΔDOR) from CE-2's X-band monitoring and control system experiment are analyzed. It is concluded that the accuracy of the ΔDOR delay is dramatically improved, with a noise level better than 0.1 ns, and the systematic errors are well calibrated. Although it is unable to support the development of an independent lunar gravity model, the tracking data of CE-2 provided evaluations of different lunar gravity models through POD, and the accuracies are examined in terms of orbit-to-orbit solution differences for several gravity models. It is found that for the 100 km × 100 km lunar orbit, with a degree and order expansion up to 165, JPL's gravity model LP165P does not show noticeable improvement over Japan's SGM series models (100 × 100), but for the 15 km × 100 km lunar orbit, a higher degree-order model can significantly improve the orbit accuracy. After accomplishing its nominal mission, CE-2 launched its extended missions, which involved the L2 mission and the 4179 Toutatis mission. During the flight of the extended missions, the regime offers very little dynamics and thus requires an extensive amount of time and tracking data in order to attain a solution. The overlap errors are computed, and it is indicated that the use of VLBI measurements is able to increase the accuracy and reduce the total amount of tracking time. An orbit determination method based on polynomial fitting is proposed for CE-3's planned lunar soft-landing mission. In this method, the spacecraft's dynamic modeling is not necessary, and its noise reduction is expected to be better than that of the point positioning method by making full use of all-arc observational data. The simulation experiments and real data processing showed that the optimal description of CE-1's free-fall landing trajectory is a set of fifth-order polynomial functions for each of the position components as well as the velocity components in J2000.0.
The combination of the VLBI delay, the delay rate data, and the USB (unified S-band) ranging data significantly improved the accuracy compared to the use of USB data alone. In order to determine the position of CE-3's lunar lander, a kinematic statistical method is proposed. This method uses both ranging and VLBI measurements of the lander over a continuous arc, combined with precise knowledge of the motion of the Moon as provided by a planetary ephemeris, to estimate the lander's position on the lunar surface with high accuracy. Application of a lunar digital elevation model (DEM) as a constraint in the lander positioning is helpful. The positioning method for the traverse of the lunar rover is also investigated. The integration of delay-rate data achieves more precise positioning results than the point positioning method. This method broadens the application of the VLBI data. In the automated sample-return mission, lunar orbit rendezvous and docking are involved. Precise orbit determination using same-beam VLBI (SBI) measurements of two spacecraft at the same time is analyzed. The simulation results showed that the SBI data are able to improve the absolute and relative orbit accuracy of the two targets by 1-2 orders of magnitude. In order to verify the simulation results and test the two-target POD software developed by SHAO (Shanghai Astronomical Observatory), the real SBI data of SELENE (Selenological and Engineering Explorer) are processed. The POD results for Rstar and Vstar showed that the combination of SBI data could significantly improve the accuracy for the two spacecraft, especially for Vstar with less ranging data, whose POD accuracy is improved by approximately one order of magnitude compared to that of Rstar.
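A minimal sketch of the polynomial-fitting idea for the landing trajectory: fit a fifth-order polynomial per position component and differentiate it for velocity. The descent profile, noise level, and time span below are invented.

```python
import numpy as np

# Fit each position component of a free-fall landing trajectory with a
# fifth-order polynomial in time, from (simulated) noisy tracking positions.
rng = np.random.default_rng(0)
t = np.linspace(0, 600, 400)                           # descent time, s (made up)
true_x = 1737.4e3 + 5.0*t - 0.8e-3*t**3 + 2e-6*t**4    # arbitrary smooth profile, m
obs_x = true_x + 30.0 * rng.normal(size=t.size)        # ~30 m measurement noise

coeffs = np.polyfit(t, obs_x, deg=5)                   # least-squares fit
fit_x = np.polyval(coeffs, t)
vel_x = np.polyval(np.polyder(coeffs), t)              # velocity from the same fit
print("position RMS error [m]:", np.sqrt(((fit_x - true_x)**2).mean()))
```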
Perrin, Maxine; Robillard, Manon; Roy-Charland, Annie
2017-12-01
This study examined eye movements during a visual search task as well as cognitive abilities within three age groups. The aim was to explore scanning patterns across symbol grids and to better understand the impact of symbol location in AAC displays on the speed and accuracy of symbol selection. For the study, 60 students were asked to locate a series of symbols on 16-cell grids. The EyeLink 1000 was used to measure eye movements, accuracy, and response time. Accuracy was high across all cells. Participants had faster response times, longer fixations, and more frequent fixations on symbols located in the middle of the grid. Group comparisons revealed significant differences for accuracy and reaction times. The Leiter-R was used to evaluate cognitive abilities. Sustained attention and cognitive flexibility scores predicted the participants' reaction time and accuracy in symbol selection. Findings suggest that symbol location within AAC devices and individuals' cognitive abilities influence the speed and accuracy of retrieving symbols.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.
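To illustrate the kernel splitting at the heart of multilevel summation, the sketch below splits 1/r into a short-range part that vanishes beyond a cutoff and a smooth long-range part. The quadratic softening function is one common choice; the real method applies the split recursively and interpolates the smooth parts from grids (with B-splines, as the paper explores).

```python
import numpy as np

def gamma(rho):
    """Smooth softened kernel: equals 1/rho for rho >= 1 and a C^1
    quadratic Taylor softening (3 - rho^2)/2 inside the cutoff."""
    rho = np.asarray(rho, dtype=float)
    return np.where(rho >= 1.0, 1.0 / np.maximum(rho, 1e-12), (3.0 - rho**2) / 2.0)

def split_kernel(r, a):
    """Two-level split of 1/r: a short-range part that vanishes beyond the
    cutoff a, plus a smooth long-range part suitable for grid interpolation."""
    short = np.where(r < a, 1.0 / r - gamma(r / a) / a, 0.0)
    long_ = gamma(r / a) / a
    return short, long_

r = np.linspace(0.1, 3.0, 6)
short, long_ = split_kernel(r, a=1.0)
print(np.allclose(short + long_, 1.0 / r))  # parts sum exactly to 1/r: True
```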
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy
Tessler, Morgan P.
2016-01-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings. PMID:27221370
Exploring Capabilities of SENTINEL-2 for Vegetation Mapping Using Random Forest
NASA Astrophysics Data System (ADS)
Saini, R.; Ghosh, S. K.
2018-04-01
Accurate vegetation mapping is essential for monitoring crops and sustainable agricultural practice. This study aims to explore the capabilities of Sentinel-2 data over Landsat-8 Operational Land Imager (OLI) data for vegetation mapping. Two combinations of the Sentinel-2 dataset have been considered: the first is a 4-band dataset at 10 m resolution consisting of the NIR, R, G and B bands, while the second is generated by stacking the four 10 m bands with the other six bands, sharpened using the Gram-Schmidt algorithm. For the Landsat-8 OLI dataset, six multispectral bands have been pan-sharpened to a spatial resolution of 15 m using the Gram-Schmidt algorithm. Random Forest (RF) and the Maximum Likelihood Classifier (MLC) were selected for classification of the images. It is found that the overall accuracies achieved by RF for the 4-band and 10-band Sentinel-2 datasets and Landsat-8 OLI are 88.38%, 90.05% and 86.68%, respectively, while MLC gives overall accuracies of 85.12%, 87.14% and 83.56%, respectively. The 10-band Sentinel-2 dataset gives the highest accuracy, a rise of 3.37% for RF and 3.58% for MLC compared to Landsat-8 OLI. All classes show improvement in accuracy, but the largest gains are observed for Sugarcane, Wheat and Fodder with the 10-band Sentinel-2 imagery. This study substantiates the fact that Sentinel-2 data can be utilized for mapping of vegetation with a good degree of accuracy compared to Landsat-8 OLI, specifically when the objective is to map a subclass of vegetation.
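A hedged sketch of the 4-band versus 10-band Random Forest comparison, on synthetic pixel samples; the class structure, band statistics, and resulting accuracy gap are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical pixel samples: 4 bands at 10 m plus 6 sharpened 20 m bands,
# standing in for the study's two Sentinel-2 stacks.
n = 4000
labels = rng.integers(0, 5, size=n)                       # e.g., 5 crop classes
b10 = rng.normal(labels[:, None], 1.2, size=(n, 4))       # NIR/R/G/B
b20 = rng.normal((labels[:, None] + 1) % 5, 1.2, (n, 6))  # red-edge/SWIR info

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("4-band :", cross_val_score(rf, b10, labels, cv=5).mean())
print("10-band:", cross_val_score(rf, np.hstack([b10, b20]), labels, cv=5).mean())
```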
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallstrom, Jason O.; Ni, Zheng Richard
This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. A bridge (or "gateway") to direct digital control services was also explored at an early stage. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor's accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5% and acquisition accuracy within 1.5% across three orders of magnitude of variation in generated current. Considering production, installation, and maintenance costs, the technology presents a foundation for achieving whole-building CO2 sensing at a price point below $0.066/sq-ft, meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States.
NASA Astrophysics Data System (ADS)
Dube, Timothy; Mutanga, Onisimo; Sibanda, Mbulisi; Bangamwabo, Victor; Shoko, Cletah
2017-08-01
The remote sensing of freshwater resources is increasingly becoming important, due to increased patterns of water use, the current or projected impacts of climate change, and the rapid invasion by lethal water weeds. This study therefore sought to explore the potential of the recently launched Landsat 8 OLI/TIRS sensor in mapping invasive species in inland lakes. Specifically, the study compares the performance of the newly launched Landsat 8 sensor, which has a more advanced sensor design and image acquisition approach, to that of the traditional Landsat 7 ETM+ in detecting and mapping the water hyacinth (Eichhornia crassipes) invasive species across Lake Chivero, in Zimbabwe. The analysis of variance test was used to identify windows of spectral separability between water hyacinth and other land cover types. The results showed that portions of the visible (B3) and NIR (B4) bands, as well as the shortwave bands (Bands 8, 9 and 10), of both Landsat 8 OLI and Landsat 7 ETM+ exhibited windows of separability between water hyacinth and other land cover types. It was also observed that Landsat 8 OLI produced a high overall classification accuracy of 72%, compared with Landsat 7 ETM+, which yielded a lower accuracy of 57%. Water hyacinth had optimal accuracies (i.e. 92%) compared to other land cover types, based on Landsat 8 OLI data. However, when using Landsat 7 ETM+ data, classification accuracies of water hyacinth were relatively lower (i.e. 67%) compared to other land cover types (i.e. water, with an accuracy of 100%). Spectral curves of old, intermediate, and young water hyacinth in Lake Chivero were derived from both (a) Landsat 8 OLI and (b) Landsat 7 ETM+ data. Overall, the findings of this study underscore the relevance of new-generation multispectral sensors in providing the primary data source required for mapping the spatial distribution, and even configuration, of water weeds at low or no cost over time and space.
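A minimal sketch of the per-band ANOVA separability test described above; the band labels and reflectance values are invented and do not correspond to the actual Lake Chivero statistics.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical per-band reflectance samples for three cover types; an ANOVA
# F-test per band flags spectral windows where water hyacinth separates
# from the other classes, as done in the study.
bands = ["B3 (visible)", "B4 (NIR)", "B9 (shortwave)", "B10 (shortwave)"]
hyacinth = rng.normal([0.08, 0.45, 0.25, 0.20], 0.02, size=(50, 4))
water    = rng.normal([0.06, 0.02, 0.01, 0.01], 0.02, size=(50, 4))
grass    = rng.normal([0.07, 0.40, 0.22, 0.18], 0.02, size=(50, 4))

for i, name in enumerate(bands):
    f, p = f_oneway(hyacinth[:, i], water[:, i], grass[:, i])
    print(f"{name}: F = {f:.1f}, p = {p:.3g}")
```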
Chang, Ching-Min; Lo, Yu-Lung; Tran, Nghia-Khanh; Chang, Yu-Jen
2018-03-20
A method is proposed for characterizing the optical properties of articular cartilage sliced from a pig's thighbone using a Stokes-Mueller polarimetry technique. The principal axis angle, phase retardance, optical rotation angle, circular diattenuation, diattenuation axis angle, linear diattenuation, and depolarization index properties of the cartilage sample are all decoupled in the proposed analytical model. Consequently, the accuracy and robustness of the extracted results are improved. The glucose concentration, collagen distribution, and scattering properties of samples from various depths of the articular cartilage are systematically explored via an inspection of the related parameters. The results show that the glucose concentration and scattering effect are both enhanced in the superficial region of the cartilage. By contrast, the collagen density increases with an increasing sample depth.
High Accuracy Temperature Measurements Using RTDs with Current Loop Conditioning
NASA Technical Reports Server (NTRS)
Hill, Gerald M.
1997-01-01
To measure temperatures with a greater degree of accuracy than is possible with thermocouples, RTDs (Resistance Temperature Detectors) are typically used. Calibration standards use specialized high-precision RTD probes with accuracies approaching 0.001 °F. These are extremely delicate devices, and far too costly to be used in test facility instrumentation. Less costly sensors designed for aeronautical wind tunnel testing are available and can be readily adapted to probes, rakes, and test rigs. With proper signal conditioning of the sensor, temperature accuracies of 0.1 °F are obtainable. For reasons that are explored in this paper, the Anderson current loop is the preferred method for signal conditioning. This scheme has been used in NASA Lewis Research Center's 9 x 15 Low Speed Wind Tunnel and is detailed here.
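A minimal sketch of converting an RTD resistance reading to temperature with the standard Callendar-Van Dusen relation for a Pt100; the excitation current and sensed voltage are invented, and the Anderson current loop itself is only summarized in the comments.

```python
import numpy as np

# Callendar-Van Dusen conversion for a Pt100 RTD (valid for T >= 0 degC).
# The current-loop idea, in brief: a known excitation current plus a
# differential voltage reading cancels lead-wire resistance from the result.
R0, A, B = 100.0, 3.9083e-3, -5.775e-7   # standard IEC 60751 coefficients

def pt100_temperature(r):
    """Invert R(T) = R0 * (1 + A*T + B*T^2) for T >= 0 degC."""
    return (-A + np.sqrt(A * A - 4 * B * (1 - r / R0))) / (2 * B)

i_exc = 1.0e-3                   # 1 mA excitation current (assumed)
v_sense = 0.110                  # volts measured across the RTD element (assumed)
r_rtd = v_sense / i_exc          # lead resistance already excluded by the loop
print("T = %.2f degC" % pt100_temperature(r_rtd))  # ~25.6 degC for 110 ohm
```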
ERIC Educational Resources Information Center
Starkey-Perret, Rebecca; Belan, Sophie; Lê Ngo, Thi Phuong; Rialland, Guillaume
2017-01-01
This chapter presents and discusses the results of a large-scale pilot study carried out in the context of a task-based, blended-learning Business English programme in the Foreign Languages and International Trade department of a French university. It seeks to explore the effects of pre-task planned Focus on Form (FonF) on accuracy in students'…
Libration Point Navigation Concepts Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Folta, David C.; Moreau, Michael C.; Quinn, David A.
2004-01-01
This work examines the autonomous navigation accuracy achievable for a lunar exploration trajectory from a translunar libration point lunar navigation relay satellite, augmented by signals from the Global Positioning System (GPS). We also provide a brief analysis comparing the libration point relay to lunar orbit relay architectures, and discuss some issues of GPS usage for cis-lunar trajectories.
Hippocampal lesions, contextual retrieval, and autoshaping in pigeons.
Richmond, Jenny; Colombo, Michael
2002-02-22
Both pigeons and rats with damage to the hippocampus are slow to acquire an autoshaped response and emit fewer overall responses than control animals. Experiment 1 explored the possibility that the autoshaping deficit was due to an impairment in contextual retrieval. Pigeons were trained for 14 days on an autoshaping task in which a red stimulus was followed by reinforcement in context A, and a green stimulus was followed by reinforcement in context B. On day 15, the subjects were given a context test in which the red and green stimuli were presented simultaneously in context A and then later in context B. Both control and hippocampal animals showed context specificity, that is, they responded more to the red stimulus in context A and to the green stimulus in context B. In Experiment 2 we video-recorded the control and hippocampal animals performing the autoshaping task. Hippocampal animals tended to miss-peck the key more often than control animals. In addition, the number of missed pecks increased across days for hippocampal animals but not for control animals, suggesting that while the control animals increased their pecking accuracy, the hippocampal animals actually decreased their pecking accuracy. Our findings suggest that impairments in moving through space may underlie the hippocampal autoshaping deficit.
Establishment of National Gravity Base Network of Iran
NASA Astrophysics Data System (ADS)
Hatam Chavari, Y.; Bayer, R.; Hinderer, J.; Ghazavi, K.; Sedighi, M.; Luck, B.; Djamour, Y.; Le Moign, N.; Saadat, R.; Cheraghi, H.
2009-04-01
A gravity base network is a set of benchmarks uniformly distributed across a country at which the absolute gravity values are known to the best accessible accuracy. The gravity at the benchmark stations is either measured directly with absolute instruments or transferred from known stations by gravity-difference measurements with gravimeters. To decrease the accumulation of random measuring errors arising from these transfers, the number of base stations distributed across the country should be as small as possible. This is feasible if the stations are selected near national airports: long distances apart, but quickly accessible and measurable with a gravimeter carried by airplane between the stations. To convey the importance of such a network, various applications of a gravity base network are first reviewed. A gravity base network is the required reference frame for establishing 1st-, 2nd-, and 3rd-order gravity networks. Such a gravity network is used for the following purposes: a. Mapping the structure of the upper crust in geological maps; the required accuracy for the measured gravity values is about 0.2 to 0.4 mGal. b. Oil and mineral exploration; the required accuracy is about 5 µGal. c. Geotechnical studies in mining areas for exploring underground cavities, as well as archaeological studies; the required accuracy is 5 µGal or better. d. Subsurface water resource exploration and mapping of the crustal layers that absorb it; an accuracy at the level of the previous applications is required here too. e. Studying the tectonics of the Earth's crust; repeated precise gravity measurements at the gravity network stations can assist in identifying systematic height changes, requiring an accuracy of the order of 5 µGal or better. f. Studying volcanoes and their evolution; repeated precise gravity measurements at the gravity network stations can provide valuable information on the gradual upward movement of lava. g. Producing precise mean gravity anomalies for precise geoid determination; replacing precise spirit leveling by GPS leveling using a precise geoid model is one of the forthcoming applications of the precise geoid. A gravity base network of 28 stations was established over Iran. The stations were built mainly on bedrock. All stations were measured with an FG5 absolute gravimeter, for at least 12 hours at each station, to obtain an accuracy of a few µGal. Several stations were remeasured several times in recent years to estimate gravity changes.
Autonomous localisation of rovers for future planetary exploration
NASA Astrophysics Data System (ADS)
Bajpai, Abhinav
Future Mars exploration missions will have increasingly ambitious goals compared to current rover and lander missions. There will be a need for extremely long distance traverses over shorter periods of time. This will allow more varied and complex scientific tasks to be performed and increase the overall value of the missions. The missions may also include a sample return component, where items collected on the surface are deposited in a cache in order to be returned to Earth for further study. In order to make these missions feasible, future rover platforms will require increased levels of autonomy, allowing them to operate without heavy reliance on a terrestrial ground station. Being able to autonomously localise the rover is an important element in increasing the rover's capability to independently explore. This thesis develops a Planetary Monocular Simultaneous Localisation And Mapping (PM-SLAM) system aimed specifically at a planetary exploration context. The system uses a novel modular feature detection and tracking algorithm called hybrid saliency in order to achieve robust tracking while maintaining low computational complexity in the SLAM filter. The hybrid saliency technique uses a combination of cognitively inspired saliency features with point-based feature descriptors as input to the SLAM filter. The system was tested on simulated datasets generated using the Planetary, Asteroid and Natural scene Generation Utility (PANGU), as well as two real-world datasets that closely approximated images from a planetary environment. The system was shown to provide a more accurate localisation estimate than a state-of-the-art visual odometry (VO) system tested on the same dataset. In order to be able to localise the rover absolutely, further techniques are investigated which attempt to determine the rover's position in orbital maps. Orbiter Mask Matching uses point-based features detected by the rover to associate descriptors with large features extracted from orbital imagery and stored in the rover memory prior to the mission launch. A proof of concept is evaluated using a PANGU-simulated boulder field.
NASA Astrophysics Data System (ADS)
Dutta, Sandeep; Gros, Eric
2018-03-01
Deep Learning (DL) has been successfully applied in numerous fields, fueled by increasing computational power and access to data. However, for medical imaging tasks, limited training set size is a common challenge when applying DL. This paper explores the applicability of DL to the task of classifying a single axial slice from a CT exam into one of six anatomy regions. A total of 29,000 images selected from 223 CT exams were manually labeled for ground truth. An additional 54 exams were labeled and used as an independent test set. The network architecture developed for this application is composed of 6 convolutional layers and 2 fully connected layers, with ReLU non-linear activations between each layer. Max-pooling was used after every second convolutional layer, and a softmax layer was used at the end. Given this base architecture, the effect of including network architecture components such as Dropout and Batch Normalization on network performance and training is explored. The network performance as a function of training and validation set size is characterized by training each network architecture variation using 5, 10, 20, 40, 50, and 100% of the available training data. The performance comparison of the various network architectures was done for anatomy classification as well as two computer vision datasets. The anatomy classifier accuracy varied from 74.1% to 92.3% in this study, depending on the training size and network layout used.
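A hedged sketch of the described architecture in PyTorch; the channel widths, input size, and fully connected width are assumptions, since the abstract specifies only the layer counts, ReLU activations, pooling placement, and the optional Dropout/BatchNorm variations.

```python
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    """Approximation of the described network: six 3x3 conv layers with ReLU,
    max-pooling after every second conv layer, then two fully connected
    layers; BatchNorm/Dropout are optional, as explored in the paper."""
    def __init__(self, n_classes=6, use_bn=False, p_drop=0.0):
        super().__init__()
        layers, c_in = [], 1
        for i, c_out in enumerate([16, 16, 32, 32, 64, 64]):  # assumed widths
            layers.append(nn.Conv2d(c_in, c_out, 3, padding=1))
            if use_bn:
                layers.append(nn.BatchNorm2d(c_out))
            layers.append(nn.ReLU())
            if i % 2 == 1:                        # pool after every second conv
                layers.append(nn.MaxPool2d(2))
            c_in = c_out
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(p_drop),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes))            # softmax applied in the loss

    def forward(self, x):
        return self.classifier(self.features(x))

# One 128x128 grayscale slice (the input size is an assumption).
logits = SliceClassifier()(torch.zeros(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 6])
```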
Shinkins, Bethany; Yang, Yaling; Abel, Lucy; Fanshawe, Thomas R
2017-04-14
Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how did the results inform the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling was obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings. The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.
Use of temperature to improve West Nile virus forecasts
Schneider, Zachary D.; Caillouet, Kevin A.; Campbell, Scott R.; Damian, Dan; Irwin, Patrick; Jones, Herff M. P.; Townsend, John
2018-01-01
Ecological and laboratory studies have demonstrated that temperature modulates West Nile virus (WNV) transmission dynamics and spillover infection to humans. Here we explore whether inclusion of temperature forcing in a model depicting WNV transmission improves WNV forecast accuracy relative to a baseline model without temperature forcing. Both models are optimized using a data assimilation method and two observed data streams: mosquito infection rates and reported human WNV cases. Each coupled model-inference framework is then used to generate retrospective ensemble forecasts of WNV for 110 outbreak years from among 12 geographically diverse United States counties. The temperature-forced model improves forecast accuracy for much of the outbreak season. From the end of July until the beginning of October, a timespan during which 70% of human cases are reported, the temperature-forced model generated forecasts of the total number of human cases over the next 3 weeks, the total number of human cases over the season, the week with the highest percentage of infectious mosquitoes, and the peak percentage of infectious mosquitoes that on average increased absolute forecast accuracy by 5%, 10%, 12%, and 6%, respectively, over the non-temperature-forced baseline model. These results indicate that temperature forcing improves WNV forecast accuracy and provide further evidence that temperature influences rates of WNV transmission. The findings provide a foundation for the implementation of a statistically rigorous system for real-time forecasting of seasonal WNV outbreaks, and for its use as a quantitative decision support tool for public health officials and mosquito control programs. PMID:29522514
Koffarnus, Mikhail N; Katz, Jonathan L
2011-02-01
Increased signal-detection accuracy on the 5-choice serial reaction time (5-CSRT) task has been shown with drugs that are useful clinically in treating attention deficit hyperactivity disorder (ADHD), but these increases are often small and/or unreliable. By reducing the reinforcer frequency, it may be possible to increase the sensitivity of this task to pharmacologically induced improvements in accuracy. Rats were trained to respond on the 5-CSRT task on a fixed ratio (FR) 1, FR 3, or FR 10 schedule of reinforcement. Drugs that were and were not expected to enhance performance were then administered before experimental sessions. Significant increases in accuracy of signal detection were not typically obtained under the FR 1 schedule with any drug. However, d-amphetamine, methylphenidate, and nicotine typically increased accuracy under the FR 3 and FR 10 schedules. Increasing the FR requirement in the 5-CSRT task increases the likelihood of a positive result with clinically effective drugs, and may more closely resemble conditions in children with attention deficits.
Advances in Spectral Electrical Impedance Tomography (EIT) for Near-Surface Geophysical Exploration
NASA Astrophysics Data System (ADS)
Huisman, J. A.; Zimmermann, E.; Kelter, M.; Zhao, Y.; Bukhary, T. H.; Vereecken, H.
2016-12-01
Recent advances in spectral Electrical Impedance Tomography (EIT) now make it possible to obtain the complex electrical conductivity distribution in near-surface environments with high accuracy for a broad range of frequencies (mHz - kHz). One of the key advances has been the development of correction methods to account for inductive coupling effects between wires used for current and potential measurements and capacitive coupling between cables and the subsurface environment. In this study, we first review these novel correction methods and then illustrate how the consideration of capacitive and inductive coupling improves spectral EIT results. For this, borehole EIT measurements were made in a shallow aquifer using a custom-made EIT system with two electrode chains, each consisting of eight active electrodes with a separation of 1 m. The EIT measurements were inverted with and without consideration of inductive and capacitive coupling effects. The inversion results showed that spatially and spectrally consistent imaging results can only be obtained when inductive coupling effects are considered (phase accuracy of 1-2 mrad at 1 kHz). Capacitive coupling effects were found to be of secondary importance for the set-up used here, but their importance will increase when longer cables are used. Although these results are promising, the active electrode chains can only be used with our custom-made EIT system. Therefore, we also explored to what extent EIT measurements with passive electrode chains, amenable to commercially available EIT measurement systems, can be corrected for coupling effects. It was found that EIT measurements with passive unshielded cables could not be corrected above 100 Hz because of the strong but inaccurately known capacitive coupling between the electrical wires. However, it was possible to correct EIT measurements with passive shielded cables, and the final accuracy of the phase measurements was estimated to be 2-4 mrad at 1 kHz.
Jin, Jing; Allison, Brendan Z; Kaufmann, Tobias; Kübler, Andrea; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej
2012-01-01
One of the most common types of brain-computer interfaces (BCIs) is called a P300 BCI, since it relies on the P300 and other event-related potentials (ERPs). In the canonical P300 BCI approach, items on a monitor flash briefly to elicit the necessary ERPs. Very recent work has shown that this approach may yield lower performance than alternate paradigms in which the items do not flash but instead change in other ways, such as moving, changing colour, or changing to characters overlaid with faces. The present study sought to extend this research direction by parametrically comparing different ways to change items in a P300 BCI. Healthy subjects used a P300 BCI across six different conditions. Three conditions were similar to our prior work, providing the first direct comparison of characters flashing, moving, and changing to faces. Three new conditions also explored facial motion and emotional expression. The six conditions were compared across objective measures such as classification accuracy and bit rate, as well as subjective measures such as perceived difficulty. In line with recent studies, our results indicated that the character flash condition resulted in the lowest accuracy and bit rate. All four face conditions (mean accuracy >91%) yielded significantly better performance than the flash condition (mean accuracy = 75%). Objective results reaffirmed that the face paradigm is superior to the canonical flash approach that has dominated P300 BCIs for over 20 years. The subjective reports indicated that the conditions that yielded better performance were not considered especially burdensome. Therefore, although further work is needed to identify which face paradigm is best, it is clear that the canonical flash approach should be replaced with a face paradigm when the aim is to increase bit rate. However, the face paradigm has to be further explored in practical applications, particularly with locked-in patients.
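Bit rate in this literature is typically computed from classification accuracy with Wolpaw's formula. A minimal sketch, assuming the classic 6 x 6 speller matrix (N = 36 selectable items, an assumption not stated in the abstract) and expressed in bits per selection; converting to bits per minute additionally requires the selection time.

```python
import math

def wolpaw_bitrate(p, n=36):
    """Bits per selection for an n-item speller at classification accuracy p."""
    if p <= 0 or n < 2:
        raise ValueError("need p > 0 and at least 2 items")
    if p == 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

for acc in (0.75, 0.91):   # flash vs. face-condition mean accuracies from above
    print(f"accuracy {acc:.2f}: {wolpaw_bitrate(acc):.2f} bits/selection")
```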
Testing the accuracy of clustering redshifts with simulations
NASA Astrophysics Data System (ADS)
Scottez, V.; Benoit-Lévy, A.; Coupon, J.; Ilbert, O.; Mellier, Y.
2018-03-01
We explore the accuracy of clustering-based redshift inference within the MICE2 simulation. This method uses the spatial clustering of galaxies between a spectroscopic reference sample and an unknown sample, and this study gives an estimate of the accuracy attainable with it. First, we discuss the requirements for the number of objects in the two samples, confirming that this method does not require a representative spectroscopic sample for calibration. In the context of the next generation of cosmological surveys, we estimate that the density of the Quasi Stellar Objects in BOSS allows us to reach 0.2 per cent accuracy in the mean redshift. Second, we estimate individual redshifts for galaxies in the densest regions of colour space (~30 per cent of the galaxies) without using photometric redshift procedures. The advantage of this procedure is threefold. It allows: (i) the use of cluster-zs for any field in astronomy, (ii) the possibility of combining photo-zs and cluster-zs to obtain an improved redshift estimate, and (iii) the use of cluster-zs to define tomographic bins for weak lensing. Finally, we explore this last option and build five cluster-z selected tomographic bins from redshift 0.2 to 1. We find a bias on the mean redshift estimate of 0.002 per bin. We conclude that cluster-zs could be used as a primary redshift estimator by the next generation of cosmological surveys.
NASA Astrophysics Data System (ADS)
Guo, Pengbin; Sun, Jian; Hu, Shuling; Xue, Ju
2018-02-01
Pulsar navigation is a promising navigation method for high-altitude orbit space missions and deep space exploration. At present, an important factor restricting the development of pulsar navigation is that navigation accuracy is limited by the slow update rate of the measurements. To improve the accuracy of pulsar navigation, an asynchronous observation model that increases the update rate of the measurements is proposed on the basis of a satellite constellation, an approach with broad development potential because of its visibility and reliability. The simulation results show that the asynchronous observation model improves positioning accuracy by 31.48% and velocity accuracy by 24.75% compared with the synchronous observation model. With the new Doppler effect compensation method for the asynchronous observation model proposed in this paper, positioning accuracy is improved by 32.27% and velocity accuracy by 34.07% compared with the traditional method. The simulations also show that neglecting the clock error results in filter divergence.
Navon letters affect face learning and face retrieval.
Lewis, Michael B; Mills, Claire; Hills, Peter J; Weston, Nicola
2009-01-01
Identifying the local letters of a Navon letter (a large letter made up of smaller, different letters) prior to recognition impairs recognition accuracy, while identifying the global letter of a Navon letter enhances recognition accuracy (Macrae & Lewis, 2002). This effect may result from a transfer-inappropriate processing shift (TIPS) (Schooler, 2002). The present study extends research on the underlying mechanism of this effect by exploring the Navon effect on face learning as well as face recognition. The results of the two experiments revealed that when the Navon task used at retrieval was the same as that used at encoding, recognition accuracy was enhanced, whereas when the processing operations mismatched at retrieval and encoding, recognition accuracy was impaired. These results provide support for the TIPS explanation of the Navon effect.
Exploring the Solar System using stellar occultations
NASA Astrophysics Data System (ADS)
Sicardy, Bruno
2018-04-01
Stellar occultations by solar system objects allow kilometric accuracy, permit the detection of tenuous atmospheres (at the nbar level), and enable the discovery of rings. The main limitation has been the prediction accuracy, typically 40 mas, corresponding to about 1,000 km projected at the body. This led to large amounts of time dedicated to astrometry, tedious logistical issues, and, more often than not, a mere miss of the event. The Gaia catalogue, with sub-mas accuracy, hugely improves the star positions, resulting in achievable accuracies of about 1 mas for the shadow track on Earth. This permits much more carefully planned campaigns, with success rates approaching 100%, weather permitting. Scientific perspectives are presented, e.g. central flashes caused by Pluto's atmosphere revealing hazes and winds near its surface, grazing occultations showing topographic features, and occultations by Chariklo's rings unveiling dynamical features such as the "breathing" proper mode.
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy.
Krause, Jean C; Tessler, Morgan P
2016-10-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings.
Classification of ABO3 perovskite solids: a machine learning study
Pilania, G.; Balachandran, P. V.; Gubernatis, J. E.; ...
2015-07-23
Here we explored the use of machine learning methods for classifying whether a particular ABO3 chemistry forms a perovskite or non-perovskite structured solid. Starting with three sets of feature pairs (the tolerance and octahedral factors, the A and B ionic radii relative to the radius of O, and the bond valence distances of the A and B ions from the O atoms), we used machine learning to create a hyper-dimensional partial dependency structure plot using all three feature pairs or any two of them. Doing so increased the accuracy of our predictions by 2-3 percentage points over using any one pair. We also added the Mendeleev numbers of the A and B atoms to this set of feature pairs. Doing this, and using the capabilities of our machine learning algorithm, the gradient tree boosting classifier, enabled us to generate a new type of structure plot that has the simplicity of one based on just the Mendeleev numbers, but with the added advantages of higher accuracy and a measure of the likelihood of the predicted structure.
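A gradient tree boosting classifier of the kind named above is readily available in scikit-learn. The sketch below trains one on the tolerance factor t and octahedral factor mu; the data and labelling rule are synthetic placeholders (real features would be derived from tabulated ionic radii), so it illustrates the workflow rather than reproducing the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for the tolerance factor t and octahedral factor mu;
# real values would come from tabulated Shannon ionic radii.
t = rng.uniform(0.7, 1.1, 500)
mu = rng.uniform(0.3, 0.8, 500)
X = np.column_stack([t, mu])
# Toy labelling rule: perovskites roughly occupy 0.8 < t < 1.0, mu > 0.41.
y = ((t > 0.8) & (t < 1.0) & (mu > 0.41)).astype(int)

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```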
Potential of fecal microbiota for early-stage detection of colorectal cancer
Zeller, Georg; Tap, Julien; Voigt, Anita Y; Sunagawa, Shinichi; Kultima, Jens Roat; Costea, Paul I; Amiot, Aurélien; Böhm, Jürgen; Brunetti, Francesco; Habermann, Nina; Hercog, Rajna; Koch, Moritz; Luciani, Alain; Mende, Daniel R; Schneider, Martin A; Schrotz-King, Petra; Tournigand, Christophe; Tran Van Nhieu, Jeanne; Yamada, Takuji; Zimmermann, Jürgen; Benes, Vladimir; Kloor, Matthias; Ulrich, Cornelia M; von Knebel Doeberitz, Magnus; Sobhani, Iradj; Bork, Peer
2014-01-01
Several bacterial species have been implicated in the development of colorectal carcinoma (CRC), but CRC-associated changes of fecal microbiota and their potential for cancer screening remain to be explored. Here, we used metagenomic sequencing of fecal samples to identify taxonomic markers that distinguished CRC patients from tumor-free controls in a study population of 156 participants. Accuracy of metagenomic CRC detection was similar to that of the standard fecal occult blood test (FOBT), and when both approaches were combined, sensitivity improved by >45% relative to the FOBT alone, while maintaining its specificity. Accuracy of metagenomic CRC detection did not differ significantly between early- and late-stage cancer and could be validated in independent patient and control populations (N = 335) from different countries. CRC-associated changes in the fecal microbiome at least partially reflected microbial community composition at the tumor itself, indicating that observed gene pool differences may reveal tumor-related host–microbe interactions. Indeed, we deduced a metabolic shift from fiber degradation in controls to utilization of host carbohydrates and amino acids in CRC patients, accompanied by an increase of lipopolysaccharide metabolism. PMID:25432777
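As a rough illustration of how taxonomic markers can be turned into a CRC classifier, the sketch below fits an L1-penalized (sparse) logistic regression to log relative abundances, a common choice for metagenomic marker panels; the actual model used in the study may differ. The data, dimensions, and labels are random placeholders, so the reported AUC will hover around chance; the point is the pipeline, not the result.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n, p = 156, 300                      # subjects x taxa (illustrative sizes)
X = rng.lognormal(size=(n, p))       # fake abundance counts
X = np.log10(X / X.sum(axis=1, keepdims=True) + 1e-6)   # log relative abundance
y = rng.integers(0, 2, n)            # CRC vs. control labels (random here)

model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```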
The impact of musical training and tone language experience on talker identification
Xie, Xin; Myers, Emily
2015-01-01
Listeners can use pitch changes in speech to identify talkers. Individuals exhibit large variability in sensitivity to pitch and in accuracy perceiving talker identity. In particular, people who have musical training or long-term tone language use are found to have enhanced pitch perception. In the present study, the influence of pitch experience on talker identification was investigated as listeners identified talkers in native language as well as non-native languages. Experiment 1 was designed to explore the influence of pitch experience on talker identification in two groups of individuals with potential advantages for pitch processing: musicians and tone language speakers. Experiment 2 further investigated individual differences in pitch processing and the contribution to talker identification by testing a mediation model. Cumulatively, the results suggested that (a) musical training confers an advantage for talker identification, supporting a shared resources hypothesis regarding music and language and (b) linguistic use of lexical tones also increases accuracy in hearing talker identity. Importantly, these two types of hearing experience enhance talker identification by sharpening pitch perception skills in a domain-general manner. PMID:25618071
Distinct mechanisms for the impact of distraction and interruption on working memory in aging
Clapp, Wesley C; Gazzaley, Adam
2010-01-01
Interference is known to negatively impact the ability to maintain information in working memory (WM), an effect that is exacerbated with aging. Here, we explore how distinct sources of interference, i.e., distraction (stimuli to be ignored) and interruption (stimuli requiring attention), differentially influence WM in younger and older adults. EEG was recorded while participants engaged in three versions of a delayed-recognition task: no interference, a distracting stimulus, or an interrupting stimulus presented during WM maintenance. Behaviorally, both types of interference impaired WM accuracy in older adults significantly more than in younger adults (with a larger deficit for interruptions). N170 latency measures revealed that the degree of processing of both distractors and interruptors predicted WM accuracy in both populations. However, while WM impairments could be explained by excessive attention to distractors in older adults (a suppression deficit), impairments induced by interruption were not clearly mediated by age-related increases in attention to interruptors. These results suggest that distinct underlying mechanisms mediate the impact of different types of external interference on WM in normal aging. PMID:20144492
NASA Astrophysics Data System (ADS)
Izsák, Róbert; Neese, Frank
2013-07-01
The 'chain of spheres' approximation, developed earlier for the efficient evaluation of the self-consistent field exchange term, is introduced here into the evaluation of the external exchange term of higher order correlation methods. Its performance is studied in the specific case of the spin-component-scaled third-order Møller–Plesset perturbation (SCS-MP3) theory. The results indicate that the approximation performs excellently in terms of both computer time and achievable accuracy. Significant speedups over a conventional method are obtained for larger systems and basis sets. Owing to this development, SCS-MP3 calculations on molecules of the size of penicillin (42 atoms) with a polarised triple-zeta basis set can be performed in ∼3 hours using 16 cores of an Intel Xeon E7-8837 processor with a 2.67 GHz clock speed, which represents a speedup by a factor of 8-9 compared to the previously most efficient algorithm. Thus, the increased accuracy offered by SCS-MP3 can now be explored for at least medium-sized molecules.
Tang, Yunwei; Jing, Linhai; Li, Hui; Liu, Qingjie; Yan, Qi; Li, Xiuxia
2016-01-01
This study explores the ability of WorldView-2 (WV-2) imagery for bamboo mapping in a mountainous region in Sichuan Province, China. A large part of this area is covered by shadows in the image, and only a few of the sampled points were usable. In order to identify bamboo based on sparse training data, the sample size was expanded according to the reflectance of multispectral bands selected using principal component analysis (PCA). Then, class separability based on the training data was calculated using a feature space optimization method to select the features for classification. Four standard object-based classification methods were applied based on both sets of training data. The results show that the k-nearest neighbor (k-NN) method produced the greatest accuracy. A geostatistically-weighted k-NN classifier, accounting for the spatial correlation between classes, was then applied to further increase the accuracy. It achieved 82.65% and 93.10% producer's and user's accuracies, respectively, for the bamboo class. Canopy densities were estimated to explain the result. This study demonstrates that WV-2 imagery can be used to identify small patches of understory bamboo given limited known samples, and the resulting bamboo distribution facilitates assessments of giant panda habitat. PMID:27879661
Leveraging Long-term Seismic Catalogs for Automated Real-time Event Classification
NASA Astrophysics Data System (ADS)
Linville, L.; Draelos, T.; Pankow, K. L.; Young, C. J.; Alvarez, S.
2017-12-01
We investigate the use of labeled event types available through reviewed seismic catalogs to produce automated event labels on new incoming data from the crustal region spanned by the cataloged events. Using events cataloged by the University of Utah Seismograph Stations between October 2012 and June 2017, we calculate the spectrogram for a time window that spans the duration of each event as seen on individual stations, resulting in 110k event spectrograms (50% local earthquake examples, 50% quarry blast examples). Using 80% of the randomized example events (~90k), a classifier is trained to distinguish between local earthquakes and quarry blasts. We explore variations of deep learning classifiers, incorporating elements of convolutional and recurrent neural networks. Using a single-layer Long Short Term Memory recurrent neural network, we achieve 92% accuracy on the classification task on the remaining ~20k test examples. Leveraging the decisions from a group of stations that detected the same event, by taking the median of all classifications in the group, increases the model accuracy to 96%. Additional data with equivalent processing from 500 more recently cataloged events (July 2017) achieve the same accuracy as our test data on both single-station examples and multi-station medians, suggesting that the model can maintain accurate and stable classification rates on real-time automated events local to the University of Utah Seismograph Stations, with potentially minimal levels of re-training through time.
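A minimal version of the described classifier, sketched in Keras: a single LSTM layer consumes the spectrogram as a sequence of time frames, and station-level outputs for one event are fused by taking their median. The spectrogram dimensions and layer width are assumptions, since the abstract does not give them.

```python
import numpy as np
from tensorflow.keras import layers, models

# Spectrogram input: (time frames, frequency bins); sizes are assumptions.
model = models.Sequential([
    layers.Input(shape=(128, 64)),
    layers.LSTM(64),                         # single-layer LSTM over time frames
    layers.Dense(1, activation="sigmoid"),   # earthquake vs. quarry blast
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

def event_label(per_station_probs):
    """Fuse single-station outputs: median over stations that saw the event."""
    return int(np.median(per_station_probs) > 0.5)

print(event_label([0.9, 0.8, 0.4]))   # -> 1 (earthquake, say)
```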
Day, J D; Weaver, L D; Franti, C E
1995-01-01
The objective of this prospective cohort study was to determine the sensitivity, specificity, accuracy, and predictive value of twin pregnancy diagnosis by rectal palpation and to examine fetal survival, culling rates, and gestational lengths of cows diagnosed with twins. In this prospective study, 5309 cows on 14 farms in California were followed from pregnancy diagnosis to subsequent abortion or calving. The average sensitivity, specificity, accuracy, and predictive value of twin pregnancy diagnosis were 49.3%, 99.4%, 96.0%, and 86.1%, respectively. The abortion rate for single pregnancies of 12.0% differed significantly from those for bicornual twin pregnancies and unicornual twin pregnancies of 26.2% and 32.4%, respectively (P < 0.05). Early calf mortality differed significantly between cows calving with singles (3.2%) and twins (15.7%) (P < 0.005). The difference in fetal survival between single pregnancies and all twin pregnancies resulted in 0.42 and 0.29 viable heifers per pregnancy, respectively. The average gestation for single, bicornual, and unicornual pregnancies that did not abort before drying-off was 278, 272, and 270 days, respectively. Results of this study show that there is an increased fetal wastage associated with twin pregnancies and suggest a need for further research exploring management strategies for cows carrying twins. PMID:7728734
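The four reported metrics all derive from a 2 x 2 table of diagnosis versus outcome. A minimal sketch, with counts invented to roughly reproduce the reported rates (the paper gives rates, not the raw table):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),     # predictive value of a twin diagnosis
    }

# Illustrative counts only, chosen to roughly match the reported rates.
print(diagnostic_metrics(tp=148, fp=24, tn=4985, fn=152))
```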
An ITK framework for deterministic global optimization for medical image registration
NASA Astrophysics Data System (ADS)
Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.
2006-03-01
Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
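The DIRECT-then-Powell strategy can be prototyped with SciPy (scipy.optimize.direct requires SciPy >= 1.8); the original work is an ITK/C++ implementation, and the toy objective below merely stands in for a negated image-similarity metric over rigid transform parameters (tx, ty, angle).

```python
import numpy as np
from scipy.optimize import direct, minimize

def neg_similarity(x):
    """Toy stand-in for a negated image-similarity metric over (tx, ty, angle)."""
    return np.sum((x - np.array([4.0, -2.0, 0.3]))**2) + 0.5 * np.sin(5 * x).sum()

bounds = [(-10, 10), (-10, 10), (-np.pi, np.pi)]
coarse = direct(neg_similarity, bounds, maxfun=2000)        # global DIRECT search
fine = minimize(neg_similarity, coarse.x, method="Powell")  # local refinement
print(coarse.x, fine.x, fine.fun)
```

The two-stage design mirrors the paper's finding: DIRECT explores the capture range globally, and Powell's method polishes the best candidate locally.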
Alternative face models for 3D face registration
NASA Astrophysics Data System (ADS)
Salah, Albert Ali; Alyüz, Neşe; Akarun, Lale
2007-01-01
3D has become an important modality for face biometrics. The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a one-to-all registration approach, which means each new facial surface is registered to all faces in the gallery, at a great computational cost. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. Going one step further, we propose that using a few well-selected AFMs can trade off computation time against accuracy. Drawing on cognitive justifications, we propose to employ category-specific alternative average face models for registration, which is shown to increase the accuracy of the subsequent recognition. We inspect thin-plate spline (TPS) and iterative closest point (ICP) based registration schemes under realistic assumptions on manual or automatic landmark detection prior to registration. We evaluate several approaches for the coarse initialization of ICP. We propose a new algorithm for constructing an AFM, and show that it works better than a recent approach. Finally, we perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender and morphology based groupings. We report our results on the FRGC 3D face database.
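For reference, the ICP scheme mentioned above alternates closest-point matching with a closed-form rigid alignment (the Kabsch/SVD solution). A minimal point-to-point sketch, without the coarse initialization or TPS variants the paper evaluates:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Point-to-point ICP: rigidly align `source` to `target` (N x 3 arrays)."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        nn = target[tree.query(src)[1]]            # closest target points
        mu_s, mu_t = src.mean(0), nn.mean(0)
        u, _, vt = np.linalg.svd((src - mu_s).T @ (nn - mu_t))
        r = vt.T @ u.T                             # optimal rotation (Kabsch)
        if np.linalg.det(r) < 0:                   # avoid reflections
            vt[-1] *= -1
            r = vt.T @ u.T
        src = (src - mu_s) @ r.T + mu_t            # apply rigid transform
    return src
```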
Effects of metabolic syndrome on language functions in aging.
Cahana-Amitay, Dalia; Spiro, Avron; Cohen, Jason A; Oveis, Abigail C; Ojo, Emmanuel A; Sayers, Jesse T; Obler, Loraine K; Albert, Martin L
2015-02-01
This study explored effects of the metabolic syndrome (MetS) on language in aging. MetS is a constellation of five vascular and metabolic risk factors associated with the development of chronic diseases and increased risk of mortality, as well as brain and cognitive impairments. We tested 281 English-speaking older adults aged 55-84, free of stroke and dementia. Presence of MetS was based on the harmonized criteria (Alberti et al., 2009). Language performance was assessed by measures of accuracy and reaction time on two tasks of lexical retrieval and two tasks of sentence processing. Regression analyses, adjusted for age, education, gender, diabetes, hypertension, and heart disease, demonstrated that participants with MetS had significantly lower accuracy on measures of lexical retrieval (action naming) and sentence processing (embedded sentences, both subject and object relative clauses). Reaction time was slightly faster on the test of embedded sentences among those with MetS. MetS adversely affects the language performance of older adults, impairing accuracy of both lexical retrieval and sentence processing. This finding reinforces and extends results of earlier research documenting the negative influence of potentially treatable medical conditions (diabetes, hypertension) on language performance in aging. The unanticipated finding that persons with MetS were faster in processing embedded sentences may represent an impairment of timing functions among older individuals with MetS.
Bertoux, Maxime; de Souza, Leonardo Cruz; O'Callaghan, Claire; Greve, Andrea; Sarazin, Marie; Dubois, Bruno; Hornberger, Michael
2016-01-01
Relative sparing of episodic memory is a diagnostic criterion of behavioral variant frontotemporal dementia (bvFTD). However, increasing evidence suggests that bvFTD patients can show episodic memory deficits at a similar level as Alzheimer's disease (AD). Social cognition tasks have been proposed to distinguish bvFTD, but no study to date has explored the utility of such tasks for the diagnosis of amnestic bvFTD. Here, we contrasted social cognition performance of amnestic and non-amnestic bvFTD from AD, with a subgroup having confirmed in vivo pathology markers. Ninety-six participants (38 bvFTD and 28 AD patients as well as 30 controls) performed the short Social-cognition and Emotional Assessment (mini-SEA). BvFTD patients were divided into amnestic versus non-amnestic presentation using the validated Free and Cued Selective Reminding Test (FCSRT) assessing episodic memory. As expected, the accuracy of the FCSRT to distinguish the overall bvFTD group from AD was low (69.7%), with ~50% of bvFTD patients being amnestic. By contrast, the diagnostic accuracy of the mini-SEA was high (87.9%). When bvFTD patients were split on the level of amnesia, mini-SEA diagnostic accuracy remained high (85.1%) for amnestic bvFTD versus AD and increased to very high (93.9%) for non-amnestic bvFTD versus AD. Social cognition deficits can distinguish bvFTD and AD regardless of amnesia to a high degree and provide a simple way to distinguish both diseases at presentation. These findings have clear implications for the diagnostic criteria of bvFTD. They suggest that the emphasis should be on social cognition deficits, with episodic memory deficits not being a helpful diagnostic criterion in bvFTD.
Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinowski, Kathleen T.; Fischell Department of Bioengineering, University of Maryland, College Park, MD; McAvoy, Thomas J.
2012-04-01
Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.
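The OLS-versus-PLS comparison at the core of the study can be mimicked on synthetic data: external marker coordinates predict a 3D tumor position, and both regressors are fit on early samples and tested on later ones. All sizes and the linear ground truth below are invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 500
markers = rng.normal(size=(n, 9))            # 3 external markers x 3 coordinates
true_map = rng.normal(size=(9, 3)) * 0.5     # fictitious marker-to-tumor mapping
tumor = markers @ true_map + 0.1 * rng.normal(size=(n, 3))

train, test = slice(0, 350), slice(350, None)
for name, model in [("OLS", LinearRegression()),
                    ("PLS", PLSRegression(n_components=4))]:
    model.fit(markers[train], tumor[train])
    err = np.linalg.norm(model.predict(markers[test]) - tumor[test], axis=1)
    print(name, err.mean())                  # mean 3D prediction error
```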
Using the MMPI 168 with Medical Inpatients
ERIC Educational Resources Information Center
Erickson, Richard C.; Freeman, Charles
1976-01-01
Explores the potential utility of the MMPI 168 with two inpatient medical populations. Correlations and clinically relevant comparisons suggest that the MMPI 168 predicted the standard MMPI with a high degree of accuracy. (Editor/RK)
Movement amplitude and tempo change in piano performance
NASA Astrophysics Data System (ADS)
Palmer, Caroline
2004-05-01
Music performance places stringent temporal and cognitive demands on individuals that should yield large speed/accuracy tradeoffs. Skilled piano performance, however, shows consistently high accuracy across a wide variety of rates. Movement amplitude may affect the speed/accuracy tradeoff, so that high accuracy can be obtained even at very fast tempi. The contribution of movement amplitude to changes in rate (tempo) is investigated with motion capture. Cameras recorded pianists, fitted with passive markers on hands and fingers, who performed on an electronic (MIDI) keyboard. Pianists performed short melodies at faster and faster tempi until they made errors (altering the speed/accuracy function). Variability of finger movements in the three motion planes indicated the most change in the plane perpendicular to the keyboard across tempi. Surprisingly, peak amplitudes of motion before striking the keys increased as tempo increased. Increased movement amplitudes at faster rates may reduce or compensate for speed/accuracy tradeoffs. [Work supported by the Canada Research Chairs program and NIMH R01 45764.]
Uskul, Ayse K; Paulmann, Silke; Weick, Mario
2016-02-01
Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. Although a growing body of research has explored how emotions are processed from speech in general, little is known about how psychosocial factors such as social power can shape the perception of vocal emotional attributes. Thus, the present studies explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results, for the first time, suggest that individuals experiencing high or low power perceive emotional tone of voice differently.
3D printing from MRI Data: Harnessing strengths and minimizing weaknesses.
Ripley, Beth; Levin, Dmitry; Kelil, Tatiana; Hermsen, Joshua L; Kim, Sooah; Maki, Jeffrey H; Wilson, Gregory J
2017-03-01
3D printing facilitates the creation of accurate physical models of patient-specific anatomy from medical imaging datasets. While the majority of models to date are created from computed tomography (CT) data, there is increasing interest in creating models from other datasets, such as ultrasound and magnetic resonance imaging (MRI). MRI, in particular, holds great potential for 3D printing, given its excellent tissue characterization and lack of ionizing radiation. There are, however, challenges to 3D printing from MRI data as well. Here we review the basics of 3D printing, explore the current strengths and weaknesses of printing from MRI data as they pertain to model accuracy, and discuss considerations in the design of MRI sequences for 3D printing. Finally, we explore the future of 3D printing and MRI, including creative applications and new materials.
Exploring Chemical Space with the Alchemical Derivatives.
Balawender, Robert; Welearegay, Meressa A; Lesiuk, Michał; De Proft, Frank; Geerlings, Paul
2013-12-10
In this paper, we verify the usefulness of the alchemical derivatives in the prediction of chemical properties. We concentrate on the stability of the transmutation products, where the term "transmutation" means the change of the nuclear charge at an atomic site at constant number of electrons. As illustrative transmutations showing the potential of the method in exploring chemical space, we present some examples of increasing complexity starting with the deprotonation, continuing with the transmutation of the nitrogen molecule, and ending with the substitution of isoelectronic B-N units for C-C units and N units for C-H units in carbocyclic systems. The basis set influence on the qualitative and quantitative accuracies of the alchemical predictions was investigated. The alchemical deprotonation energy (from the second order Taylor expansion) correlates well with the vertical deprotonation energy and can be used as a preliminary indicator for the experimental deprotonation energy. The results of calculations for the BN derivatives of benzene and pyrene show that this method has great potential for efficient and accurate scanning of chemical space.
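The "second order Taylor expansion" mentioned above refers to expanding the electronic energy in the nuclear charges at fixed electron number N; in its standard form, with the first derivatives acting as the alchemical potentials,

```latex
E(\mathbf{Z}+\Delta\mathbf{Z}) \approx E(\mathbf{Z})
  + \sum_{A}\left(\frac{\partial E}{\partial Z_A}\right)_{N}\Delta Z_A
  + \frac{1}{2}\sum_{A,B}\left(\frac{\partial^{2} E}{\partial Z_A\,\partial Z_B}\right)_{N}\Delta Z_A\,\Delta Z_B .
```

Here ΔZ_A encodes the transmutation, e.g. ΔZ = -1 on a proton for deprotonation, or paired +1/-1 changes on adjacent C atoms for a B-N substitution; the exact notation is ours, not the paper's.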
Wang, Zhifei; Xie, Yanming; Wang, Yongyan
2011-10-01
Computerized extraction of information from the Chinese medicine literature is more convenient than hand searching: it simplifies the search process and improves accuracy. Among the many automated extraction methods now in use, regular expressions are particularly well suited to extracting useful information for research. This article focuses on applying regular expressions to information extraction from the Chinese medicine literature. Two practical examples are reported: extracting the "case number" (a non-terminology item) and the "efficacy rate" (with subgroups identified from related information), illustrating how information can be extracted from the Chinese medicine literature by this method.
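To make the idea concrete, here is a minimal sketch of the two extraction targets using Python's re module; the sentence and the patterns are illustrative only, and real literature requires broader pattern variants than shown here.

```python
import re

text = "共纳入患者86例，治疗组总有效率为93.0%，对照组总有效率为81.4%。"

# Illustrative patterns only; real texts need more variants.
case_number = re.search(r"(\d+)\s*例", text)                      # "case number"
efficacy = re.findall(r"总有效率[为达]?\s*(\d+(?:\.\d+)?)\s*%", text)  # "efficacy rate"

print(case_number.group(1))   # -> 86
print(efficacy)               # -> ['93.0', '81.4']
```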
NASA Astrophysics Data System (ADS)
Zhang, Chunwei; Zhao, Hong; Zhu, Qian; Zhou, Changquan; Qiao, Jiacheng; Zhang, Lu
2018-06-01
Phase-shifting fringe projection profilometry (PSFPP) is a three-dimensional (3D) measurement technique widely adopted in industry measurement. It recovers the 3D profile of measured objects with the aid of the fringe phase. The phase accuracy is among the dominant factors that determine the 3D measurement accuracy. Evaluation of the phase accuracy helps refine adjustable measurement parameters, contributes to evaluating the 3D measurement accuracy, and facilitates improvement of the measurement accuracy. Although PSFPP has been deeply researched, an effective, easy-to-use phase accuracy evaluation method remains to be explored. In this paper, methods based on the uniform-phase coded image (UCI) are presented to accomplish phase accuracy evaluation for PSFPP. These methods work on the principle that the phase value of a UCI can be manually set to any value, and once the phase value of a UCI pixel is the same as that of a pixel of a corresponding sinusoidal fringe pattern, their phase accuracy values are approximately equal. The proposed methods provide feasible approaches to evaluating the phase accuracy for PSFPP. Furthermore, they can be used to experimentally research the properties of the random and gamma phase errors in PSFPP without the aid of a mathematical model to express random phase error or a large-step phase-shifting algorithm. In this paper, some novel and interesting phenomena are experimentally uncovered with the aid of the proposed methods.
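For context, the phase in PSFPP is recovered from N phase-shifted fringe images by the standard N-step algorithm. The sketch below simulates patterns with a known, manually set phase (emulating the UCI idea that the true phase is known by construction), retrieves the phase, and reports the error; the image size, modulation, and noise level are arbitrary choices, and this is not the authors' exact evaluation procedure.

```python
import numpy as np

def retrieve_phase(frames):
    """N-step phase-shifting: frames[k] = A + B*cos(phi + 2*pi*k/N)."""
    n = len(frames)
    deltas = 2 * np.pi * np.arange(n) / n
    s = sum(f * np.sin(d) for f, d in zip(frames, deltas))
    c = sum(f * np.cos(d) for f, d in zip(frames, deltas))
    return np.arctan2(-s, c)

# Emulate the UCI idea: the true phase is set by hand, then we check how
# additive noise degrades the retrieved phase.
true_phi = 1.0
frames = [128 + 100 * np.cos(true_phi + 2 * np.pi * k / 4)
          + np.random.normal(0, 2.0, size=(480, 640)) for k in range(4)]
err = retrieve_phase(frames) - true_phi
print(f"RMS phase error: {np.sqrt((err**2).mean()):.4f} rad")
```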
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality.
Drach-Zahavy, Anat; Broyer, Chaya; Dagan, Efrat
2017-09-01
Shared mental models are crucial for constructing mutual understanding of the patient's condition during a clinical handover. Yet scant research, if any, has empirically explored the mental models of the parties involved in a clinical handover. This study aimed to examine the similarities among the mental models of incoming and outgoing nurses, and to test their accuracy by comparing them with the mental models of expert nurses. This was a cross-sectional study exploring nurses' mental models via the concept mapping technique across 40 clinical handovers. Data were collected via concept mapping of the incoming, outgoing, and expert nurses' mental models (120 concept maps in total). Similarity and accuracy indexes for concepts and associations were calculated to compare the different maps. About one fifth of the concepts emerged in both outgoing and incoming nurses' concept maps (concept similarity = 23% ± 10.6). Concept accuracy indexes were 35% ± 18.8 for incoming and 62% ± 19.6 for outgoing nurses' maps. Although incoming nurses absorbed a smaller number of concepts and associations (23% and 12%, respectively), they partially closed the gap (35% and 22%, respectively) relative to expert nurses' maps. The correlations between concept similarities and incoming as well as outgoing nurses' concept accuracy were significant (r = 0.43, p < 0.01; r = 0.68, p < 0.01, respectively). Finally, in 90% of the maps, outgoing nurses added information concerning the processes enacted during the shift, beyond the expert nurses' gold standard. Two seemingly contradictory processes in the handover were identified: "information loss", captured by the low similarity indexes between the mental models of incoming and outgoing nurses, and "information restoration", based on the accuracy indexes of the mental models of the incoming nurses. Based on mental model theory, we propose possible explanations for these processes and derive implications for how to improve a clinical handover.
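One plausible operationalization of the concept similarity index is the overlap between the concept sets of the two maps; the study's exact definition may differ, so the Jaccard-style sketch below is only illustrative.

```python
def concept_similarity(map_a, map_b):
    """Shared concepts as a fraction of all concepts in either map (Jaccard)."""
    a, b = set(map_a), set(map_b)
    return len(a & b) / len(a | b)

# Hypothetical handover concepts for one outgoing/incoming nurse pair.
outgoing = {"vitals", "pain level", "pending labs", "mobility", "family update"}
incoming = {"vitals", "pain level", "diet"}
print(f"{concept_similarity(outgoing, incoming):.0%}")   # -> 33%
```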
Rapid race perception despite individuation and accuracy goals.
Kubota, Jennifer T; Ito, Tiffany
2017-08-01
Perceivers rapidly process social category information and form stereotypic impressions of unfamiliar others. However, a goal to individuate a target or to accurately predict their behavior can result in individuated impressions. It is unknown how the combination of both accuracy and individuation goals affects perceptual category processing. To explore this, participants were given both the goal to individuate targets and accurately predict behavior. We then recorded event-related brain potentials while participants viewed photos of black and white males along with four pieces of individuating information in the form of descriptions of past behavior. Even with explicit individuation and accuracy task goals, participants rapidly differentiated targets by race within 200 ms. Importantly, this rapid categorical processing did not influence behavioral outcomes as participants made individuated predictions. These findings indicate that individuals engage in category processing even when provided with individuation and accuracy goals, but that this processing does not necessarily result in category-based judgments.
Social Power Increases Interoceptive Accuracy
Moeini-Jazani, Mehrad; Knoeferle, Klemens; de Molière, Laura; Gatti, Elia; Warlop, Luk
2017-01-01
Building on recent psychological research showing that power increases self-focused attention, we propose that having power increases accuracy in perception of bodily signals, a phenomenon known as interoceptive accuracy. Consistent with our proposition, participants in a high-power experimental condition outperformed those in the control and low-power conditions in the Schandry heartbeat-detection task. We demonstrate that the effect of power on interoceptive accuracy is not explained by participants' physiological arousal, affective state, or general intention for accuracy. Rather, consistent with our reasoning that experiencing power shifts attentional resources inward, we show that the effect of power on interoceptive accuracy is dependent on individuals' chronic tendency to focus on their internal sensations. Moreover, we demonstrate that individuals' chronic sense of power also predicts interoceptive accuracy similar to, and independent of, how their situationally induced feeling of power does. We therefore provide further support for the relation between power and enhanced perception of bodily signals. Our findings offer a novel perspective (a psychophysiological account) on how power might affect judgments and behavior. We highlight and discuss some of these intriguing possibilities for future research. PMID:28824501
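The Schandry heartbeat-detection task is scored by comparing silently counted heartbeats to the actually recorded beats over several intervals; a minimal sketch of the standard scoring formula, with invented interval counts:

```python
def heartbeat_accuracy(recorded, counted):
    """Mean Schandry heartbeat-detection score across counting intervals."""
    scores = [1 - abs(r - c) / r for r, c in zip(recorded, counted)]
    return sum(scores) / len(scores)

# Three counting intervals: actual beats (e.g., from ECG) vs. counted beats.
print(heartbeat_accuracy(recorded=[35, 45, 60], counted=[31, 40, 52]))  # ~0.88
```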
Ikeda, Hidetoshi; Abe, Takehiko; Watanabe, Kazuo
2010-04-01
Fifty to eighty percent of Cushing disease is diagnosed by typical endocrine responses. Recently, the number of diagnoses of Cushing disease without typical Cushing syndrome has been increasing; therefore, improving ways to determine the localization of the adenoma and making an early diagnosis is important. This study was undertaken to determine the present diagnostic accuracy for Cushing microadenoma and to compare the differences in diagnostic accuracy between MR imaging and PET/MR imaging. During the past 3 years the authors analyzed the diagnostic accuracy in a series of 35 patients with Cushing adenoma that was verified by surgical pituitary exploration. All 35 cases of Cushing disease, including 20 cases of "overt" and 15 cases of "preclinical" Cushing disease, were studied. Superconductive MR images (1.5 or 3.0 T) and composite images from FDG-PET or methionine (MET)-PET and 3.0-T MR imaging were compared with the localization of adenomas verified by surgery. The diagnostic accuracy of superconductive MR imaging for detecting the localization of Cushing microadenoma was only 40%. The causes of unsatisfactory results for superconductive MR imaging were false-negative results (10 cases), false-positive results (6 cases), and instances of double pituitary adenomas (3 cases). In contrast, the accuracy of microadenoma localization using MET-PET/3.0-T MR imaging was 100% and that of FDG-PET/3.0-T MR imaging was 73%. Moreover, the adenoma location was better delineated on MET-PET/MR images than on FDG-PET/MR images. There was no significant difference in maximum standard uptake value of adenomas evaluated by MET-PET between preclinical Cushing disease and overt Cushing disease. Composite MET-PET/3.0-T MR imaging is useful for the improvement of the delineation of Cushing microadenoma and offers high-quality detectability for early-stage Cushing adenoma.
Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter
2017-05-12
A better understanding of the genetic architecture of complex traits can contribute to improve genomic prediction. We hypothesized that genomic variants associated with mastitis and milk production traits in dairy cattle are enriched in hepatic transcriptomic regions that are responsive to intra-mammary infection (IMI). Genomic markers [e.g. single nucleotide polymorphisms (SNPs)] from those regions, if included, may improve the predictive ability of a genomic model. We applied a genomic feature best linear unbiased prediction model (GFBLUP) to implement the above strategy by considering the hepatic transcriptomic regions responsive to IMI as genomic features. GFBLUP, an extension of GBLUP, includes a separate genomic effect of SNPs within a genomic feature, and allows differential weighting of the individual marker relationships in the prediction equation. Since GFBLUP is computationally intensive, we investigated whether a SNP set test could be a computationally fast way to preselect predictive genomic features. The SNP set test assesses the association between a genomic feature and a trait based on single-SNP genome-wide association studies. We applied these two approaches to mastitis and milk production traits (milk, fat and protein yield) in Holstein (HOL, n = 5056) and Jersey (JER, n = 1231) cattle. We observed that a majority of genomic features were enriched in genomic variants that were associated with mastitis and milk production traits. Compared to GBLUP, the accuracy of genomic prediction with GFBLUP was marginally improved (3.2 to 3.9%) in within-breed prediction. The highest increase (164.4%) in prediction accuracy was observed in across-breed prediction. The significance of genomic features based on the SNP set test were correlated with changes in prediction accuracy of GFBLUP (P < 0.05). GFBLUP provides a framework for integrating multiple layers of biological knowledge to provide novel insights into the biological basis of complex traits, and to improve the accuracy of genomic prediction. The SNP set test might be used as a first-step to improve GFBLUP models. Approaches like GFBLUP and SNP set test will become increasingly useful, as the functional annotations of genomes keep accumulating for a range of species and traits.
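GFBLUP is equivalent to a SNP-level model in which markers inside the genomic feature get their own variance component. As a rough analogue, the ridge-regression sketch below shrinks feature SNPs less than the remaining SNPs; a faithful implementation would instead estimate both variance components by REML, so this is only a conceptual sketch with hypothetical inputs.

```python
import numpy as np

def gfblup_like_predict(X_train, y_train, X_test, feature_idx,
                        lam_feature=1.0, lam_rest=50.0):
    """Ridge with lighter shrinkage on feature SNPs -- a rough GFBLUP analogue.

    GFBLUP fits two variance components (feature vs. remaining SNPs); here the
    fixed per-SNP penalties lam_feature < lam_rest play that role.
    """
    p = X_train.shape[1]
    lam = np.full(p, lam_rest)
    lam[list(feature_idx)] = lam_feature      # SNPs inside the genomic feature
    beta = np.linalg.solve(X_train.T @ X_train + np.diag(lam),
                           X_train.T @ y_train)
    return X_test @ beta

rng = np.random.default_rng(3)
X = rng.integers(0, 3, size=(300, 1000)).astype(float)   # toy 0/1/2 genotypes
y = X[:, :50] @ rng.normal(size=50) * 0.1 + rng.normal(size=300)
print(gfblup_like_predict(X[:250], y[:250], X[250:], feature_idx=range(50))[:5])
```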
Development of a Near Ground Remote Sensing System
Zhang, Yanchao; Xiao, Yuzhao; Zhuang, Zaichun; Zhou, Liping; Liu, Fei; He, Yong
2016-01-01
Unmanned Aerial Vehicles (UAVs) have shown great potential in agriculture and are increasingly being developed for agricultural use. Many experiments are still needed to improve their performance and explore new uses, but UAV experiments are constrained by conditions such as weather, location, and the time it takes to prepare for a flight. To promote UAV remote sensing, a near ground remote sensing platform was developed. This platform consists of three major parts: (1) mechanical structures such as a horizontal rail, a vertical cylinder, and a three-axis gimbal; (2) power supply and control parts; and (3) onboard application components. The platform covers five degrees of freedom (DOFs): horizontal, vertical, pitch, roll, and yaw. An STM32 ARM single-chip microcontroller was used as the controller of the whole platform, and another STM32 MCU was used to stabilize the gimbal. The gimbal stabilizer communicates with the main controller via a CAN bus. A multispectral camera was mounted on the gimbal. Software written in C++ was developed as the graphical user interface; operating parameters were set and working status displayed through this software. To test how well the system works, a laser distance meter was used to measure the slide rail's repeat accuracy, and a 3-axis vibration analyzer was used to test system stability. Test results show that the horizontal repeat accuracy was less than 2 mm, the vertical repeat accuracy was less than 1 mm, and vibration was less than 2 g, remaining at an acceptable level. The system has high accuracy and stability and can therefore be used for various near ground remote sensing studies. PMID:27164111
NASA Astrophysics Data System (ADS)
Shin, Dong-Youn; Kim, Minsung
2017-02-01
Despite the inherent fabrication simplicity of piezo drop-on-demand inkjet printing, the non-uniform deposition of colourants or electroluminescent organic materials leads to faulty display products, and hence, the importance of rapid jetting status inspection and accurate droplet volume measurement increases from a process perspective. In this work, various jetting status inspections and droplet volume measurement methods are reviewed by discussing their advantages and disadvantages, and then, the opportunities for the developed prototype with a scanning mirror are explored. This work demonstrates that jetting status inspection of 384 fictitious droplets can be performed within 17 s with maximum and minimum measurement accuracies of 0.2 ± 0.5 μm for the fictitious droplets of 50 μm in diameter and -1.2 ± 0.3 μm for the fictitious droplets of 30 μm in diameter, respectively. In addition to the new design of an inkjet monitoring instrument with a scanning mirror, two novel methods to accurately measure the droplet volume by amplifying a minute droplet volume difference and then converting to other physical properties are suggested and the droplet volume difference of ±0.3% is demonstrated to be discernible using numerical simulations, even with the low measurement accuracy of 1 μm. When the fact is considered that the conventional vision-based method with a CCD camera requires the optical measurement accuracy less than 25 nm to measure the volume of an in-flight droplet in the nominal diameter of 50 μm at the same volume measurement accuracy, the suggested method with the developed prototype offers a whole new opportunity to inkjet printing for display applications.
Rangachari, Pavani
2008-01-01
CONTEXT/PURPOSE: With the growing momentum toward hospital quality measurement and reporting by public and private health care payers, hospitals face increasing pressures to improve their medical record documentation and administrative data coding accuracy. This study explores the relationship between the organizational knowledge-sharing structure related to quality and hospital coding accuracy for quality measurement. Simultaneously, this study seeks to identify other leadership/management characteristics associated with coding for quality measurement. Drawing upon complexity theory, the literature on "professional complex systems" has put forth various strategies for managing change and turnaround in professional organizations. In so doing, it has emphasized the importance of knowledge creation and organizational learning through interdisciplinary networks. This study integrates complexity, network structure, and "subgoals" theories to develop a framework for knowledge-sharing network effectiveness in professional complex systems. This framework is used to design an exploratory and comparative research study. The sample consists of 4 hospitals, 2 showing "good coding" accuracy for quality measurement and 2 showing "poor coding" accuracy. Interviews and surveys are conducted with administrators and staff in the quality, medical staff, and coding subgroups in each facility. Findings of this study indicate that good coding performance is systematically associated with a knowledge-sharing network structure rich in brokerage and hierarchy (with leaders connecting different professional subgroups to each other and to the external environment), rather than in density (where everyone is directly connected to everyone else). It also implies that for the hospital organization to adapt to the changing environment of quality transparency, senior leaders must undertake proactive and unceasing efforts to coordinate knowledge exchange across physician and coding subgroups and connect these subgroups with the changing external environment.
Deep-learning derived features for lung nodule classification with limited datasets
NASA Astrophysics Data System (ADS)
Thammasorn, P.; Wu, W.; Pierce, L. A.; Pipavath, S. N.; Lampe, P. D.; Houghton, A. M.; Haynor, D. R.; Chaovalitwongse, W. A.; Kinahan, P. E.
2018-02-01
Only a few percent of the indeterminate nodules found in lung CT images are cancer. However, enabling earlier diagnosis is important to avoid invasive procedures or long-term surveillance of the benign nodules. We evaluated a classification framework using radiomics features derived with a machine learning approach from a small data set of indeterminate CT lung nodule images. We used a retrospective analysis of 194 cases with pulmonary nodules in CT images, with or without contrast enhancement, from lung cancer screening clinics. The nodules were contoured by a radiologist and texture features of the lesion were calculated. In addition, semantic features describing shape were categorized. We also explored a Multiband network, a feature derivation path that uses a modified convolutional neural network (CNN) with a Triplet Network, trained to create discriminative feature representations useful for variable-sized nodule classification. The diagnostic accuracy was evaluated for multiple machine learning algorithms using texture, shape, and CNN features. In the CT contrast-enhanced group, the texture or semantic shape features yielded an overall diagnostic accuracy of 80%. Use of a standard deep learning network in the framework for feature derivation yielded features that substantially underperformed compared to texture and/or semantic features. However, the proposed Multiband approach produced results similar in diagnostic accuracy to the texture and semantic features. While the Multiband feature derivation approach did not outperform the texture and/or semantic features, its equivalent performance indicates promise for future improvements to increase diagnostic accuracy. Importantly, the Multiband approach adapts readily to lesions of different sizes without interpolation, and performed well with a relatively small amount of training data.
The diagnostic accuracy of the MyDiagnostick to detect atrial fibrillation in primary care
2014-01-01
Background Atrial fibrillation is very common in people aged 65 or older. This condition increases the risk of death, congestive heart failure and thromboembolic conditions. Many patients with atrial fibrillation are asymptomatic and a cerebrovascular accident (CVA) is often the first clinical presentation. Guidelines concerning the prevention of CVA recommend monitoring the heart rate in patients aged 65 or older. Recently, the MyDiagnostick (Applied Biomedical Systems BV, Maastricht, The Netherlands) was introduced as a new screening tool which might serve as an alternative for the less accurate pulse palpation. This study was designed to explore the diagnostic accuracy of the MyDiagnostick for the detection of atrial fibrillation. Methods A phase II diagnostic accuracy study in a convenience sample of 191 subjects recruited in primary care. The majority of participants were patients with a known history of atrial fibrillation (n = 161). Readings of the MyDiagnostick were compared with electrocardiographic recordings. Sensitivity and specificity and their 95% confidence interval were calculated using 2x2 tables. Results A prevalence of 54% for an atrial fibrillation rhythm was found in the study population at the moment of the study. A combination of three measurements with the MyDiagnostick for each patient showed a sensitivity of 94% (95% CI 87 – 98) and a specificity of 93% (95% CI 85 – 97). Conclusion The MyDiagnostick is an easy-to-use device that showed a good diagnostic accuracy with a high sensitivity and specificity for atrial fibrillation in a convenience sample in primary care. Future research is needed to determine the place of the MyDiagnostick in possible screening or case-finding strategies for atrial fibrillation. PMID:24913608
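The sensitivity/specificity arithmetic from a 2x2 table, together with a 95% confidence interval, is easy to reproduce. The sketch below uses hypothetical counts and assumes a Wilson score interval, since the abstract does not state which interval method was used.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# hypothetical 2x2 counts (tp, fn, tn, fp), not the study's raw data
tp, fn, tn, fp = 97, 6, 82, 6
print("sensitivity %.3f, CI (%.3f, %.3f)" % ((tp / (tp + fn),) + wilson_ci(tp, tp + fn)))
print("specificity %.3f, CI (%.3f, %.3f)" % ((tn / (tn + fp),) + wilson_ci(tn, tn + fp)))
```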
NASA Astrophysics Data System (ADS)
Li-Chee-Ming, J.; Armenakis, C.
2014-11-01
This paper presents the ongoing development of a small unmanned aerial mapping system (sUAMS) that in the future will track its trajectory and perform 3D mapping in near-real time. As both mapping and tracking algorithms require powerful computational capabilities and large data storage facilities, we propose to use the RoboEarth Cloud Engine (RCE) to offload heavy computation and store data to secure computing environments in the cloud. While the RCE's capabilities have been demonstrated with terrestrial robots in indoor environments, this paper explores the feasibility of using the RCE in mapping and tracking applications in outdoor environments by small UAMS. The experiments presented in this work assess the data processing strategies and evaluate the attainable tracking and mapping accuracies using the data obtained by the sUAMS. Testing was performed with an Aeryon Scout quadcopter. It flew over York University, up to approximately 40 metres above the ground. The quadcopter was equipped with a single-frequency GPS receiver providing positioning to about 3 meter accuracies, an AHRS (Attitude and Heading Reference System) estimating the attitude to about 3 degrees, and an FPV (First Person Viewing) camera. Video images captured from the onboard camera were processed using VisualSFM and SURE, which are being reformed as an Application-as-a-Service via the RCE. The 3D virtual building model of York University was used as a known environment to georeference the point cloud generated from the sUAMS' sensor data. The estimated position and orientation parameters of the video camera show increases in accuracy when compared to the sUAMS' autopilot solution, derived from the onboard GPS and AHRS. The paper presents the proposed approach and the results, along with their accuracies.
Spectral-element Method for 3D Marine Controlled-source EM Modeling
NASA Astrophysics Data System (ADS)
Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.
2017-12-01
As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method with the high accuracy of the spectral method. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As high-order complete orthogonal polynomials, GLC polynomials converge exponentially; this allows the matrix elements to be derived analytically and improves the modeling accuracy. Numerical 1D models show that SEM delivers accurate results, and the modeling accuracy improves markedly with increasing SEM order. We further compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that SEM is more effective than the FD method: only with a sufficiently fine mesh can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling with significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
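For readers unfamiliar with the spectral machinery, the Gauss-Lobatto-Chebyshev collocation points are simply the extrema of the Chebyshev polynomial T_N; a short sketch generates them (this is standard spectral-method material, not the authors' code).

```python
import numpy as np

def glc_nodes(N):
    """Gauss-Lobatto-Chebyshev points on [-1, 1]: the N+1 extrema of T_N."""
    return np.cos(np.pi * np.arange(N + 1) / N)

for N in (2, 4, 8):
    print(N, np.round(glc_nodes(N), 4))
```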
Characterizing the SWOT discharge error budget on the Sacramento River, CA
NASA Astrophysics Data System (ADS)
Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.
2013-12-01
The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned for launch in 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty has two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient needed for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly affect the accuracy of the discharge estimate. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope, and width, and mass and momentum constraints. The algorithm is evaluated using simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with instrument error. This experiment shows how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm: the discharge error budget is almost completely dominated by unknown bathymetry and roughness, with 81% of the error variance explained by uncertainties in these parameters. Second, we show how errors in water surface, slope, and width observations influence the accuracy of the discharge estimates. There is significant sensitivity to water-surface, slope, and width errors because the bathymetry and roughness estimates are themselves sensitive to measurement errors. Increasing the water-surface error above 10 cm leads to a correspondingly sharp increase in bathymetry and roughness errors; increasing the slope error above 1.5 cm/km leads to significant degradation through direct error in the discharge estimates; and as the width error increases past 20%, the discharge error budget becomes dominated by the width error. The above two experiments are based on AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
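As a toy illustration of the Bayesian idea (not the mission algorithm), the sketch below uses a Metropolis-Hastings sampler to recover a Manning roughness n and an unknown base cross-sectional area A0 from noisy synthetic discharge observations; the reach geometry, priors, and error levels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def manning_q(n, A, w, s):
    """Manning's equation with a wide-channel approximation (R ~ A / w)."""
    return (A ** (5.0 / 3.0)) / (n * w ** (2.0 / 3.0)) * np.sqrt(s)

# synthetic 'truth' for one reach: roughness n, unknown base area A0, and
# SWOT-like observables (width w, area change dA from height, slope s)
n_true, A0_true = 0.03, 300.0
w = rng.normal(80.0, 2.0, 50)
dA = rng.normal(40.0, 10.0, 50)
s = np.abs(rng.normal(1e-4, 1e-5, 50))
q_obs = manning_q(n_true, A0_true + dA, w, s) * rng.normal(1.0, 0.05, 50)

def log_post(theta):
    n, A0 = theta
    if not (0.01 < n < 0.10 and 100.0 < A0 < 600.0):
        return -np.inf                  # flat prior on a plausible box
    resid = np.log(q_obs) - np.log(manning_q(n, A0 + dA, w, s))
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2

theta, chain = np.array([0.05, 200.0]), []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.002, 5.0])   # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
print("posterior mean (n, A0):", np.mean(chain[5000:], axis=0))
```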
Determining the refractive index of particles using glare-point imaging technique
NASA Astrophysics Data System (ADS)
Meng, Rui; Ge, Baozhen; Lu, Qieni; Yu, Xiaoxue
2018-04-01
A method for measuring the refractive index of a particle from a glare-point image is presented. The spacing of the doublet image of a particle can be determined with high accuracy using auto-correlation and Gaussian interpolation; the refractive index is then obtained from the glare-point separation, and a factor that may influence the accuracy of the glare-point separation is explored. Experiments were carried out for three different kinds of particles, including polystyrene latex particles, glass beads, and water droplets, and the measurement accuracy was improved by a data-fitting method. The results show that the method presented in this paper is feasible and beneficial to applications such as spray and atmospheric composition measurements.
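Gaussian interpolation of a correlation peak is a standard route to sub-pixel accuracy. The sketch below shows the usual three-point log-Gaussian fit on a synthetic peak; this is an assumption about the specific interpolation used, since the abstract gives no formula.

```python
import numpy as np

def gaussian_subpixel_peak(signal):
    """Three-point Gaussian interpolation around the discrete maximum,
    a common way to locate a correlation peak to sub-pixel accuracy."""
    k = int(np.argmax(signal))
    a, b, c = np.log(signal[k - 1]), np.log(signal[k]), np.log(signal[k + 1])
    return k + 0.5 * (a - c) / (a - 2 * b + c)

# the autocorrelation of a doublet image peaks at the glare-point spacing;
# here we just locate a sampled Gaussian peak placed at 20.3 'pixels'
x = np.arange(40)
sig = np.exp(-0.5 * ((x - 20.3) / 2.0) ** 2) + 1e-9
print(gaussian_subpixel_peak(sig))   # ~20.3
```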
Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.
1995-01-01
The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical-time subiterations are subject to time-step limitations in practice that are removed by pseudo-time subiterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
Incorporating spatial context into statistical classification of multidimensional image data
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.
1981-01-01
Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.
Dahal, Govinda; Qayyum, Adnan; Ferreyra, Mariella; Kassim, Hussein; Pottie, Kevin
2014-10-01
This paper explores immigrant community leaders' perspectives on culturally appropriate diabetes education and care. We conducted exploratory workshops followed by focus groups with Punjabi, Nepali, Somali, and Latin American immigrant communities in Ottawa, Ontario. We used the constant comparative method of grounded theory to explore issues of trust and its impact on the access to and effectiveness of care. Detailed inquiry revealed the cross-cutting theme of trust at the "entry" level and in relation to the "accuracy" of diabetes information, as well as the influence of trust on personal "privacy" and on the "uptake" of recommendations. These four dimensions of trust (entry, accuracy, privacy, and intervention) stood out among immigrant community leaders and were considered important attributes of culturally appropriate diabetes education and care. Attention to these dimensions may promote trust at the patient-practitioner level and may also help build trust in the health care system.
Massey, Jessica S; Meares, Susanne; Batchelor, Jennifer; Bryant, Richard A
2015-07-01
Few studies have examined whether psychological distress and pain affect cognitive functioning in the acute to subacute phase (up to 30 days postinjury) following mild traumatic brain injury (mTBI). The current study explored whether acute posttraumatic stress, depression, and pain were associated with performance on a task of selective and sustained attention completed under conditions of increasing cognitive demands (standard, auditory distraction, and dual-task), and on tests of working memory, memory, processing speed, reaction time (RT), and verbal fluency. At a mean of 2.87 days (SD = 2.32) postinjury, 50 adult mTBI participants, consecutive admissions to a Level 1 trauma hospital, completed neuropsychological tests and self-report measures of acute posttraumatic stress, depression, and pain. A series of canonical correlation analyses was used to explore the relationships of a common set of psychological variables to various sets of neuropsychological variables. Significant results were found on the task of selective and sustained attention. Strong relationships were found between psychological variables and speed (r(c) = .56, p = .02) and psychological variables and accuracy (r(c) = .68, p = .002). Pain and acute posttraumatic stress were associated with higher speed scores (reflecting more correctly marked targets) under standard conditions. Acute posttraumatic stress was associated with lower accuracy scores across all task conditions. Moderate but nonsignificant associations were found between psychological variables and most cognitive tasks. Acute posttraumatic stress and pain show strong associations with selective and sustained attention following mTBI. (c) 2015 APA, all rights reserved.
Vegetation index methods for estimating evapotranspiration by remote sensing
Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.
2010-01-01
Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectrometer on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45–0.95, and root mean square errors are in the range of 10–30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially-distributed ET estimates.
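A minimal sketch of the general VI-scaling idea these methods share: reference ET is multiplied by a crop-coefficient-like factor derived from NDVI. The scaling constants below are illustrative placeholders, not calibrated values from the review.

```python
import numpy as np

def eta_from_vi(ndvi, et0, ndvi_bare=0.10, ndvi_full=0.90, k_max=1.1):
    """Scale reference ET (ET0, mm/day) by a crop-coefficient-like factor
    derived from NDVI; all coefficients here are hypothetical."""
    f = np.clip((ndvi - ndvi_bare) / (ndvi_full - ndvi_bare), 0.0, 1.0)
    return k_max * f * et0

print(eta_from_vi(np.array([0.2, 0.5, 0.8]), et0=6.0))  # ETa in mm/day
```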
Priming the holiday spirit: persistent activation due to extraexperimental experiences.
Coane, Jennifer H; Balota, David A
2009-12-01
The concept of activation is a critical component of many models of cognition. A key characteristic of activation is that recent experience with a concept or stimulus increases the accessibility of the corresponding representation. The extent to which increases in accessibility occur as a result of experiences outside of laboratory settings has not been extensively explored. In the present study, we presented lexical stimuli associated with different holidays and festivities over the course of a year in a lexical decision task. When stimulus meaning and time of testing were congruent (e.g., leprechaun in March), response times were faster and accuracy greater than when meaning and time of test were incongruent (e.g., leprechaun in November). Congruency also benefited performance on a surprise free recall task of the items presented earlier in the lexical decision task. The discussion focuses on potential theoretical accounts of this heightened accessibility of time-of-the-year-relevant concepts.
Increasing the lensing figure of merit through higher order convergence moments
NASA Astrophysics Data System (ADS)
Vicinanza, Martina; Cardone, Vincenzo F.; Maoli, Roberto; Scaramella, Roberto; Er, Xinzhong
2018-01-01
The unprecedented quality, the increased data set, and the wide area of ongoing and near future weak lensing surveys allow one to move beyond the standard two-point statistics, thus making it worthwhile to investigate higher order probes. As an interesting step in this direction, we explore the use of higher order moments (HOM) of the convergence field as a way to increase the lensing figure of merit (FoM). To this end, we rely on simulated convergence to first show that HOM can be measured and calibrated so that it is indeed possible to predict them for a given cosmological model provided suitable nuisance parameters are introduced and then marginalized over. We then forecast the accuracy on cosmological parameters from the use of HOM alone and in combination with standard shear power spectra tomography. It turns out that HOM allow one to break some common degeneracies, thus significantly boosting the overall FoM. We also qualitatively discuss possible systematics and how they can be dealt with.
Effect of vibration frequency on biopsy needle insertion force.
Tan, Lei; Qin, Xuemei; Zhang, Qinhe; Zhang, Hongcai; Dong, Hongjian; Guo, Tuodang; Liu, Guowei
2017-05-01
Needle insertion is critical in many clinical procedures, such as biopsy, brachytherapy, and injection therapy. A platform with two degrees of freedom was set up to study the effect of vibration frequency on needle insertion force. The gel phantom deformation at the needle cutting edge and the Voigt model are utilized to develop a dynamic model that explains the relationship between insertion force and needle-tip velocity. The accuracy of this model was verified by performing needle insertions into phantom gel. The effect of vibration on insertion force can be explained as the vibration increasing the needle-tip velocity and thereby the insertion force. In a series of needle insertion experiments with different vibration frequencies, the peak forces were selected for comparison to explore the effect of vibration frequency on needle insertion force. The experimental results indicate that the insertion force at 500 Hz increases by up to 17.9% compared with the force at 50 Hz. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
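A minimal sketch of a Voigt-type force model, in which a spring and damper act in parallel so that a higher tip velocity raises the force; the stiffness and damping constants are hypothetical, not the paper's fitted values.

```python
import numpy as np

def insertion_force(depth_mm, tip_velocity_mm_s, k=0.12, c=0.35):
    """Voigt model (spring and damper in parallel): F = k*x + c*v.
    k and c are placeholder values, not from the study."""
    return k * depth_mm + c * tip_velocity_mm_s

depths = np.linspace(0.0, 40.0, 5)
for v in (2.0, 10.0):   # vibration raises the effective tip velocity (mm/s)
    print(f"v = {v} mm/s ->", np.round(insertion_force(depths, v), 2), "N")
```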
The Value of Photographic Observations in Improving the Accuracy of Satellite Orbits.
1982-02-01
Observations from the Russian AFU-75 cameras in the years 1971-1973 have recently become available, particularly of the balloon-satellite Explorer 19, from the observing stations at Riga.
ERIC Educational Resources Information Center
Goomas, David T.
2012-01-01
The effects of wireless ring scanners, which provided immediate auditory and visual feedback, were evaluated to increase the performance and accuracy of order selectors at a meat distribution center. The scanners not only increased performance and accuracy compared to paper pick sheets, but were also instrumental in immediate and accurate data…
NASA Astrophysics Data System (ADS)
Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.
2016-12-01
The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).
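The training-allocation rule described above (proportional to class occurrence, with a 600-pixel floor and an 8000-pixel cap within a 20,000-pixel budget) can be sketched directly; the single rebalancing pass and the example class counts are our own simplifications.

```python
import numpy as np

def allocate_training(class_counts, total=20000, floor=600, cap=8000):
    """Allocate training pixels proportionally to class occurrence, then
    clamp to the per-class minimum and maximum suggested in the study."""
    counts = np.asarray(class_counts, dtype=float)
    alloc = np.clip(total * counts / counts.sum(), floor, cap)
    # one rebalancing pass: spread the remaining budget over unclamped classes
    free = (alloc > floor) & (alloc < cap)
    if free.any():
        alloc[free] += (total - alloc.sum()) * counts[free] / counts[free].sum()
        alloc = np.clip(alloc, floor, cap)
    return alloc.round().astype(int)

# hypothetical pixel counts for six land cover classes in one Landsat scene
print(allocate_training([900_000, 400_000, 150_000, 40_000, 5_000, 900]))
```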
Outcome Prediction in Mathematical Models of Immune Response to Infection.
Mai, Manuel; Wang, Kun; Huber, Greg; Kirby, Michael; Shattuck, Mark D; O'Hern, Corey S
2015-01-01
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models of the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and to obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
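A rough sketch of the classification step under stated assumptions: a toy two-parameter surrogate stands in for the ODE steady state, parameters are drawn with coefficient of variation v, and a one-versus-all logistic regression is fit with scikit-learn. None of the thresholds or parameter values are from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def outcome(r, c):
    """Toy surrogate for an ODE steady state: net growth decides the class
    (0 = clearance, 1 = chronic, 2 = uncontrolled). Purely illustrative."""
    net = r - c
    return 0 if net < -0.05 else (1 if net < 0.05 else 2)

v = 0.3                                  # coefficient of variation
base = np.array([1.0, 1.0])              # 'physiological' parameter means
X = base * rng.normal(1.0, v, size=(3000, 2))
y = np.array([outcome(r, c) for r, c in X])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = OneVsRestClassifier(LogisticRegression()).fit(Xtr, ytr)
print("prediction accuracy at v = 0.3:", round(clf.score(Xte, yte), 3))
```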
Research on High Accuracy Detection of Red Tide Hyperspectral Based on Deep Learning CNN
NASA Astrophysics Data System (ADS)
Hu, Y.; Ma, Y.; An, J.
2018-04-01
Increasing frequency of red tide outbreaks has been reported around the world. Red tides are of great concern due not only to their adverse effects on human health and marine organisms, but also to their impacts on the economy of the affected areas. This paper puts forward a high-accuracy detection method based on a fully-connected deep CNN detection model with 8 layers to monitor red tide in hyperspectral remote sensing images, and then discusses a glint suppression method for improving the accuracy of red tide detection. The results show that the proposed CNN hyperspectral detection model can detect red tide accurately and effectively. The red tide detection accuracy of the proposed CNN model based on the original image and the filtered image is 95.58% and 97.45%, respectively; compared with the SVM method, the CNN detection accuracy is increased by 7.52% and 2.25%. Compared with the SVM method based on the original image, the red tide CNN detection accuracy based on the filtered image increased by 8.62% and 6.37%. The results also indicate that image glint seriously affects the accuracy of red tide detection.
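As a hedged illustration only, a compact per-pixel spectral CNN is sketched below in PyTorch; the layer sizes and depth are invented and do not reproduce the authors' 8-layer fully-connected detection model.

```python
import torch
import torch.nn as nn

class RedTideNet(nn.Module):
    """Per-pixel spectral classifier; the architecture is illustrative only."""
    def __init__(self, bands=100, classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * (bands // 4), 64), nn.ReLU(),
            nn.Linear(64, classes),
        )

    def forward(self, x):            # x: (batch, 1, bands) reflectance spectra
        return self.net(x)

model = RedTideNet()
print(model(torch.randn(4, 1, 100)).shape)   # torch.Size([4, 2])
```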
Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.
Maldonado, G; Greenland, S
1998-07-01
A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
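The factored-versus-unfactored contrast is easy to reproduce with statsmodels: fit the same Poisson data once with the exposure as a linear trend and once as a set of indicator variables. The simulated dose-response below is illustrative, not from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
dose = rng.integers(0, 4, n)                         # ordered exposure, 4 levels
log_rate = -3.0 + 0.25 * dose + 0.15 * (dose == 3)   # mildly non-log-linear truth
df = pd.DataFrame({"cases": rng.poisson(np.exp(log_rate)), "dose": dose})

trend = smf.poisson("cases ~ dose", data=df).fit(disp=0)        # unfactored
factored = smf.poisson("cases ~ C(dose)", data=df).fit(disp=0)  # indicators
print("AIC trend %.1f vs factored %.1f" % (trend.aic, factored.aic))
```

The factored model spends more parameters (lower bias, higher variance), so which fit wins depends on how non-log-linear the truth is and on the study size, mirroring the paper's conclusion.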
Migrant deaths at the Arizona-Mexico border: Spatial trends of a mass disaster.
Giordano, Alberto; Spradley, M Katherine
2017-11-01
Geographic Information Science (GIScience) technology has been used to document, investigate, and predict patterns that may be of utility in both forensic academic research and applied practice. In examining spatial and temporal trends of the mass disaster that is occurring along the U.S.-Mexico border, other researchers have highlighted predictive patterns for search and recovery efforts as well as water station placement. The purpose of this paper is to use previously collected spatial data of migrant deaths from Arizona to address issues of data uncertainty and data accuracy that affect our understanding of this phenomenon, including local and federal policies that impact the U.S.-Mexico border. The main objective of our study was to explore how the locations of migrant deaths have varied over time. Our results confirm patterns such as a lack of relationship between Border Patrol apprehensions and migrant deaths, as well as highlight new patterns such as the increased positional accuracy of migrant deaths recorded closer to the border. This paper highlights the importance of using positionally accurate data to detect spatio-temporal trends in forensic investigations of mass disasters: without qualitative and quantitative information concerning the accuracy of the data collected, the reliability of the results obtained remains questionable. We conclude by providing a set of guidelines for standardizing the collection and documentation of migrant remains at the U.S.-Mexico border. Copyright © 2017 Elsevier B.V. All rights reserved.
Using time series structural characteristics to analyze grain prices in food insecure countries
Davenport, Frank; Funk, Chris
2015-01-01
Two components of food security monitoring are accurate forecasts of local grain prices and the ability to identify unusual price behavior. We evaluated a method that can both facilitate forecasts of cross-country grain price data and identify dissimilarities in price behavior across multiple markets. This method, characteristic based clustering (CBC), identifies similarities in multiple time series based on structural characteristics in the data. Here, we conducted a simulation experiment to determine if CBC can be used to improve the accuracy of maize price forecasts. We then compared forecast accuracies among clustered and non-clustered price series over a rolling time horizon. We found that the accuracy of forecasts on clusters of time series was equal to or worse than that of forecasts based on individual time series. However, in the following experiment we found that CBC was still useful for price analysis. We used the clusters to explore the similarity of price behavior among Kenyan maize markets. We found that price behavior in the isolated markets of Mandera and Marsabit has become increasingly dissimilar from markets in other Kenyan cities, and that these dissimilarities could not be explained solely by geographic distance. The structural isolation of Mandera and Marsabit that we find in this paper is supported by field studies on food security and market integration in Kenya. Our results suggest that a market with a unique price series (as measured by structural characteristics that differ from neighboring markets) may lack market integration and food security.
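A minimal sketch of the CBC pipeline under our own feature choices: compute a few structural characteristics per series, standardize them, and cluster hierarchically. The feature set and the synthetic series are illustrative, not the paper's exact configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import skew

def characteristics(x):
    """A few structural features: linear-trend strength, lag-1
    autocorrelation, and skewness (a simplified feature set)."""
    t = np.arange(x.size)
    d = x - x.mean()
    return [np.corrcoef(t, x)[0, 1], (d[:-1] * d[1:]).sum() / (d * d).sum(), skew(x)]

rng = np.random.default_rng(0)
series = [np.cumsum(rng.normal(0.1, 1, 120)) for _ in range(5)] + \
         [rng.normal(0, 1, 120) for _ in range(5)]     # two behaviour regimes
F = np.array([characteristics(s) for s in series])
F = (F - F.mean(axis=0)) / F.std(axis=0)               # z-score the features
print(fcluster(linkage(F, method="ward"), t=2, criterion="maxclust"))
```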
Rapid and Accurate Sequencing of Enterovirus Genomes Using MinION Nanopore Sequencer.
Wang, Ji; Ke, Yue Hua; Zhang, Yong; Huang, Ke Qiang; Wang, Lei; Shen, Xin Xin; Dong, Xiao Ping; Xu, Wen Bo; Ma, Xue Jun
2017-10-01
Knowledge of an enterovirus genome sequence is very important in epidemiological investigation to identify transmission patterns and ascertain the extent of an outbreak. The MinION sequencer is increasingly used to sequence various viral pathogens in many clinical situations because of its long reads, portability, real-time accessibility of sequenced data, and very low initial costs. However, information is lacking on MinION sequencing of enterovirus genomes. In this proof-of-concept study using Enterovirus 71 (EV71) and Coxsackievirus A16 (CA16) strains as examples, we established an amplicon-based whole genome sequencing method using MinION. We explored the accuracy, minimum sequencing time, discrimination and high-throughput sequencing ability of MinION, and compared its performance with Sanger sequencing. Within the first minute (min) of sequencing, the accuracy of MinION was 98.5% for the single EV71 strain and 94.12%-97.33% for 10 genetically-related CA16 strains. In as little as 14 min, 99% identity was reached for the single EV71 strain, and in 17 min (on average), 99% identity was achieved for 10 CA16 strains in a single run. MinION is suitable for whole genome sequencing of enteroviruses with sufficient accuracy and fine discrimination and has the potential as a fast, reliable and convenient method for routine use. Copyright © 2017 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
Gebker, Rolf; Mirelis, Jesus G; Jahnke, Cosima; Hucko, Thomas; Manka, Robert; Hamdan, Ashraf; Schnackenburg, Bernhard; Fleck, Eckart; Paetsch, Ingo
2010-09-01
The purpose of this study was to determine the influence of left ventricular (LV) hypertrophy and geometry on the diagnostic accuracy of wall motion and additional perfusion imaging during high-dose dobutamine/atropine stress magnetic resonance for the detection of coronary artery disease. Combined dobutamine stress magnetic resonance (DSMR)-wall motion and DSMR-perfusion imaging was performed in a single session in 187 patients scheduled for invasive coronary angiography. Patients were classified into 4 categories on the basis of LV mass (normal, ≤ 81 g/m² in men and ≤ 62 g/m² in women) and relative wall thickness (RWT) (normal, <0.45) as follows: normal geometry (normal mass, normal RWT), concentric remodeling (normal mass, increased RWT), concentric hypertrophy (increased mass, increased RWT), and eccentric hypertrophy (increased mass, normal RWT). Wall motion and perfusion images were interpreted sequentially, with observers blinded to other data. Significant coronary artery disease was defined as ≥ 70% stenosis. In patients with increased LV concentricity (defined by an RWT ≥ 0.45), sensitivity and accuracy of DSMR-wall motion were significantly reduced (63% and 73%, respectively; P < 0.05) compared with patients without increased LV concentricity (90% and 88%, respectively; P < 0.05). Although accuracy of DSMR-perfusion was higher than that of DSMR-wall motion in patients with concentric hypertrophy (82% versus 71%; P < 0.05), accuracy of DSMR-wall motion was superior to DSMR-perfusion (90% versus 85%; P < 0.05) in patients with eccentric hypertrophy. The accuracy of DSMR-wall motion is influenced by LV geometry. In patients with concentric remodeling and concentric hypertrophy, additional first-pass perfusion imaging during high-dose dobutamine stress improves the diagnostic accuracy for the detection of coronary artery disease.
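The four-group geometry classification stated in the abstract maps directly to code; this sketch uses exactly the published cut-offs (LV mass 81/62 g/m² by sex, RWT 0.45).

```python
def lv_geometry(mass_index_g_m2, rwt, male):
    """Four-group LV classification from the abstract: mass cut-offs of
    81 g/m^2 (men) / 62 g/m^2 (women); RWT cut-off of 0.45."""
    hypertrophy = mass_index_g_m2 > (81 if male else 62)
    concentric = rwt >= 0.45
    if hypertrophy:
        return "concentric hypertrophy" if concentric else "eccentric hypertrophy"
    return "concentric remodeling" if concentric else "normal geometry"

print(lv_geometry(95, 0.50, male=True))    # concentric hypertrophy
print(lv_geometry(70, 0.40, male=False))   # eccentric hypertrophy
```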
Cognitive factors in the close visual and magnetic particle inspection of welds underwater.
Leach, J; Morris, P E
1998-06-01
Underwater close visual inspection (CVI) and magnetic particle inspection (MPI) are major components of the commercial diver's job of nondestructive testing and the maintenance of subsea structures. We explored the accuracy of CVI in Experiment 1 and that of MPI in Experiment 2 and observed high error rates (47% and 24%, respectively). Performance was strongly correlated with embedded figures and visual search tests and was unrelated to length of professional diving experience, formal inspection qualification, or age. Cognitive tests of memory for designs, spatial relations, dotted outlines, and block design failed to correlate with performance. Actual or potential applications of this research include more reliable inspection reporting, increased effectiveness from current inspection techniques, and directions for the refinement of subsea inspection equipment.
Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.
Bedggood, Phillip; Metha, Andrew
2010-01-01
Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.
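For concreteness, the "simplistic traditional approach" can be read as nearest-reference spot assignment within a fixed search radius; the sketch below implements that reading (our interpretation, not the paper's code). An iterative variant would re-center each search window using previously matched spots.

```python
import numpy as np

def sort_spots(spots, references, search_radius):
    """Traditional sorting: pair each lenslet reference position with the
    nearest detected spot, rejecting matches beyond the search radius."""
    pairs = {}
    for i, ref in enumerate(references):
        d = np.linalg.norm(spots - ref, axis=1)
        j = int(np.argmin(d))
        if d[j] <= search_radius:
            pairs[i] = spots[j]        # local wavefront slope ~ (spot - ref) / f
    return pairs

pitch = 10.0
refs = np.array([(x, y) for x in range(5) for y in range(5)], float) * pitch
rng = np.random.default_rng(2)
spots = refs + rng.normal(0.0, 1.5, refs.shape)    # aberration displacements
print(len(sort_spots(spots, refs, pitch / 2)), "of", len(refs), "matched")
```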
Inhomogeneous fluid of penetrable-spheres: Application of the random phase approximation
NASA Astrophysics Data System (ADS)
Xiang, Yan; Frydel, Derek
2017-05-01
The focus of the present work is the application of the random phase approximation (RPA), derived for inhomogeneous fluids [Frydel and Ma, Phys. Rev. E 93, 062112 (2016)], to penetrable-spheres. As penetrable-spheres transform into hard-spheres with increasing interaction strength, they provide an interesting case for exploring the RPA, its shortcomings and limitations, and the weak- versus strong-coupling limits. The present study takes up two scenarios: a one-component fluid and a two-component fluid with symmetric interactions. In the latter case, the mean-field contributions cancel out and any contributions from particle interactions are accounted for by correlations. The accuracy of the RPA for this case is the result of a somewhat lucky cancellation of errors.
3D Printed Shock Mitigating Structures
NASA Astrophysics Data System (ADS)
Schrand, Amanda; Elston, Edwin; Dennis, Mitzi; Metroke, Tammy; Chen, Chenggang; Patton, Steven; Ganguli, Sabyasachi; Roy, Ajit
Here we explore the durability and shock-mitigating potential of solid and cellular 3D printed polymers and conductive inks under high strain rate, compressive shock wave, and high-g acceleration conditions. Our initial designs include a simple circuit with 4 resistors embedded into circular discs and a complex cylindrical gyroid shape. A novel ink consisting of silver-coated carbon black nanoparticles in a thermoplastic polyurethane was used as the trace material. One version of the disc design has the advantage of allowing disassembly after testing for direct failure analysis. After increasing impacts, printed and traditionally potted circuits were examined for functionality; additionally, in the open disc design, trace cracking and delamination of resistors could be observed directly. In a parallel study, we examined the shock-mitigating behavior of 3D printed cellular gyroid structures on a Split Hopkinson Pressure Bar (SHPB). We explored alterations to the classic SHPB setup for testing the low-impedance cellular samples to most accurately reflect the stress state inside the sample (strain rates from 700 to 1750 s^-1). We found that the gyroid can effectively absorb the impact, resulting in crushing of the structure. Future studies aim to tailor the unit cell dimensions for particular frequencies, increase print accuracy, and optimize material compositions for conductivity and adhesion to manufacture more durable devices.
Risk assessment in the management of newly diagnosed classical Hodgkin lymphoma.
Connors, Joseph M
2015-03-12
Treatment of Hodgkin lymphoma is associated with 2 major types of risk: that the treatment may fail to cure the disease or that the treatment will prove unacceptably toxic. Careful assessment of the amount of the lymphoma (tumor burden), its behavior (extent of invasion or specific organ compromise), and host related factors (age; coincident systemic infection; and organ dysfunction, especially hematopoietic, cardiac, or pulmonary) is essential to optimize outcome. Elaborately assembled prognostic scoring systems, such as the International Prognostic Factors Project score, have lost their accuracy and value as increasingly effective chemotherapy and supportive care have been developed. Identification of specific biomarkers derived from sophisticated exploration of Hodgkin lymphoma biology is bringing promise of further improvement in targeted therapy in which effectiveness is increased at the same time off-target toxicity is diminished. Parallel developments in functional imaging are providing additional potential to evaluate the efficacy of treatment while it is being delivered, allowing dynamic assessment of risk during chemotherapy and adaptation of the therapy in real time. Risk assessment in Hodgkin lymphoma is continuously evolving, promising ever greater precision and clinical relevance. This article explores the past usefulness and the emerging potential of risk assessment for this imminently curable malignancy. © 2015 by The American Society of Hematology.
Chung, Rebecca K; Kim, Una Olivia; Basir, Mir Abdul
2018-04-01
To improve informed medical decision-making, principles of family-centered neonatal care recommend that parents have access to their child's medical record on an ongoing basis during neonatal intensive care unit (NICU) hospitalization. Currently, many NICUs do not allow independent parent access to the child's electronic medical record (EMR) during hospitalization. We undertook a cross-sectional survey pilot study of medical professionals and parents to explore opinions regarding this practice. Inclusion criteria: 18 years old, English-literate, legal guardian of patients admitted to the NICU for 14 days. NICU medical professionals included physicians, nurse practitioners, nurses, and respiratory therapists. Medical professionals believed parent access would make their work more difficult, increase time spent documenting and updating families, and make them more liable to litigation and hesitant to chart sensitive information. However, parents felt that they lacked control over their child's care and desired direct access to the EMR. Parents believed such access would improve the accuracy of their child's medical chart and increase advocacy and understanding of their child's illness. NICU parents and medical professionals have differing perspectives on independent parental access to the child's EMR. More research is needed to explore the potential of independent parental EMR access to further improve family-centered neonatal care.
Sutton, Jennifer E.; Buset, Melanie; Keller, Mikayla
2014-01-01
A number of careers involve tasks that place demands on spatial cognition, but it is still unclear how and whether skills acquired in such applied experiences transfer to other spatial tasks. The current study investigated the association between pilot training and the ability to form a mental survey representation, or cognitive map, of a novel, ground-based, virtual environment. Undergraduate students who were engaged in general aviation pilot training and controls matched to the pilots on gender and video game usage freely explored a virtual town. Subsequently, participants performed a direction estimation task that tested the accuracy of their cognitive map representation of the town. In addition, participants completed the Object Perspective Test and rated their spatial abilities. Pilots were significantly more accurate than controls at estimating directions but did not differ from controls on the Object Perspective Test. Locations in the town were visited at a similar rate by the two groups, indicating that controls' relatively lower accuracy was not due to failure to fully explore the town. Pilots' superior performance is likely due to better online cognitive processing during exploration, suggesting the spatial updating they engage in during flight transfers to a non-aviation context. PMID:24603608
Ultrasound visual feedback treatment and practice variability for residual speech sound errors
Preston, Jonathan L.; McCabe, Patricia; Rivera-Campos, Ahmed; Whittle, Jessica L.; Landry, Erik; Maas, Edwin
2014-01-01
Purpose The goals were to (1) test the efficacy of a motor-learning based treatment that includes ultrasound visual feedback for individuals with residual speech sound errors, and (2) explore whether the addition of prosodic cueing facilitates speech sound learning. Method A multiple baseline single subject design was used, replicated across 8 participants. For each participant, one sound context was treated with ultrasound plus prosodic cueing for 7 sessions, and another sound context was treated with ultrasound but without prosodic cueing for 7 sessions. Sessions included ultrasound visual feedback as well as non-ultrasound treatment. Word-level probes assessing untreated words were used to evaluate retention and generalization. Results For most participants, increases in accuracy of target sound contexts at the word level were observed with the treatment program regardless of whether prosodic cueing was included. Generalization between onset singletons and clusters was observed, as well as generalization to sentence-level accuracy. There was evidence of retention during post-treatment probes, including at a two-month follow-up. Conclusions A motor-based treatment program that includes ultrasound visual feedback can facilitate learning of speech sounds in individuals with residual speech sound errors. PMID:25087938
NASA Technical Reports Server (NTRS)
Kiehl, J. T.; Briegleb, B. P.
1992-01-01
The clear sky greenhouse effect is defined in terms of the outgoing longwave clear sky flux at the top of the atmosphere. Recently, interest in the magnitude of the clear sky greenhouse effect has increased due to the archiving of the clear sky flux quantity through the Earth Radiation Budget Experiment (ERBE). The present study investigates to what degree of accuracy this flux can be analyzed by using independent atmospheric and surface data in conjunction with a detailed longwave radiation model. The conclusion from this comparison is that for most regions over oceans the analyzed fluxes agree to within the accuracy of the ERBE-retrieved fluxes (+/- 5 W/sq m). However, in regions where deep convective activity occurs, the ERBE fluxes are significantly higher (10-15 W/sq m) than the calculated fluxes. This bias can arise from either cloud contamination problems or variability in water vapor amount. It is argued that the use of analyzed fluxes may provide a more consistent clear sky flux data set for general circulation modeling validation. Climate implications from the analyzed fluxes are explored. Finally, results for obtaining longwave surface fluxes over the oceans are presented.
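The definition above reduces to a one-line calculation. Below is a minimal Python sketch, with illustrative input values that are assumptions rather than numbers from the study, computing the clear-sky greenhouse effect as surface blackbody emission minus the clear-sky outgoing longwave flux at the top of the atmosphere:

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def clear_sky_greenhouse(ts, olr_clear):
        # G_clear = sigma * Ts^4 - F_clear (W m^-2), per the definition above
        return SIGMA * ts**4 - olr_clear

    # Illustrative tropical-ocean values (assumed): Ts = 300 K,
    # clear-sky OLR = 290 W m^-2
    print(clear_sky_greenhouse(300.0, 290.0))  # ~169.4 W m^-2

A 10-15 W/sq m bias in the retrieved clear-sky flux, as reported over deep convective regions, therefore propagates one-for-one into the inferred greenhouse effect.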
Liu, Guozheng; Zhao, Yusheng; Gowda, Manje; Longin, C. Friedrich H.; Reif, Jochen C.; Mette, Michael F.
2016-01-01
Bread-making quality traits are central targets for wheat breeding. The objectives of our study were to (1) examine the presence of major-effect QTLs for quality traits in a Central European elite wheat population, (2) explore the optimal strategy for predicting the hybrid performance for wheat quality traits, and (3) investigate the effects of marker density and the composition and size of the training population on the accuracy of prediction of hybrid performance. In total, 135 inbred lines of Central European bread wheat (Triticum aestivum L.) and 1,604 hybrids derived from them were evaluated for seven quality traits in up to six environments. The 135 parental lines were genotyped using a 90k single-nucleotide polymorphism array. Genome-wide association mapping initially suggested the presence of several quantitative trait loci (QTLs), but cross-validation rather indicated the absence of major-effect QTLs for all quality traits except for 1000-kernel weight. Genomic selection substantially outperformed marker-assisted selection in predicting hybrid performance. A resampling study revealed that increasing the effective population size in the estimation set of hybrids is important for boosting the accuracy of prediction for an unrelated test population. PMID:27383841
Panoramic imaging and virtual reality — filling the gaps between the lines
NASA Astrophysics Data System (ADS)
Chapman, David; Deacon, Andrew
Close range photogrammetry projects rely upon a clear and unambiguous specification of end-user requirements to inform decisions relating to the format, coverage, accuracy and complexity of the final deliverable. Invariably such deliverables will be a partial and incomplete abstraction of the real world where the benefits of higher accuracy and increased complexity must be traded against the cost of the project. As photogrammetric technologies move into the digital era, computerisation offers opportunities for the photogrammetrist to revisit established mapping traditions in order to explore new markets. One such market is that for three-dimensional Virtual Reality (VR) models for clients who have previously had little exposure to the capabilities, and limitations, of photogrammetry and may have radically different views on the cost/benefit trade-offs in producing geometric models. This paper will present some examples of the authors' recent experience of such markets, drawn from a number of research and commercial projects directed towards the modelling of complex man-made objects. This experience seems to indicate that suitably configured digital image archives may form an important deliverable for a wide range of photogrammetric projects and supplement, or even replace, more traditional CAD models.
Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach
Elgendi, Mohamed; Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab
2018-01-16
Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B/K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR=6 and PRD=1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring. PMID:29337892
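The two reported figures of merit are easy to reproduce. A minimal Python sketch (variable names and the toy signal are illustrative; one common PRD definition is used, and variants with mean removal also exist):

    import numpy as np

    def compression_ratio(original_bits, compressed_bits):
        # CR = original bitstream size / compressed bitstream size
        return original_bits / compressed_bits

    def prd(x, x_rec):
        # Percentage root-mean-square difference between original and
        # reconstructed ECG (non-mean-removed variant)
        x = np.asarray(x, dtype=float)
        x_rec = np.asarray(x_rec, dtype=float)
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

    # Toy check: a 5 Hz sine "ECG" reconstructed with 1% additive noise
    t = np.linspace(0.0, 1.0, 360)
    x = np.sin(2 * np.pi * 5 * t)
    x_rec = x + 0.01 * np.random.default_rng(0).standard_normal(x.size)
    print(compression_ratio(8 * 360, 8 * 60))  # 6.0, cf. the reported CR = 6
    print(prd(x, x_rec))                       # ~1.4, same order as PRD = 1.88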
L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.
Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour
2016-10-01
The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full-attention condition. Attention manipulation reduced priming magnitude in both experiments in L2. Moreover, L2 word retrieval increased reaction times and reduced accuracy on the simultaneous secondary task, protecting its own accuracy and speed.
Stevenage, Sarah V; Bennett, Alice
2017-07-01
One study is presented which explores the biasing effects of irrelevant contextual information on a fingerprint matching task. Bias was introduced by providing the outcomes of a DNA test relating to each fictitious case under consideration. This was engineered to suggest either a match, no match, or an inconclusive outcome, and was thus either consistent, misleading or unbiased depending on the ground truth of each fingerprint pair. The results suggested that, when the difficulty of the fingerprint matching task was measurably increased, participants became more vulnerable to the biasing information. Under such conditions, when performance was good, misleading evidence lowered accuracy, and when performance was weaker, consistent evidence improved accuracy. As such, the results confirmed existing demonstrations of cognitive bias from contextual information in the fingerprint task. Moreover, by taking a process-based approach, it became possible to articulate the concerns, and the potential solutions, at each stage of the workflow. The results offer value for the forensic science community in extending the evidence-base regarding cognitive bias, and in articulating routes to improve the credibility of fingerprint decisions.
Advancing Venus Geophysics with the NF4 VOX Gravity Investigation.
NASA Astrophysics Data System (ADS)
Iess, L.; Mazarico, E.; Andrews-Hanna, J. C.; De Marchi, F.; Di Achille, G.; Di Benedetto, M.; Smrekar, S. E.
2017-12-01
The Venus Origins Explorer (VOX) is a JPL-led New Frontiers 4 mission proposal to answer critical questions about the origin and evolution of Venus. Venus stands out among other planets as Earth's twin, and is a natural target for better understanding our own planet's place, both within our Solar System and among the ever-increasing number of exoplanetary systems. The VOX radio science investigation will make use of an innovative Ka-band transponder provided by the Italian Space Agency (ASI) to map the global gravity field of Venus to much finer resolution and accuracy than the current knowledge, which is based on the NASA Magellan mission. We will present the results of comprehensive simulations performed with the NASA GSFC orbit determination and geodetic parameter estimation software GEODYN, based on a realistic mission scenario, tracking schedule, and high-fidelity Doppler tracking noise model. We will show how the achieved resolution and accuracy help fulfill the geophysical goals of the VOX mission, in particular through the mapping of subsurface crustal density or thickness variations that will inform the composition and origin of the tesserae and help ascertain the heat loss and the importance of tectonism and subduction.
Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.
Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C
2014-02-01
It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high quality prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). The image gradients are sensitive to image distortions, while different local structures in a distorted image suffer different degrees of degradations. This motivates us to explore the use of global variation of gradient based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images combined with a novel pooling strategy-the standard deviation of the GMS map-can predict accurately perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
Pine, Michael; Sonneborn, Mark; Schindler, Joe; Stanek, Michael; Maeda, Jared Lane; Hanlon, Carrie
2012-01-01
The imperative to achieve quality improvement and cost-containment goals is driving healthcare organizations to make better use of existing health information. One strategy, the construction of hybrid data sets combining clinical and administrative data, has strong potential to improve the cost-effectiveness of hospital quality reporting processes, improve the accuracy of quality measures and rankings, and strengthen data systems. Through a two-year contract with the Agency for Healthcare Research and Quality, the Minnesota Hospital Association launched a pilot project in 2007 to link hospital clinical information to administrative data. Despite some initial challenges, this project was successful. Results showed that the use of hybrid data allowed for more accurate comparisons of risk-adjusted mortality and risk-adjusted complications across Minnesota hospitals. These increases in accuracy represent an important step toward targeting quality improvement efforts in Minnesota and provide important lessons that are being leveraged through ongoing projects to construct additional enhanced data sets. We explore the implementation challenges experienced during the Minnesota Pilot Project and their implications for hospitals pursuing similar data-enhancement projects. We also highlight the key lessons learned from the pilot project's success.
Tournier, Marie; Molimard, Mathieu; Titier, Karine; Cougnard, Audrey; Bégaud, Bernard; Gbikpi-Benissan, Georges; Verdoux, Hélène
2007-07-30
Psychoactive substance use is a risk factor for suicidal behavior and current intoxication increases the likelihood of serious intentional drug overdose (IDO). The objective was to assess the accuracy of information on substance use recorded in medical charts using toxicological assays as a reference in subjects admitted for IDO to an emergency department. Patients (n=1190) consecutively admitted for IDO were included. Information on substance use was recorded in routine practice by the emergency staff and toxicological assays (cannabis, opiate, buprenorphine, amphetamine/ecstasy, cocaine, LSD) were carried out in urine samples collected as part of routine management. The information on substance use was recorded in medical charts for 24.4% of subjects. Over a quarter of subjects (27.5%) were positive for toxicological assays. Recorded substance use allowed correct classification of nearly 80% of subjects. However, specificity (88.6%) was better than sensitivity (54.2%). Compared with toxicological assays, medical records allowed identification of only half of the subjects with current substance use. The usefulness of systematic toxicological assays during hospitalization for IDO should be assessed in further studies exploring whether such information allows medical management to be modified and contributes to improving prognosis.
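The quoted rates imply an approximate 2x2 table that readers can reconstruct. A short Python sketch (counts are back-calculated from the percentages above and rounded, so they are approximations rather than the study's raw data):

    n = 1190
    pos = round(0.275 * n)   # assay-positive subjects (~327)
    neg = n - pos            # assay-negative subjects (~863)
    tp = round(0.542 * pos)  # substance use recorded and assay positive
    tn = round(0.886 * neg)  # no use recorded and assay negative

    sensitivity = tp / pos    # ~0.54
    specificity = tn / neg    # ~0.89
    accuracy = (tp + tn) / n  # ~0.79, i.e. "nearly 80%" correctly classified
    print(sensitivity, specificity, accuracy)

The low sensitivity is the clinically important number here: roughly half of assay-positive patients had no substance use noted in their chart.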
Multivariate prediction of upper limb prosthesis acceptance or rejection.
Biddiss, Elaine A; Chau, Tom T
2008-07-01
To develop a model for prediction of upper limb prosthesis use or rejection. A questionnaire exploring factors in prosthesis acceptance was distributed internationally to individuals with upper limb absence through community-based support groups and rehabilitation hospitals. A total of 191 participants (59 prosthesis rejecters and 132 prosthesis wearers) were included in this study. A logistic regression model, a C5.0 decision tree, and a radial basis function neural network were developed and compared in terms of sensitivity (prediction of prosthesis rejecters), specificity (prediction of prosthesis wearers), and overall cross-validation accuracy. The logistic regression and neural network provided comparable overall accuracies of approximately 84 +/- 3%, specificity of 93%, and sensitivity of 61%. Fitting time-frame emerged as the predominant predictor. Individuals fitted within two years of birth (congenital) or six months of amputation (acquired) were 16 times more likely to continue prosthesis use. To increase rates of prosthesis acceptance, clinical directives should focus on timely, client-centred fitting strategies and the development of improved prostheses and healthcare for individuals with high-level or bilateral limb absence. Multivariate analyses are useful in determining the relative importance of the many factors involved in prosthesis acceptance and rejection.
NASA Astrophysics Data System (ADS)
Becker-Reshef, I.; Justice, C. O.; Vermote, E.
2012-12-01
Up-to-date, reliable, global information on crop production prospects is indispensable for informing and regulating grain markets and for instituting effective agricultural policies. The recent price surges in the global grain markets were in large part triggered by extreme weather events in primary grain-export countries. These events raise important questions about the accuracy of current production forecasts and their role in market fluctuations, and highlight the deficiencies in the state of global agricultural monitoring. Satellite-based Earth observations are increasingly utilized as a tool for monitoring agricultural production, as they offer cost-effective, daily, global information on crop growth and extent, and their utility for crop production forecasting has long been demonstrated. Within this context, the Group on Earth Observations developed the Global Agricultural Monitoring (GEOGLAM) initiative, which was adopted by the G20 as part of the action plan on food price volatility and agriculture. The goal of GEOGLAM is to enhance agricultural production estimates through the use of Earth observations. This talk will explore the potential contribution of EO-based methods for improving the accuracy of early production estimates for the main export countries within the framework of GEOGLAM.
Machine Learning and Data Mining for Comprehensive Test Ban Treaty Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russell, S; Vaidya, S
2009-07-30
The Comprehensive Test Ban Treaty (CTBT) is gaining renewed attention in light of growing worldwide interest in mitigating risks of nuclear weapons proliferation and testing. Since the International Monitoring System (IMS) installed the first suite of sensors in the late 1990s, the IMS network has steadily progressed, providing valuable support for event diagnostics. This progress was highlighted at the recent International Scientific Studies (ISS) Conference in Vienna in June 2009, where scientists and domain experts met with policy makers to assess the current status of the CTBT Verification System. A strategic theme within the ISS Conference centered on exploring opportunities for further enhancing the detection and localization accuracy of low magnitude events by drawing upon modern tools and techniques for machine learning and large-scale data analysis. Several promising approaches for data exploitation were presented at the Conference. These are summarized in a companion report. In this paper, we introduce essential concepts in machine learning and assess techniques which could provide both incremental and comprehensive value for event discrimination by increasing the accuracy of the final data product, refining On-Site-Inspection (OSI) conclusions, and potentially reducing the cost of future network operations.
Flux Renormalization in Constant Power Burnup Calculations
Isotalo, Aarno E.; Aalto Univ., Otaniemi; Davidson, Gregory G.; ...
2016-06-15
To more accurately represent the desired power in a constant power burnup calculation, the depletion steps of the calculation can be divided into substeps and the neutron flux renormalized on each substep to match the desired power. Here, this paper explores how such renormalization should be performed, how large a difference it makes, and whether using renormalization affects results regarding the relative performance of different neutronics–depletion coupling schemes. When used with older coupling schemes, renormalization can provide a considerable improvement in overall accuracy. With previously published higher order coupling schemes, which are more accurate to begin with, renormalization has a much smaller effect. Finally, while renormalization narrows the differences in the accuracies of different coupling schemes, their order of accuracy is not affected.
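The renormalization itself amounts to rescaling the flux on every substep so that the computed power matches the target before each depletion solve. A schematic Python sketch follows; the function names and the substep loop are hypothetical placeholders, not the API of any particular depletion code:

    def renormalize_flux(flux, power_of, target_power):
        # Power is, to first order, linear in the flux level, so a single
        # multiplicative rescaling matches the desired power.
        return flux * target_power / power_of(flux)

    def deplete_step(flux, nuclides, dt, n_sub, power_of, target_power, deplete):
        # Divide one depletion step into n_sub substeps, renormalizing
        # the flux at the start of each substep.
        sub_dt = dt / n_sub
        for _ in range(n_sub):
            flux = renormalize_flux(flux, power_of, target_power)
            nuclides = deplete(nuclides, flux, sub_dt)  # Bateman solve
        return nuclides

Note that power_of(flux) drifts between substeps because the nuclide field, and hence the energy released per reaction, evolves; that drift is exactly what the per-substep renormalization corrects.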
Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations
NASA Astrophysics Data System (ADS)
Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong
2017-08-01
Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observed probabilities differ depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistical data that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation in conjunction with a quantitative evaluation that uses fracture simulations and verification against natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5 per square degree (0.5°⁻²), then maximum accuracy occurs at a grid size of 1° × 1°.
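For readers unfamiliar with the estimator, its core is a geometric reweighting: a fracture's probability of intersecting the scanline scales with the cosine of the angle between the scanline and the fracture normal, so each observation is weighted by the inverse of that cosine. A minimal Python sketch; the 20° blind-zone cutoff is a conventional choice, not a value taken from this paper:

    import numpy as np

    def terzaghi_weights(normals, scanline, blind_zone_deg=20.0):
        # normals: (m, 3) fracture-plane normals for m observed fractures
        # scanline: (3,) direction vector of the scanline
        normals = np.asarray(normals, dtype=float)
        s = np.asarray(scanline, dtype=float)
        s = s / np.linalg.norm(s)
        n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
        cos_a = np.abs(n @ s)  # |cos(angle between scanline and normal)|
        w = 1.0 / np.clip(cos_a, 1e-9, None)  # Terzaghi weight
        # Blind zone: fracture plane nearly parallel to the scanline,
        # i.e. its normal nearly perpendicular to the scanline
        w[cos_a < np.sin(np.radians(blind_zone_deg))] = 0.0
        return w

Fractures in the blind zone are assigned zero weight rather than an enormous correction, which is precisely why accuracy inside the blind zone has drawn most of the attention in the literature.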
MATISSE: A novel tool to access, visualize and analyse data from planetary exploration missions
NASA Astrophysics Data System (ADS)
Zinzi, A.; Capria, M. T.; Palomba, E.; Giommi, P.; Antonelli, L. A.
2016-04-01
The increasing number and complexity of planetary exploration space missions require new tools to access, visualize and analyse data to improve their scientific return. The ASI Science Data Center (ASDC) addresses this need with the web tool MATISSE (Multi-purpose Advanced Tool for the Instruments of the Solar System Exploration), which allows the visualization of single observations or real-time computed high-order products, directly projected on the three-dimensional model of the selected target body. With MATISSE it is no longer necessary to download huge quantities of data or to write specific code for every instrument analysed, which greatly encourages studies based on the joint analysis of different datasets. In addition, the extremely high-resolution output, usable offline with Python-based free software, together with files readable by specific GIS software, makes it a valuable tool for further processing the data at the best spatial accuracy available. MATISSE's modular structure permits the addition of new missions or tasks and, thanks to dedicated future developments, it will be possible to make it compliant with the Planetary Virtual Observatory standards currently under definition. In this context, an interface to the NASA ODE REST API, through which public repositories can be accessed, has recently been developed.
Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis
Shaukat, Affan; Blacker, Peter C.; Spiteri, Conrad; Gao, Yang
2016-01-01
In recent decades, terrain modelling and reconstruction techniques have increased research interest in precise short and long distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is in relation to autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper will first review current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; then we will propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis will be presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation. PMID:27879625
Factors affecting GEBV accuracy with single-step Bayesian models.
Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng
2018-01-01
A single-step approach to obtaining genomic predictions was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more strongly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the 5- and 50-QTL scenarios. The SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait was controlled by fewer QTL.
Church, Peter C; Greer, Mary-Louise C; Cytter-Kuint, Ruth; Doria, Andrea S; Griffiths, Anne M; Turner, Dan; Walters, Thomas D; Feldman, Brian M
2017-05-01
Magnetic resonance enterography (MRE) is increasingly relied upon for noninvasive assessment of intestinal inflammation in Crohn disease. However very few studies have examined the diagnostic accuracy of individual MRE signs in children. We have created an MR-based multi-item measure of intestinal inflammation in children with Crohn disease - the Pediatric Inflammatory Crohn's MRE Index (PICMI). To inform item selection for this instrument, we explored the inter-rater agreement and diagnostic accuracy of individual MRE signs of inflammation in pediatric Crohn disease and compared our findings with the reference standards of the weighted Pediatric Crohn's Disease Activity Index (wPCDAI) and C-reactive protein (CRP). In this cross-sectional single-center study, MRE studies in 48 children with diagnosed Crohn disease (66% male, median age 15.5 years) were reviewed by two independent radiologists for the presence of 15 MRE signs of inflammation. Using kappa statistics we explored inter-rater agreement for each MRE sign across 10 anatomical segments of the gastrointestinal tract. We correlated MRE signs with the reference standards using correlation coefficients. Radiologists measured the length of inflamed bowel in each segment of the gastrointestinal tract. In each segment, MRE signs were scored as either binary (0-absent, 1-present), or ordinal (0-absent, 1-mild, 2-marked). These segmental scores were weighted by the length of involved bowel and were summed to produce a weighted score per patient for each MRE sign. Using a combination of wPCDAI≥12.5 and CRP≥5 to define active inflammation, we calculated area under the receiver operating characteristic curve (AUC) for each weighted MRE sign. Bowel wall enhancement, wall T2 hyperintensity, wall thickening and wall diffusion-weighted imaging (DWI) hyperintensity were most commonly identified. Inter-rater agreement was best for decreased motility and wall DWI hyperintensity (kappa≥0.64). Correlation between MRE signs and wPCDAI was higher than with CRP. AUC was highest (≥0.75) for ulcers, wall enhancement, wall thickening, wall T2 hyperintensity and wall DWI hyperintensity. Some MRE signs had good inter-rater agreement and AUC for detection of inflammation in children with Crohn disease.
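For readers unfamiliar with the agreement statistic used here, Cohen's kappa compares the observed agreement between two raters with the agreement expected by chance. A minimal Python sketch with illustrative ratings, not data from this study:

    import numpy as np

    def cohens_kappa(r1, r2):
        # r1, r2: categorical ratings from two raters, equal length
        r1, r2 = np.asarray(r1), np.asarray(r2)
        po = np.mean(r1 == r2)  # observed agreement
        pe = sum(np.mean(r1 == c) * np.mean(r2 == c)  # chance agreement
                 for c in np.union1d(r1, r2))
        return (po - pe) / (1.0 - pe)

    # Two radiologists scoring an MRE sign as absent (0) / present (1)
    rad1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
    rad2 = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
    print(cohens_kappa(rad1, rad2))  # 0.6

On this scale, the kappa ≥ 0.64 reported above for decreased motility and wall DWI hyperintensity indicates substantial agreement between the two radiologists.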
Bouchard, Amy E; Corriveau, Hélène; Milot, Marie-Hélène
2017-08-01
Timing deficits can have a negative impact on the lives of survivors of chronic stroke. Studies evaluating ways to improve timing post stroke are scarce. The goal of the study was to evaluate the impact of a single session of haptic guidance (HG) and error amplification (EA) robotic training interventions on the improvement of post-stroke timing accuracy. Thirty-four survivors of chronic stroke were randomly assigned to HG or EA. Participants played a computerized pinball-like game with their affected hand positioned in a robot that either helped them perform better (HG) or worse (EA) during the task. A baseline and a retention phase preceded and followed HG and EA, respectively, in order to assess their efficiency at improving absolute timing errors. The impact of the side of the stroke lesion on the participants' performance during the timing task was also explored for each training group. An improvement in timing performance was only noted following HG (8.9 ± 4.9 ms versus 7.8 ± 5.3 ms, p = 0.032). Moreover, for the EA group only, participants with a left-sided stroke lesion showed a worsening in performance as compared to those with a right-sided stroke lesion (p = 0.001). Helping survivors of chronic stroke perform a timing-based task is beneficial to learning. Future studies should explore longer and more frequent HG training sessions in order to further promote post-stroke motor recovery. Implications for Rehabilitation: Timing is crucial for the accomplishment of daily tasks. Studies dedicated to improving timing are scarce in the literature, even though timing deficits are common post stroke. This innovative study evaluated the impact of a single session of haptic guidance (HG) and error amplification (EA) robotic training interventions on improvements in timing accuracy among survivors of chronic stroke. HG robotic training improves timing accuracy more than EA among survivors of chronic stroke.
Lunardini, Francesca; Bertucco, Matteo; Casellato, Claudia; Bhanpuri, Nasir; Pedrocchi, Alessandra; Sanger, Terence D.
2015-01-01
Motor speed and accuracy are both affected in childhood dystonia. Thus, deriving a speed-accuracy function is an important metric for assessing motor impairments in dystonia. Previous work in dystonia studied the speed-accuracy trade-off during point-to-point tasks. To achieve a more relevant measurement of functional abilities in dystonia, the present study investigates upper-limb kinematics and electromyographic activity of 8 children with dystonia and 8 healthy children during a trajectory-constrained child-relevant task that emulates self-feeding with a spoon and requires continuous monitoring of accuracy. The speed-accuracy trade-off is examined by changing the spoon size to create different accuracy demands. Results demonstrate that the trajectory-constrained speed-accuracy relation is present in both groups, but it is altered in dystonia in terms of increased slope and offset towards longer movement times. Findings are consistent with the hypothesis of increased signal-dependent noise in dystonia, which may partially explain the slow and variable movements observed in dystonia. PMID:25895910
Alarcón-Ríos, Lucía; Velo-Antón, Guillermo; Kaliontzopoulou, Antigoni
2017-04-01
The study of morphological variation among and within taxa can shed light on the evolution of phenotypic diversification. In the case of urodeles, the dorso-ventral view of the head captures most of the ontogenetic and evolutionary variation of the entire head, which is a structure with a high potential for being a target of selection due to its relevance in ecological and social functions. Here, we describe a non-invasive geometric morphometric procedure for exploring morphological variation in the external dorso-ventral view of the urodele head. To explore the accuracy of the method and its potential for describing morphological patterns, we applied it to two populations of Salamandra salamandra gallaica from NW Iberia. Using landmark-based geometric morphometrics, we detected differences in head shape between populations and sexes, and an allometric relationship between shape and size. We also determined that not all differences in head shape are due to size variation, suggesting intrinsic shape differences across sexes and populations. These morphological patterns had not been previously explored in S. salamandra, despite the high levels of intraspecific diversity within this species. The methodological procedure presented here makes it possible to detect shape variation at a very fine scale and avoids the drawbacks of using cranial samples, thus increasing the possibilities of using collection specimens and live animals for exploring dorsal head shape variation and its evolutionary and ecological implications in urodeles.
Improving accuracy of Plenoptic PIV using two light field cameras
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Timothy
2017-11-01
Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single-camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. It is found that the addition of a second camera improves the accuracy in all three directions and nearly eliminates any differences between them. This improvement is illustrated with synthetic and real experiments conducted on a vortex ring with one and two plenoptic cameras.
Throwing speed and accuracy in baseball and cricket players.
Freeston, Jonathan; Rooney, Kieron
2014-06-01
Throwing speed and accuracy are both critical to sports performance but cannot be optimized simultaneously. This speed-accuracy trade-off (SATO) is evident across a number of throwing groups but remains poorly understood. The goal was to describe the SATO in baseball and cricket players and determine the speed that optimizes accuracy. Twenty grade-level baseball and cricket players performed 10 throws at 80% and 100% of maximal throwing speed (MTS) toward a cricket stump. Baseball players then performed a further 10 throws at 70%, 80%, 90%, and 100% of MTS toward a circular target. Baseball players threw faster with greater accuracy than cricket players at both speeds. Both groups demonstrated a significant SATO, as vertical error increased with increases in speed; the trade-off was worse for cricket players than baseball players. Accuracy was optimized at 70% of MTS for baseball players. Throwing athletes should decrease speed when accuracy is critical. Cricket players could adopt baseball training practices to improve throwing performance.
Seli, Paul; Cheyne, James Allan; Smilek, Daniel
2012-03-01
In two studies of a GO-NOGO task assessing sustained attention, we examined the effects of (1) altering speed-accuracy trade-offs through instructions (emphasizing both speed and accuracy or accuracy only) and (2) auditory alerts distributed throughout the task. Instructions emphasizing accuracy reduced errors and changed the distribution of GO trial RTs. Additionally, correlations between errors and increasing RTs produced a U-function; excessively fast and slow RTs accounted for much of the variance of errors. Contrary to previous reports, alerts increased errors and RT variability. The results suggest that (1) standard instructions for sustained attention tasks, emphasizing speed and accuracy equally, produce errors arising from attempts to conform to the misleading requirement for speed, which become conflated with attention-lapse produced errors and (2) auditory alerts have complex, and sometimes deleterious, effects on attention. We argue that instructions emphasizing accuracy provide a more precise assessment of attention lapses in sustained attention tasks.
Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry
NASA Astrophysics Data System (ADS)
Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki
2015-08-01
In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and the VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully recover the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ~1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as the corotation and/or the inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that the BIC can be used to discriminate between different dynamical models of the Galaxy.
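The model-selection criterion mentioned is simple to apply once each model's maximized likelihood is in hand. A minimal Python sketch with hypothetical numbers, not values from the paper:

    import math

    def bic(log_likelihood, n_params, n_obs):
        # BIC = k*ln(n) - 2*ln(L_hat); the model with the lower BIC wins
        return n_params * math.log(n_obs) - 2.0 * log_likelihood

    # Hypothetical comparison for 500 maser sources: a spiral+bar model
    # with 12 parameters vs. a pure spiral model with 8 parameters
    print(bic(-1200.0, 12, 500))  # ~2474.6
    print(bic(-1215.0,  8, 500))  # ~2479.7 (loses despite fewer parameters)

The k*ln(n) term penalizes extra parameters, so a more complex dynamical model is preferred only when it improves the fit enough to pay for them.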
No evidence for unethical amnesia for imagined actions: A failed replication and extension.
Stanley, Matthew L; Yang, Brenda W; De Brigard, Felipe
2018-03-12
In a recent study, Kouchaki and Gino (2016) suggest that memory for unethical actions is impaired, regardless of whether such actions are real or imagined. However, as we argue in the current study, their claim that people develop "unethical amnesia" confuses two distinct and dissociable memory deficits: one affecting the phenomenology of remembering and another affecting memory accuracy. To further investigate whether unethical amnesia affects memory accuracy, we conducted three studies exploring unethical amnesia for imagined ethical violations. The first study (N = 228) attempts to directly replicate the only study from Kouchaki and Gino (2016) that includes a measure of memory accuracy. The second study (N = 232) attempts again to replicate these accuracy effects from Kouchaki and Gino (2016), while including several additional variables meant to potentially help in finding the effect. The third study (N = 228) is an attempted conceptual replication using the same paradigm as Kouchaki and Gino (2016), but with a new vignette describing a different moral violation. We did not find an unethical amnesia effect involving memory accuracy in any of our three studies. These results cast doubt upon the claim that memory accuracy is impaired for imagined unethical actions. Suggestions for further ways to study memory for moral and immoral actions are discussed.
ERIC Educational Resources Information Center
Cox, Philip L.
This material is an instructional unit on measuring and estimating. A variety of activities are used with manipulative devices, worksheets, and discussion questions included. Major topics are estimating lengths, accuracy of measurement, metric system, scale drawings, and conversion between different units. A teacher's guide is also available.…
Improving the accuracy of k-nearest neighbor using local mean based and distance weight
NASA Astrophysics Data System (ADS)
Syaliman, K. U.; Nababan, E. B.; Sitompul, O. S.
2018-03-01
In k-nearest neighbor (kNN) classification, the class of a new data point is normally determined by a simple majority vote, which may ignore the similarities among data and allows ties between majority classes that can lead to misclassification. In this research, we propose an approach to resolve these majority-vote issues by calculating a distance weight using a combination of local mean based k-nearest neighbor (LMKNN) and distance weight k-nearest neighbor (DWKNN). The accuracy of the results is compared to the accuracy acquired from the original kNN method using several datasets from the UCI Machine Learning repository, Kaggle and Keel, such as ionosphere, iris, voice gender, lower back pain, and thyroid. In addition, the proposed method is also tested using real data from a public senior high school in the city of Tualang, Indonesia. Results show that the combination of LMKNN and DWKNN was able to increase the classification accuracy of kNN, with an average accuracy gain on test data of 2.45% and a highest gain of 3.71% on the lower back pain symptoms dataset. For the real data, the gain in accuracy is as high as 5.16%.
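One plausible reading of the proposed combination, sketched in Python below: for each class, average that class's k nearest neighbors into a local mean (the LMKNN part), then score classes by inverse distance to their local means (the DWKNN-style weighting). This is an illustrative reconstruction, not the authors' exact algorithm:

    import numpy as np

    def lm_dw_knn_predict(X_train, y_train, x, k=5):
        X_train = np.asarray(X_train, dtype=float)
        y_train = np.asarray(y_train)
        x = np.asarray(x, dtype=float)
        scores = {}
        for c in np.unique(y_train):
            Xc = X_train[y_train == c]
            d = np.linalg.norm(Xc - x, axis=1)
            nearest = Xc[np.argsort(d)[:k]]    # k nearest neighbors of class c
            local_mean = nearest.mean(axis=0)  # LMKNN: per-class local mean
            dist = np.linalg.norm(x - local_mean)
            scores[c] = 1.0 / (dist + 1e-12)   # DWKNN: inverse-distance weight
        return max(scores, key=scores.get)

Because the decision is a weighted score against per-class local means rather than a raw vote, ties between majority classes cannot occur and outliers among the neighbors are damped.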
Scalar excursions in large-eddy simulations
Matheou, Georgios; Dimotakis, Paul E.
2016-08-31
Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics supports the main hypothesis of the current study: that unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution. The maximum unphysical excursion increases as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (sample size), which increases with resolution. In contrast, the volume fraction of unphysical excursions decreases with resolution because the SGS models explored perform better at higher grid resolution.
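The two diagnostics tracked above, the maximum excursion and the volume fraction outside boundary values, are easy to extract from a simulated field. A minimal Python sketch for a passive scalar bounded by [0, 1] on a uniform grid (the bounds and the toy field are illustrative assumptions):

    import numpy as np

    def excursion_stats(phi, lo=0.0, hi=1.0):
        # Maximum unphysical excursion beyond [lo, hi] and the fraction
        # of cells outside the bounds (uniform grid: cell count = volume)
        phi = np.asarray(phi, dtype=float)
        max_excursion = max((phi - hi).max(), (lo - phi).max(), 0.0)
        frac_outside = np.mean((phi > hi) | (phi < lo))
        return max_excursion, frac_outside

    # Toy field with small over/undershoots mimicking dispersive error
    rng = np.random.default_rng(1)
    phi = np.clip(rng.normal(0.5, 0.25, (64, 64, 64)), -0.02, 1.03)
    print(excursion_stats(phi))  # (0.03, small fraction outside bounds)

On a non-uniform grid the volume fraction would need cell-volume weighting rather than a plain mean.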
Hong, Keum-Shik; Khan, Muhammad Jawad
2017-01-01
In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided. PMID:28790910
Phonological therapy in jargon aphasia: effects on naming and neologisms.
Bose, Arpita
2013-01-01
Jargon aphasia is one of the most intractable forms of aphasia with limited recommendation on amelioration of associated naming difficulties and neologisms. The few naming therapy studies that exist in jargon aphasia have utilized either semantic or phonological approaches, but the results have been equivocal. Moreover, the effect of therapy on the characteristics of neologisms is less explored. This study investigates the effectiveness of a phonological naming therapy (i.e., phonological component analysis-PCA) on picture-naming abilities and on quantitative and qualitative changes in neologisms for an individual with jargon aphasia (FF). FF showed evidence of jargon aphasia with severe naming difficulties and produced a very high proportion of neologisms. A single-subject multiple probe design across behaviours was employed to evaluate the effects of PCA therapy on the accuracy for three sets of words. In therapy, a phonological components analysis chart was used to identify five phonological components (i.e. rhymes, first sound, first sound associate, final sound and number of syllables) for each target word. Generalization effects-change in per cent accuracy and error pattern-were examined comparing pre- and post-therapy responses on the Philadelphia Naming Test, and these responses were analysed to explore the characteristics of the neologisms. The quantitative change in neologisms was measured by change in the proportion of neologisms from pre- to post-therapy and the qualitative change was indexed by the phonological overlap between target and neologism. As a consequence of PCA therapy, FF showed a significant improvement in his ability to name the treated items. His performance in maintenance and follow-up phases remained comparable with his performance during the therapy phases. Generalization to other naming tasks did not show a change in accuracy, but distinct differences in error pattern (an increase in proportion of real word responses and a decrease in proportion of neologisms) were observed. Notably, the decrease in neologisms occurred with a corresponding trend for increase in the phonological similarity between the neologisms and the targets. This study demonstrated the effectiveness of a phonological therapy for improving naming abilities and reducing the amount of neologisms in an individual with severe jargon aphasia. The positive outcome of this research is encouraging, as it provides evidence for effective therapies for jargon aphasia and also emphasizes that use of the quality and quantity of errors may provide a sensitive outcome measure to determine therapy effectiveness, in particular for client groups who are difficult to treat.
Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction
Bandeira e Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose
2017-01-01
Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models were fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear kernel Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK, and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. PMID:28455415
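The two kernel methods differ only in how the marker matrix enters the relationship matrix. A minimal Python sketch of both constructions; marker centering and the Gaussian bandwidth heuristic are common choices assumed here, not the paper's exact estimation procedure:

    import numpy as np

    def gblup_kernel(X):
        # Linear GBLUP kernel: K = Z Z' / m from centered markers
        Z = X - X.mean(axis=0)
        return Z @ Z.T / X.shape[1]

    def gaussian_kernel(X, h=None):
        # Nonlinear Gaussian kernel: K_ij = exp(-d_ij^2 / h)
        sq = np.sum(X**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T  # squared distances
        if h is None:
            h = np.median(d2[d2 > 0])  # bandwidth heuristic (assumption)
        return np.exp(-d2 / h)

    # Toy marker matrix: 6 hybrids x 50 SNPs coded 0/1/2
    X = np.random.default_rng(2).integers(0, 3, size=(6, 50)).astype(float)
    print(gblup_kernel(X).shape, gaussian_kernel(X).shape)  # (6, 6) (6, 6)

The Gaussian kernel lets similarity decay nonlinearly with marker distance, which is one common explanation for the GK gains reported above.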
Larmer, S G; Sargolzaei, M; Schenkel, F S
2014-05-01
Genomic selection requires a large reference population to accurately estimate single nucleotide polymorphism (SNP) effects. In some Canadian dairy breeds, the available reference populations are not large enough for accurate estimation of SNP effects for traits of interest. If marker phase is highly consistent across multiple breeds, it is theoretically possible to increase the accuracy of genomic prediction for one or all breeds by pooling several breeds into a common reference population. This study investigated the extent of linkage disequilibrium (LD) in 5 major dairy breeds using a 50,000 (50K) SNP panel and in 3 of the same breeds using the 777,000 (777K) SNP panel. Correlation of pair-wise SNP phase was also investigated on both panels. The level of LD was measured using the squared correlation of alleles at 2 loci (r²), and the consistency of SNP gametic phase across breeds was assessed by correlating the signed square roots of these values. Because of the high cost of the 777K panel, the accuracy of imputation from lower-density marker panels [6,000 (6K) or 50K] was examined both within breed and using a multi-breed reference population in Holstein, Ayrshire, and Guernsey. Imputation was carried out using FImpute V2.2 and Beagle 3.3.2 software. Imputation accuracies were then calculated as both the proportion of correctly filled-in SNP (concordance rate) and allelic R². Computation time was also explored to determine the efficiency of the different imputation algorithms. Analysis showed that LD values >0.2 were found in all breeds at distances at or shorter than the average adjacent pair-wise distance between SNP on the 50K panel. Correlations of r values, however, did not reach high levels (<0.9) at these distances. High correlations of SNP phase between breeds were observed (>0.94) at the average pair-wise distances of the 777K SNP panel. High concordance rates (0.968-0.995) and allelic R² (0.946-0.991) were found for all breeds when imputation was carried out with FImpute from 50K to 777K. Imputation accuracy for Guernsey and Ayrshire was slightly lower when using Beagle. Computing time was significantly greater with Beagle, all comparable procedures being 9 to 13 times slower than FImpute. These findings suggest that use of a multi-breed reference population might increase prediction accuracy using the 777K SNP panel and that 777K genotypes can be efficiently and effectively imputed from the lower-density 50K SNP panel. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
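The two LD statistics at the core of this record, r² and the signed r used for phase-consistency checks, can be computed directly from phased haplotypes. A minimal sketch (illustrative, not the study's pipeline):

```python
# r^2 between two loci from phased haplotypes (0/1 alleles), plus the signed
# r whose between-breed correlation measures phase consistency.
import numpy as np

def ld_r(hap_a, hap_b):
    """Signed allele correlation r at two loci; r**2 is the LD measure."""
    p_a, p_b = hap_a.mean(), hap_b.mean()
    d = (hap_a * hap_b).mean() - p_a * p_b        # disequilibrium coefficient D
    return d / np.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 1000)
b = np.where(rng.random(1000) < 0.9, a, 1 - a)    # mostly in phase with a
r = ld_r(a, b)
print(round(r, 3), round(r ** 2, 3))
```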
PubChem3D: Conformer generation
2011-01-01
Background
PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigated the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and the force field variant) upon the accuracy of theoretical conformer models, and determined optimal settings for PubChem3D conformer model generation and conformer sampling.
Results
Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands whose 3-D structures were experimentally determined. Different values for the primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. Increasing the energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search gave slightly improved results for larger and more flexible structures. Exclusion of coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows. Overall average accuracy of reproduction of bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy of a theoretical ensemble in reproducing bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value to help ensure that 90% of the sampled theoretical models contain at least one conformer within the RMSD sampling value of a "bioactive" conformation.
Conclusion
Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value based on the relative accuracy of reproducing bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models. PMID:21272340
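The reported regression predicts ensemble RMSD accuracy from size and flexibility. A sketch of that functional form follows; the coefficients here are placeholders, not the published values:

```python
# Hypothetical linear form of the paper's size/flexibility regression;
# coefficients a, b, c are made up for illustration only.
def rmsd_sampling(n_heavy, n_rotor, a=0.02, b=0.08, c=0.3):
    """Predicted RMSD = a*size + b*flexibility + c (placeholder values)."""
    return a * n_heavy + b * n_rotor + c

print(rmsd_sampling(n_heavy=25, n_rotor=5))  # e.g. 1.2 A for a mid-size ligand
```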
Reversing the Course of Forgetting
White, K. Geoffrey; Brown, Glenn S
2011-01-01
Forgetting functions were generated for pigeons in a delayed matching-to-sample task, in which accuracy decreased with increasing retention-interval duration. In baseline training with dark retention intervals, accuracy was high overall. Illumination of the experimental chamber by a houselight during the retention interval impaired performance accuracy by increasing the rate of forgetting. In novel conditions, the houselight was lit at the beginning of a retention interval and then turned off partway through the retention interval. Accuracy was low at the beginning of the retention interval and then increased later in the interval. Thus the course of forgetting was reversed. Such a dissociation of forgetting from the passage of time is consistent with an interference account in which attention or stimulus control switches between the remembering task and extraneous events. PMID:21909163
NASA Astrophysics Data System (ADS)
Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang
2016-10-01
In order to achieve higher accuracy in routine resistance measurement without increasing the complexity and cost of the system circuit of existing methods, this paper presents a novel method that exploits a shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by a sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and the accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can enhance the measuring accuracy by almost one order of magnitude and reduce the root mean square error by a factor of 3.75 under the same measurement conditions. Experimental results show that the novel method significantly improves the measurement accuracy of resistance without increasing the system cost and circuit complexity, which makes it valuable for application in electronic instruments.
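The resolution gain from oversampling can be demonstrated in a few lines: averaging N uncorrelated samples reduces noise by roughly sqrt(N). A small simulation sketch (not the authors' hardware; all values hypothetical):

```python
# Averaging 16 oversampled readings shrinks uncorrelated noise by about
# sqrt(16) = 4, the resolution gain the method relies on.
import numpy as np

rng = np.random.default_rng(2)
true_v = 1.000
n, oversample = 10_000, 16
single = true_v + 0.01 * rng.standard_normal(n)                     # 1 sample/point
avg = true_v + 0.01 * rng.standard_normal((n, oversample)).mean(1)  # 16x oversampled
print(single.std() / avg.std())   # close to 4
```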
Orbit Determination Accuracy for Comets on Earth-Impacting Trajectories
NASA Technical Reports Server (NTRS)
Kay-Bunnell, Linda
2004-01-01
The results presented show the level of orbit determination accuracy obtainable for long-period comets discovered approximately one year before collision with Earth. Preliminary orbits are determined from simulated observations using Gauss' method. Additional measurements are incorporated to improve the solution through the use of a Kalman filter, and include non-gravitational perturbations due to outgassing. Comparisons between observatories in several different circular heliocentric orbits show that observatories in orbits with radii less than 1 AU result in increased orbit determination accuracy for short tracking durations due to increased parallax per unit time. However, an observatory at 1 AU will perform similarly if the tracking duration is increased, and accuracy is significantly improved if additional observatories are positioned at the Sun-Earth Lagrange points L3, L4, or L5. A single observatory at 1 AU capable of both optical and range measurements yields the highest orbit determination accuracy in the shortest amount of time when compared to other systems of observatories.
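The filtering step described above incrementally sharpens the orbit estimate as measurements accumulate. As a toy illustration of that mechanism only (a scalar state, nothing like the full orbital state with outgassing terms), a minimal Kalman measurement update:

```python
# Scalar Kalman measurement update: each observation shrinks the variance p.
def kalman_update(x, p, z, r):
    """Combine estimate x (variance p) with measurement z (variance r)."""
    k = p / (p + r)            # Kalman gain
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95]:           # simulated observations, noise var 0.5
    x, p = kalman_update(x, p, z, 0.5)
print(round(x, 3), round(p, 3))           # uncertainty p decreases each update
```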
Lang, Shona; Armstrong, Nigel; Deshpande, Sohan; Ramaekers, Bram; Grimm, Sabine; de Kock, Shelley; Kleijnen, Jos; Westwood, Marie
2018-01-01
Objective
To explore how the definition of the target condition and post hoc exclusion of participants can limit the usefulness of diagnostic accuracy studies.
Methods
We used data from a systematic review, conducted for a NICE diagnostic assessment of risk scores to inform secondary care decisions about specialist referral for women with suspected ovarian cancer, to explore how the definition of the target condition and post hoc exclusion of participants can limit the usefulness of diagnostic accuracy studies to inform clinical practice.
Results
Fourteen of the studies evaluated the ROMA score; nine used Abbott ARCHITECT tumour marker assays, five used Roche Elecsys. The summary sensitivity estimate (Abbott ARCHITECT) was highest, 95.1% (95% CI: 92.4 to 97.1%), where analyses excluded participants with borderline tumours or malignancies other than epithelial ovarian cancer, and lowest, 75.0% (95% CI: 60.4 to 86.4%), where all participants were included. Results were similar for Roche Elecsys tumour marker assays. Although the number of patients involved was small, data from studies that reported diagnostic accuracy both for the whole study population and with post hoc exclusion of those with borderline or non-epithelial malignancies suggested that patients with borderline tumours or malignancies other than epithelial ovarian cancer account for between 50 and 85% of false-negative ROMA scores.
Conclusions
Our results illustrate the potential consequences of inappropriate population selection in diagnostic studies; women with non-epithelial ovarian cancers or non-ovarian primaries, and those with borderline tumours, may be disproportionately represented among those with false-negative, 'low risk' ROMA scores. These observations highlight the importance of giving careful consideration to how the target condition has been defined when assessing whether the diagnostic accuracy estimates reported in clinical studies will translate into clinical utility in real-world settings.
Performance map of a cluster detection test using extended power
2013-01-01
Background
Conventional power studies possess limited ability to assess the performance of cluster detection tests. In particular, they cannot evaluate the accuracy of the cluster location, which is essential in such assessments. Furthermore, they usually estimate power for one or a few particular alternative hypotheses and thus cannot assess performance over an entire region. Takahashi and Tango developed the concept of extended power, which indicates both the rate of null hypothesis rejection and the accuracy of the cluster location. We propose a systematic assessment method, using extended power, to produce a map showing the performance of cluster detection tests over an entire region.
Methods
To explore the behavior of a cluster detection test on identical cluster types at any possible location, we successively applied four different spatial and epidemiological parameters. These parameters determined four cluster collections, each covering the entire study region. We simulated 1,000 datasets for each cluster and analyzed them with Kulldorff's spatial scan statistic. From the area under the extended power curve, we constructed a map for each parameter set showing the performance of the test across the entire region.
Results
Consistent with previous studies, the performance of the spatial scan statistic increased with the baseline incidence of disease, the size of the at-risk population and the strength of the cluster (i.e., the relative risk). Performance was heterogeneous, however, even for very similar clusters (i.e., similar with respect to the aforementioned factors), suggesting the influence of other factors.
Conclusions
The area under the extended power curve is a single measure of performance and, although it needs further exploration, it is suitable for conducting a systematic spatial evaluation of performance. The performance map we propose enables epidemiologists to assess cluster detection tests across an entire study region. PMID:24156765
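The summary statistic used here, the area under the extended power curve, combines null rejection with cluster-location accuracy. A minimal sketch with hypothetical simulation outcomes:

```python
# Extended power at threshold t = share of simulated datasets where H0 was
# rejected AND the cluster-location accuracy is >= t; the summary measure is
# the area under that curve over t in [0, 1]. Data below are made up.
import numpy as np

rejected = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 1], bool)       # H0 rejected?
loc_acc = np.array([.9, .7, .8, 0, .6, .95, 0, .5, .85, .75])   # location accuracy

thresholds = np.linspace(0, 1, 101)
ext_power = np.array([(rejected & (loc_acc >= t)).mean() for t in thresholds])
auc = ext_power.mean()        # mean over a uniform grid approximates the area
print(round(auc, 3))
```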
Gender Differences in Motor Skills of the Overarm Throw
Gromeier, Michael; Koester, Dirk; Schack, Thomas
2017-01-01
In this cross-sectional study, the qualitative and quantitative throwing performance of male and female athletes (6 to 16 years of age) was analyzed. The goal of this study was to assess whether there were gender-based qualitative and quantitative differences in the throwing performance of young athletes across three different age bands (childhood, pubescence, and adolescence). Furthermore, we explored whether all components of the throwing movement are equally affected by gender differences. Focus was placed on five essential components of action: trunk, forearm, humerus, stepping, and backswing. Children and adolescents (N = 96) were invited to throw three times from three different distances, while aiming at a target placed at shoulder height. The participants were aspiring athletes, competitive in the sport of handball. For analyzing the quality of movement, the component approach of Halverson and Roberton (1984) was used. Throwing accuracy was recorded and used to evaluate the quantitative performance of the throwing movement. Across the three age bands, no statistically significant difference was found between genders in throwing accuracy, i.e., quantitative performance. Regarding the qualitative evaluation of the throwing movement, male and female athletes differed significantly. The component approach yielded higher scores for male than for female participants. As expected, the qualitative and quantitative performance of male and female athletes improved with increasing age. These results suggest that there are gender-specific differences in qualitative throwing performance, but not necessarily in quantitative throwing performance. Further exploration showed that differences in qualitative throwing performance were confined to specific components of action. Male and female athletes demonstrated similar movement patterns in humerus and forearm actions, but differed in trunk, stepping, and backswing actions. PMID:28261142
Effects of atmospheric variations on acoustic system performance
NASA Technical Reports Server (NTRS)
Nation, Robert; Lang, Stephen; Olsen, Robert; Chintawongvanich, Prasan
1993-01-01
Acoustic propagation over medium to long ranges in the atmosphere is subject to many complex, interacting effects. Of particular interest at this point is modeling low frequency (less than 500 Hz) propagation for the purpose of predicting ranges and bearing accuracies at which acoustic sources can be detected. A simple means of estimating how much of the received signal power propagated directly from the source to the receiver and how much was received by turbulent scattering was developed. The correlations between the propagation mechanism and detection thresholds, beamformer bearing estimation accuracies, and beamformer processing gain of passive acoustic signal detection systems were explored.
Computational Approaches to Predict Indices of ...
As nutrient inputs increase, productivity increases and lakes transition from low trophic state (e.g., oligotrophic) to higher trophic states (e.g., eutrophic). These broad trophic state classifications are good predictors of ecosystem health and the potential for ecosystem services (e.g., recreation, aesthetics, and fisheries). Additionally, some ecosystem disservices, such as cyanobacteria blooms, are also associated with increased nutrient inputs. Thus, trophic state can be used as a proxy for cyanobacteria bloom risk. To explore this idea, we construct two random forest models of trophic state (as determined by chlorophyll a concentration). First, we define an "All Variables" model that estimates trophic state with both in situ and universally available data, and then we reduce this to a "GIS Only" model that uses only the universally available data. The "All Variables" model had a root mean square error (RMSE) of 0.09 and an R2 of 0.8, whereas the "GIS Only" model had an RMSE of 0.22 and an R2 of 0.48. Examining the "GIS Only" model (i.e., the model with the broadest applicability), we see that in spite of lower overall accuracy, it still has better than even odds (i.e., prediction probability > 50%) of being correct in more than 1091 of the 1138 lakes included in this model. The "GIS Only" model has tremendous potential for exploring spatial trends at the national level, since the datasets required to parameterize it are universally available.
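A minimal sketch of the "GIS Only" idea, a random forest classifying trophic state from universally available predictors; the features and data below are synthetic stand-ins, not the paper's variables:

```python
# Toy random-forest trophic-state classifier; features are hypothetical
# GIS-derivable predictors, labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((1138, 4))   # e.g. lake area, depth, land use, latitude
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(1138) > 0).astype(int)

model = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())   # cross-validated accuracy
```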
Application of Structure-from-Motion photogrammetry in laboratory flumes
NASA Astrophysics Data System (ADS)
Morgan, Jacob A.; Brogan, Daniel J.; Nelson, Peter A.
2017-01-01
Structure-from-Motion (SfM) photogrammetry has become widely used for topographic data collection in field and laboratory studies. However, the relative performance of SfM against other methods of topographic measurement in a laboratory flume environment has not been systematically evaluated, and there is a general lack of guidelines for SfM application in flume settings. As the use of SfM in laboratory flume settings becomes more widespread, it is increasingly critical to develop an understanding of how to acquire and process SfM data for a given flume size and sediment characteristics. In this study, we: (1) compare the resolution and accuracy of SfM topographic measurements to terrestrial laser scanning (TLS) measurements in laboratory flumes of varying physical dimensions containing sediments of varying grain sizes; (2) explore the effects of different image acquisition protocols and data processing methods on the resolution and accuracy of topographic data derived from SfM techniques; and (3) provide general guidance for image acquisition and processing for SfM applications in laboratory flumes. To investigate the effects of flume size, sediment size, and photo overlap on the density and accuracy of SfM data, we collected topographic data using both TLS and SfM in five flumes with widths ranging from 0.22 to 6.71 m, lengths ranging from 9.14 to 30.48 m, and median sediment sizes ranging from 0.2 to 31 mm. Acquisition time, image overlap, point density, elevation data, and computed roughness parameters were compared to evaluate the performance of SfM against TLS. We also collected images of a pan of gravel where we varied the distance and angle between the camera and sediment in order to explore how photo acquisition affects the ability to capture grain-scale microtopographic features in SfM-derived point clouds. A variety of image combinations and SfM software package settings were also investigated to determine optimal processing techniques. Results from this study suggest that SfM provides topographic data of similar accuracy to TLS, at higher resolution and lower cost. We found that about 100 pixels per grain are required to resolve grain-scale topography. We suggest protocols for image acquisition and SfM software settings to achieve best results when using SfM in laboratory settings. In general, convergent imagery, taken from a higher angle, with at least several overlapping images for each desired point in the flume will result in an acceptable point cloud.
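The ~100 pixels-per-grain guideline translates directly into a required ground sampling distance for a given sediment size. A back-of-envelope helper, assuming a circular grain footprint (an illustration, not the authors' tool):

```python
# GSD needed so a grain of a given diameter covers the target pixel count,
# assuming the grain presents a circular footprint of area pi*d^2/4.
import math

def required_gsd(grain_diam_mm, pixels_per_grain=100):
    """Ground sampling distance (mm/pixel) for the target pixels-per-grain."""
    return grain_diam_mm * math.sqrt(math.pi / 4 / pixels_per_grain)

print(round(required_gsd(31), 2))    # coarsest flume sediment, D50 = 31 mm
print(round(required_gsd(0.2), 4))   # finest, D50 = 0.2 mm
```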
Schiff, Rachel
2012-12-01
The present study explored the speed, accuracy, and reading comprehension of vowelized versus unvowelized scripts among 126 native Hebrew speaking children in second, fourth, and sixth grades. Findings indicated that second graders read and comprehended vowelized scripts significantly more accurately and more quickly than unvowelized scripts, whereas among fourth and sixth graders reading of unvowelized scripts developed to a greater degree than the reading of vowelized scripts. An analysis of the mediation effect for children's mastery of vowelized reading speed and accuracy on their mastery of unvowelized reading speed and comprehension revealed that in second grade, reading accuracy of vowelized words mediated the reading speed and comprehension of unvowelized scripts. In the fourth grade, accuracy in reading both vowelized and unvowelized words mediated the reading speed and comprehension of unvowelized scripts. By sixth grade, accuracy in reading vowelized words offered no mediating effect, either on reading speed or comprehension of unvowelized scripts. The current outcomes thus suggest that young Hebrew readers undergo a scaffolding process, where vowelization serves as the foundation for building initial reading abilities and is essential for successful and meaningful decoding of unvowelized scripts.
NASA Astrophysics Data System (ADS)
Li, Rong; Zhao, Jianhui; Li, Fan
2009-07-01
A gyroscope used as a surveying sensor in the oil industry has been proposed as a good technique for measurement-while-drilling (MWD) to provide real-time monitoring of the position and the orientation of the bottom hole assembly (BHA). However, drifts in the measurements provided by the gyroscope might be prohibitive for long-term utilization of the sensor. Some usual methods introduced to limit these drifts, such as the zero velocity update procedure (ZUPT), are time-consuming and of limited effect. This study explored an in-drilling dynamic alignment (IDA) method for MWD systems that utilize a gyroscope. During a directional drilling process, there are some minutes in the rotary drilling mode when the drill bit combined with the drill pipe is rotated about the spin axis at a certain speed. This speed can be measured and used to determine and limit some of the gyroscope drifts that contribute greatly to the deterioration of long-term performance. A novel laser assembly is designed on the wellhead to count the rotating cycles of the drill pipe. With this provided angular velocity of the drill pipe, drifts of the gyroscope measurements are translated into another form that can be easily tested and compensated. This allows better and faster alignment and limits drifts during the navigation process, both of which reduce long-term navigation errors, thus improving the overall accuracy of the INS-based MWD system. This article concretely explores the novel device on the wellhead designed to measure the rotation of the drill pipe. It is based on laser testing, which is simple and inexpensive, adding only a laser emitter to the existing drilling equipment. Theoretical simulations and analytical approximations exploring the IDA idea have shown improvement in the accuracy of overall navigation and reduction in the time required to achieve convergence. Gyroscope accuracy along the spin axis is mainly improved. It is suggested to use the IDA idea in the rotary mode for alignment. Several other practical aspects of implementing this approach are evaluated and compared.
Preschoolers Mistrust Ignorant and Inaccurate Speakers
ERIC Educational Resources Information Center
Koenig, Melissa A.; Harris, Paul L.
2005-01-01
Being able to evaluate the accuracy of an informant is essential to communication. Three experiments explored preschoolers' (N=119) understanding that, in cases of conflict, information from reliable informants is preferable to information from unreliable informants. In Experiment 1, children were presented with previously accurate and inaccurate…
Global Lunar Topography from the Deep Space Gateway for Science and Exploration
NASA Astrophysics Data System (ADS)
Archinal, B.; Gaddis, L.; Kirk, R.; Edmundson, K.; Stone, T.; Portree, D.; Keszthelyi, L.
2018-02-01
The Deep Space Gateway, in low lunar orbit, could be used to achieve a long-standing goal of lunar science: collecting stereo images over two months to produce a complete, uniform, high-resolution global topographic model of the Moon with known accuracy.
Orff Ensembles: Benefits, Challenges, and Solutions
ERIC Educational Resources Information Center
Taylor, Donald M.
2012-01-01
Playing Orff instruments provides students with a wide variety of opportunities to explore creative musicianship. This article examines the benefits of classroom instrument study, common challenges encountered, and viable teaching strategies to promote student success. The ability to remove notes from barred instruments makes note accuracy more…
This area includes work on whole-building energy modeling, cost-based optimization, and a model accuracy optimization tool used to support the Building America program's teams and energy efficiency goals, as well as a Colorado graduate student exploring enhancements to building optimization in terms of robustness and speed.
Space astrometry project JASMINE
NASA Astrophysics Data System (ADS)
Gouda, N.; Kobayashi, Y.; Yamada, Y.; Yano, Y.; Jasmine Working Group
A Japanese plan for an infrared (z-band: 0.9 μm) space astrometry project, JASMINE, is introduced. JASMINE (Japan Astrometry Satellite Mission for INfrared Exploration) is a satellite designed to measure distances and apparent motions of stars in the bulge of the Milky Way with unprecedented precision. It will measure parallaxes and positions with an accuracy of 10 μarcsec and proper motions with an accuracy of 4 μarcsec/year for stars brighter than z = 14 mag. JASMINE will observe about 10 million stars belonging to the bulge component of our Galaxy. With a completely new "map of the Galactic bulge", it is expected that many new exciting scientific results will be obtained in various fields of astronomy. Presently, JASMINE is in the development phase, with a target launch date around 2015. Overall system (bus) design is ongoing, in cooperation with the Japan Aerospace Exploration Agency (JAXA). Preliminary design of instruments, observing strategy, data reduction, and critical technical issues for JASMINE are described.
Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji
2016-01-01
This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins. Guest Editors: J.C. Gumbart and Sergei Noskov. PMID:26766517
New standards for reducing gravity data: The North American gravity database
Hinze, W. J.; Aiken, C.; Brozena, J.; Coakley, B.; Dater, D.; Flanagan, G.; Forsberg, R.; Hildenbrand, T.; Keller, Gordon R.; Kellogg, J.; Kucks, R.; Li, X.; Mainville, A.; Morin, R.; Pilkington, M.; Plouff, D.; Ravat, D.; Roman, D.; Urrutia-Fucugauchi, J.; Veronneau, M.; Webring, M.; Winester, D.
2005-01-01
The North American gravity database as well as databases from Canada, Mexico, and the United States are being revised to improve their coverage, versatility, and accuracy. An important part of this effort is revising procedures for calculating gravity anomalies, taking into account our enhanced computational power, improved terrain databases and datums, and increased interest in more accurately defining long-wavelength anomaly components. Users of the databases may note minor differences between previous and revised database values as a result of these procedures. Generally, the differences do not impact the interpretation of local anomalies but do improve regional anomaly studies. The most striking revision is the use of the internationally accepted terrestrial ellipsoid for the height datum of gravity stations rather than the conventionally used geoid or sea level. Principal facts of gravity observations and anomalies based on both revised and previous procedures together with germane metadata will be available on an interactive Web-based data system as well as from national agencies and data centers. The use of the revised procedures is encouraged for gravity data reduction because of the widespread use of the global positioning system in gravity fieldwork and the need for increased accuracy and precision of anomalies and consistency with North American and national databases. Anomalies based on the revised standards should be preceded by the adjective "ellipsoidal" to differentiate anomalies calculated using heights with respect to the ellipsoid from those based on conventional elevations referenced to the geoid. © 2005 Society of Exploration Geophysicists. All rights reserved.
NASA Astrophysics Data System (ADS)
Stuhlmacher, M.; Wang, C.; Georgescu, M.; Tellman, B.; Balling, R.; Clinton, N. E.; Collins, L.; Goldblatt, R.; Hanson, G.
2016-12-01
Global representations of modern-day urban land use and land cover (LULC) extent are becoming increasingly prevalent. Yet considerable uncertainties in the representation of built-environment extent (e.g., global classifications generated from 250 m resolution MODIS imagery or the United States' National Land Cover Database) remain because of the lack of a systematic, globally consistent methodological approach. We aim to increase resolution and accuracy, and to improve upon past efforts, by establishing a data-driven definition of the urban landscape based on Landsat 5, 7 & 8 imagery and ancillary data sets. Continuous and discrete machine learning classification algorithms have been developed in Google Earth Engine (GEE), a powerful online cloud-based geospatial storage and parallel-computing platform. Additionally, thousands of ground truth points have been selected from high resolution imagery to fill the previous lack of accurate data for training and validation. We will present preliminary classification and accuracy assessments for select cities in the United States and Mexico. Our approach has direct implications for development of projected urban growth that is grounded on realistic identification of urbanizing hot-spots, with consequences for local to regional scale climate change, energy demand, water stress, human health, urban-ecological interactions, and efforts used to prioritize adaptation and mitigation strategies to offset large-scale climate change. Future work to apply the built-up detection algorithm globally and yearly is underway in a partnership between GEE, the University of California, San Diego, and Arizona State University.
Spadafore, Maxwell; Najarian, Kayvan; Boyle, Alan P
2017-11-29
Transcription factors (TFs) form a complex regulatory network within the cell that is crucial to cell functioning and human health. While methods to establish where a TF binds to DNA are well established, these methods provide no information describing how TFs interact with one another when they do bind. TFs tend to bind the genome in clusters, and current methods to identify these clusters are either limited in scope, unable to detect relationships beyond motif similarity, or not applied to TF-TF interactions. Here, we present a proximity-based graph clustering approach to identify TF clusters using either ChIP-seq or motif search data. We use TF co-occurrence to construct a filtered, normalized adjacency matrix and use the Markov Clustering Algorithm to partition the graph while maintaining TF-cluster and cluster-cluster interactions. We then apply our graph structure beyond clustering, using it to increase the accuracy of motif-based TFBS searching for an example TF. We show that our method produces small, manageable clusters that encapsulate many known, experimentally validated transcription factor interactions and that our method is capable of capturing interactions that motif similarity methods might miss. Our graph structure is able to significantly increase the accuracy of motif TFBS searching, demonstrating that the TF-TF connections within the graph correlate with biological TF-TF interactions. The interactions identified by our method correspond to biological reality and allow for fast exploration of TF clustering and regulatory dynamics.
NASA Astrophysics Data System (ADS)
Zhang, Ying; Lantz, Nicholas; Guindon, Bert; Jiao, Xianfen
2017-01-01
Accurate and frequent monitoring of land surface changes arising from oil and gas exploration and extraction is a key requirement for the responsible and sustainable development of these resources. Petroleum deposits typically extend over large geographic regions, but much of the infrastructure required for oil and gas recovery takes the form of numerous small-scale features (e.g., well sites, access roads, etc.) scattered over the landscape. Increasing exploitation of oil and gas deposits will increase the presence of these disturbances in heavily populated regions. An object-based approach is proposed to utilize RapidEye satellite imagery to delineate well sites and related access roads in diverse, complex landscapes, where land surface changes also arise from other human activities, such as forest logging and agriculture. A simplified object-based change vector approach, adaptable to operational use, is introduced to identify disturbances on land based on red-green spectral response and the spatial attributes of candidate object size and proximity to roads. Testing of the techniques has been undertaken with RapidEye multitemporal imagery at two test sites in Alberta, Canada: one a predominantly natural forest landscape and the other a landscape dominated by intensive agricultural activities. Accuracies of 84% and 73%, respectively, have been achieved for the identification of well site and access road infrastructure at the two sites based on fully automated processing. Limited manual relabeling of selected image segments can improve these accuracies to 95%.
Exploration of robust operating conditions in inductively coupled plasma mass spectrometry
NASA Astrophysics Data System (ADS)
Tromp, John W.; Pomares, Mario; Alvarez-Prieto, Manuel; Cole, Amanda; Ying, Hai; Salin, Eric D.
2003-11-01
'Robust' conditions, as defined by Mermet and co-workers for inductively coupled plasma (ICP)-atomic emission spectrometry, minimize matrix effects on analyte signals, and are obtained by increasing power and reducing nebulizer gas flow. In ICP-mass spectrometry (MS), it is known that reduced nebulizer gas flow usually leads to more robust conditions such that matrix effects are reduced. In this work, robust conditions for ICP-MS have been determined by optimizing for accuracy in the determination of analytes in a multi-element solution with various interferents (Al, Ba, Cs, K, Na), by varying power, nebulizer gas flow, sample introduction rate and ion lens voltage. The goal of the work was to determine which operating parameters were the most important in reducing matrix effects, and whether different interferents yielded the same robust conditions. Reduction in nebulizer gas flow and in sample input rate led to a significantly decreased interference, while an increase in power seemed to have a lesser effect. Once the other parameters had been adjusted to their robust values, there was no additional improvement in accuracy attainable by adjusting the ion lens voltage. The robust conditions were universal, since, for all the interferents and analytes studied, the optimum was found at the same operating conditions. One drawback to the use of robust conditions was the slightly reduced sensitivity; however, in the context of 'intelligent' instruments, the concept of 'robust conditions' is useful in many cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rioja, M.; Dodson, R.; Malarecki, J.
2011-11-15
Space very long baseline interferometry (S-VLBI) observations at high frequencies hold the prospect of achieving the highest angular resolutions and astrometric accuracies, resulting from the long baselines between ground and satellite telescopes. Nevertheless, space-specific issues, such as limited accuracy in the satellite orbit reconstruction and constraints on the satellite antenna pointing operations, limit the application of conventional phase referencing. We investigate the feasibility of an alternative technique, source frequency phase referencing (SFPR), in the S-VLBI domain. With these investigations we aim to contribute to the design of the next generation of S-VLBI missions. We have used both analytical and simulation studies to characterize the performance of SFPR in S-VLBI observations, applied to astrometry and increased coherence time, and compared these to results obtained using conventional phase referencing. The observing configurations use the specifications of the ASTRO-G mission as their starting point. Our results show that the SFPR technique enables astrometry at 43 GHz, using alternating observations at 22 GHz, regardless of the orbit errors, for most weather conditions and under a wide variety of conditions. The same applies to the increased coherence time for the detection of weak sources. Our studies show that the capability to carry out simultaneous dual-frequency observations enables application to higher frequencies and a general improvement of performance in all cases; hence we recommend its consideration for S-VLBI programs.
Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.
Zhao, Qin
2012-01-01
The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no) and peer performance anchor (95%, 55%, or no). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of anchoring effect, but if anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.
Accurate time delay technology in simulated test for high precision laser range finder
NASA Astrophysics Data System (ADS)
Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi
2015-10-01
With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRF) is becoming higher and higher, so the maintenance demands of LRFs are also rising. In simulated testing of pulsed range finders, distance is represented by time delay (the "time simulates spatial distance" principle), so the key to distance simulation precision lies in an adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method is proposed to improve the accuracy of the circuit delay without increasing the count frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was measured with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay improves from ±0.75 m to ±0.15 m. The accuracy of the simulated distance is thus greatly improved for simulated testing of high-precision pulsed range finders.
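The "time simulates distance" principle makes the reported accuracy figures easy to check: a pulsed LRF converts round-trip delay t to range d = c*t/2, so ±0.15 m corresponds to about ±1 ns of delay accuracy. A quick sketch:

```python
# Round-trip delay needed to simulate a target at a given range.
C = 299_792_458.0  # speed of light, m/s

def delay_for_range(d_m):
    """Round-trip delay (seconds) corresponding to a range of d_m meters."""
    return 2 * d_m / C

print(delay_for_range(0.15) * 1e9)   # ~1.0 ns delay accuracy for +/-0.15 m
print(delay_for_range(0.75) * 1e9)   # ~5.0 ns for the older +/-0.75 m level
```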
NASA Technical Reports Server (NTRS)
Sozer, Emre; Brehm, Christoph; Kiris, Cetin C.
2014-01-01
A survey of gradient reconstruction methods for cell-centered data on unstructured meshes is conducted within the scope of accuracy assessment. Formal order of accuracy, as well as error magnitudes for each of the studied methods, are evaluated on a complex mesh of various cell types through consecutive local scaling of an analytical test function. The tests highlighted several gradient operator choices that can consistently achieve 1st order accuracy regardless of cell type and shape. The tests further offered error comparisons for given cell types, leading to the observation that the "ideal" gradient operator choice is not universal. Practical implications of the results are explored via CFD solutions of a 2D inviscid standing vortex, portraying the discretization error properties. A relatively naive, yet largely unexplored, approach of local curvilinear stencil transformation exhibited surprisingly favorable properties.
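For orientation, one common member of the surveyed family is the least-squares gradient reconstruction from neighboring cell centroids; the minimal unweighted variant below is illustrative and not necessarily the paper's preferred operator:

```python
# Unweighted least-squares gradient at a cell center from neighbor data;
# exact for linear fields, which is the 1st-order-accuracy property above.
import numpy as np

def ls_gradient(xc, phic, xnbrs, phinbrs):
    """Gradient at centroid xc from neighbor centroids xnbrs and values."""
    dx = xnbrs - xc                      # displacement vectors, shape (k, 2)
    dphi = phinbrs - phic                # value differences, shape (k,)
    grad, *_ = np.linalg.lstsq(dx, dphi, rcond=None)
    return grad

xc = np.array([0.0, 0.0])
xn = np.array([[1.0, 0.1], [-0.9, 0.2], [0.1, 1.0], [0.0, -1.1]])
phi = lambda p: 3.0 * p[..., 0] - 2.0 * p[..., 1]    # linear test function
print(ls_gradient(xc, phi(xc), xn, phi(xn)))         # recovers [3, -2]
```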
NASA Astrophysics Data System (ADS)
Liu, Yu-fang; Han, Xin; Shi, De-heng
2008-03-01
Based on Kirchhoff's law, a practical dual-wavelength fiber-optic colorimeter, with optimal working wavelengths centered at 2.1 μm and 2.3 μm, is presented. The effect of emissivity on the precision of the measured temperature has been explored under various circumstances (i.e., temperature, wavelength) and for different materials. In addition, by fitting several typical material emissivity-temperature dependence curves, the influence of irradiation (radiant flux originating from the surroundings) and of surface-reflected radiation on the temperature accuracy is studied. The results show that calibration of the measured temperature for reflected radiant energy is necessary, especially at low target temperature or low target emissivity, and that the temperature accuracy is suitable for requirements in the range of 400-1200 K.
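The dual-wavelength principle can be sketched under the Wien approximation: the ratio of radiances at 2.1 and 2.3 μm yields temperature, with the emissivity ratio acting as the main error source the paper analyzes. An illustrative inversion (not the instrument's calibration):

```python
# Two-color ratio thermometry under the Wien approximation; eps_ratio = 1
# is an assumption, and its error is exactly the emissivity effect studied.
import math

C2 = 1.4388e-2          # second radiation constant, m*K

def wien_radiance(lam, T, eps=1.0):
    return eps * lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_temperature(L1, L2, lam1, lam2, eps_ratio=1.0):
    """Invert the two-color radiance ratio for temperature."""
    num = C2 * (1 / lam2 - 1 / lam1)
    den = math.log(L1 / L2) - math.log(eps_ratio) - 5 * math.log(lam2 / lam1)
    return num / den

lam1, lam2, T_true = 2.1e-6, 2.3e-6, 900.0
L1, L2 = wien_radiance(lam1, T_true), wien_radiance(lam2, T_true)
print(round(ratio_temperature(L1, L2, lam1, lam2), 1))   # recovers 900.0 K
```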
Verification of OpenSSL version via hardware performance counters
NASA Astrophysics Data System (ADS)
Bruska, James; Blasingame, Zander; Liu, Chen
2017-05-01
Many forms of malware and security breaches exist today. One type of breach downgrades a cryptographic program by employing a man-in-the-middle attack. In this work, we explore the utilization of hardware events in conjunction with machine learning algorithms to detect which version of OpenSSL is being run during the encryption process. This allows for the immediate detection of any unknown downgrade attacks in real time. Our experimental results indicated this detection method is both feasible and practical. When trained with normal TLS and SSL data, our classifier was able to detect which protocol was being used with 99.995% accuracy. After the scope of the hardware event recording was enlarged, the accuracy diminished greatly, to 53.244%. Upon removal of TLS 1.1 from the data set, the accuracy returned to 99.905%.
Double Resummation for Higgs Production
NASA Astrophysics Data System (ADS)
Bonvini, Marco; Marzani, Simone
2018-05-01
We present the first double-resummed prediction of the inclusive cross section for the main Higgs production channel in proton-proton collisions, namely, gluon fusion. Our calculation incorporates to all orders in perturbation theory two distinct towers of logarithmic corrections which are enhanced, respectively, at threshold, i.e., large x , and in the high-energy limit, i.e., small x . Large-x logarithms are resummed to next-to-next-to-next-to-leading logarithmic accuracy, while small-x ones to leading logarithmic accuracy. The double-resummed cross section is furthermore matched to the state-of-the-art fixed-order prediction at next-to-next-to-next-to-leading accuracy. We find that double resummation corrects the Higgs production rate by 2% at the currently explored center-of-mass energy of 13 TeV and its impact reaches 10% at future circular colliders at 100 TeV.
NASA Astrophysics Data System (ADS)
Bonnema, Matthew; Sikder, Safat; Miao, Yabin; Chen, Xiaodong; Hossain, Faisal; Ara Pervin, Ismat; Mahbubur Rahman, S. M.; Lee, Hyongki
2016-05-01
Growing populations and increased demand for water are causing an increase in dam and reservoir construction in developing nations. When rivers cross international boundaries, the downstream stakeholders often have little knowledge of upstream reservoir operation practices. Satellite remote sensing, in the form of radar altimetry and multisensor precipitation products, can be used as a practical way to provide downstream stakeholders with the fundamentally elusive upstream information on reservoir outflow needed to make important and proactive water management decisions. This study uses a mass balance approach over three hydrologic controls to estimate reservoir outflow from satellite data at monthly and annual time scales: precipitation-induced inflow, evaporation, and reservoir storage change. Furthermore, this study explores the importance of each of these hydrologic controls to the accuracy of outflow estimation. The hydrologic controls found to be unimportant could potentially be neglected in similar future studies. Two reservoirs were examined in contrasting regions of the world: the Hungry Horse Reservoir in a mountainous region of the northwest U.S. and the Kaptai Reservoir in a low-lying, forested region of Bangladesh. It was found that this mass balance method estimated the annual outflow of both reservoirs with reasonable skill. The estimation of monthly outflow from both reservoirs was, however, less accurate. The Kaptai basin exhibited a shift in basin behavior, resulting in variable accuracy across the 9-year study period. Monthly outflow estimation for Hungry Horse Reservoir was complicated by snow accumulation and melt processes, reflected in relatively low accuracy in summer and fall, when snow processes control runoff. Furthermore, it was found that the important hydrologic controls for reservoir outflow estimation at the monthly time scale differ between the two reservoirs, with precipitation-induced inflow being the most important control for the Kaptai Reservoir and storage change being the most important for Hungry Horse Reservoir.
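The mass balance at the heart of the method is one line: outflow is inferred from satellite-derived inflow, evaporation, and storage change. A sketch with hypothetical monthly volumes:

```python
# Q_out = Q_in - E - dS, all in consistent volume units per time step;
# the input values below are made up for illustration.
def reservoir_outflow(inflow, evaporation, storage_change):
    """Outflow inferred from the water balance of a reservoir."""
    return inflow - evaporation - storage_change

print(reservoir_outflow(inflow=420.0, evaporation=15.0, storage_change=60.0))
# -> 345.0 (e.g. million cubic meters released this month)
```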
Diagnostic reasoning and underlying knowledge of students with preclinical patient contacts in PBL.
Diemers, Agnes D; van de Wiel, Margje W J; Scherpbier, Albert J J A; Baarveld, Frank; Dolmans, Diana H J M
2015-12-01
Medical experts have access to elaborate and integrated knowledge networks consisting of biomedical and clinical knowledge. These coherent knowledge networks enable them to generate more accurate diagnoses in a shorter time. However, students' knowledge networks are less organised and students have difficulties linking theory and practice and transferring acquired knowledge. Therefore we wanted to explore the development and transfer of knowledge of third-year preclinical students on a problem-based learning (PBL) course with real patient contacts. Before and after a 10-week PBL course with real patients, third-year medical students were asked to think out loud while diagnosing four types of paper patient problems (two course cases and two transfer cases), and explain the underlying pathophysiological mechanisms of the patient features. Diagnostic accuracy and time needed to think through the cases were measured. The think-aloud protocols were transcribed verbatim and different types of knowledge were coded and quantitatively analysed. The written pathophysiological explanations were translated into networks of concepts. Both the concepts and the links between concepts in students' networks were compared to model networks. Over the course diagnostic accuracy increased, case-processing time decreased, and students used less biomedical and clinical knowledge during diagnostic reasoning. The quality of the pathophysiological explanations increased: the students used more concepts, especially more model concepts, and they used fewer wrong concepts and links. The findings differed across course and transfer cases. The effects were generally less strong for transfer cases. Students' improved diagnostic accuracy and the improved quality of their knowledge networks suggest that integration of biomedical and clinical knowledge took place during a 10-week course. The differences between course and transfer cases demonstrate that transfer is complex and time-consuming. We therefore suggest offering students many varied patient contacts with the same underlying pathophysiological mechanism and encouraging students to link biomedical and clinical knowledge. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
El Serafy, Ghada; Gaytan Aguilar, Sandra; Ziemba, Alexander
2016-04-01
There is increasing use of process-based models in the investigation of ecological systems and scenario predictions. The accuracy and quality of these models are improved when they are run with high spatial and temporal resolution data sets. However, ecological data can often be difficult to collect, which manifests itself in irregularities in the spatial and temporal domains of these data sets. Through the use of the Data INterpolating Empirical Orthogonal Functions (DINEOF) methodology, earth observation products can be improved to have full spatial coverage within the desired domain as well as increased temporal resolution, up to the daily and weekly time steps frequently required by process-based models [1]. The DINEOF methodology introduces a degree of error into the refined data product. In order to determine the degree of error introduced through this process, suspended particulate matter and chlorophyll-a data from MERIS are used with DINEOF to produce high-resolution products for the Wadden Sea. These new data sets are then compared with in-situ and other data sources to determine the error. Artificial cloud cover scenarios are also conducted in order to substantiate the findings from the MERIS data experiments. Secondly, the accuracy of DINEOF is explored to evaluate the variance of the methodology. The degree of accuracy is combined with the overall error produced by the methodology and reported in an assessment of the quality of DINEOF when applied to resolution refinement of chlorophyll-a and suspended particulate matter in the Wadden Sea. References [1] Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Lacroix, G.; Park, Y.; Nechad, B.; Ruddick, K.G.; Beckers, J.-M. (2011). Cloud filling of ocean colour and sea surface temperature remote sensing products over the Southern North Sea by the Data Interpolating Empirical Orthogonal Functions methodology. J. Sea Res. 65(1): 114-130. dx.doi.org/10.1016/j.seares.2010.08.002
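A bare-bones gap-filling loop in the spirit of the DINEOF methodology cited above: initialize gaps, then iterate truncated-SVD reconstruction until the filled values settle. Real DINEOF also cross-validates the number of retained modes; that step is omitted in this illustration:

```python
# EOF-style gap filling: fill gaps with the field mean, then repeatedly
# reconstruct from the leading SVD modes and update only the gap cells.
import numpy as np

def eof_fill(field, n_modes=2, n_iter=200):
    data = field.copy()
    gaps = np.isnan(data)
    data[gaps] = np.nanmean(field)               # first guess: field mean
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(data, full_matrices=False)
        recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
        data[gaps] = recon[gaps]                 # update only the gaps
    return data

rng = np.random.default_rng(4)
truth = np.outer(np.sin(np.linspace(0, 3, 30)), np.cos(np.linspace(0, 5, 40)))
obs = truth.copy()
obs[rng.random(truth.shape) < 0.3] = np.nan      # 30% synthetic "cloud" gaps
filled = eof_fill(obs, n_modes=1)
print(round(np.abs(filled - truth).max(), 4))    # small reconstruction error
```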
Prediction of recovery of motor function after stroke.
Stinear, Cathy
2010-12-01
Stroke is a leading cause of disability. The ability to live independently after stroke depends largely on the reduction of motor impairment and the recovery of motor function. Accurate prediction of motor recovery assists rehabilitation planning and supports realistic goal setting by clinicians and patients. Initial impairment is negatively related to degree of recovery, but inter-individual variability makes accurate prediction difficult. Neuroimaging and neurophysiological assessments can be used to measure the extent of stroke damage to the motor system and predict subsequent recovery of function, but these techniques are not yet used routinely. The use of motor impairment scores and neuroimaging has been refined by two recent studies in which these investigations were used at multiple time points early after stroke. Voluntary finger extension and shoulder abduction within 5 days of stroke predicted subsequent recovery of upper-limb function. Diffusion-weighted imaging within 7 days detected the effects of stroke on caudal motor pathways and was predictive of lasting motor impairment. Thus, investigations done soon after stroke had good prognostic value. The potential prognostic value of cortical activation and neural plasticity has been explored for the first time by two recent studies. Functional MRI detected a pattern of cortical activation at the acute stage that was related to subsequent reduction in motor impairment. Transcranial magnetic stimulation enabled measurement of neural plasticity in the primary motor cortex, which was related to subsequent disability. These studies open interesting new lines of enquiry. WHERE NEXT?: The accuracy of prediction might be increased by taking into account the motor system's capacity for functional reorganisation in response to therapy, in addition to the extent of stroke-related damage. Improved prognostic accuracy could also be gained by combining simple tests of motor impairment with neuroimaging, genotyping, and neurophysiological assessment of neural plasticity. The development of algorithms to guide the sequential combinations of these assessments could also further increase accuracy, in addition to improving rehabilitation planning and outcomes. Copyright © 2010 Elsevier Ltd. All rights reserved.
Increase in the Accuracy of Calculating Length of Horizontal Cable SCS in Civil Engineering
NASA Astrophysics Data System (ADS)
Semenov, A.
2017-11-01
A modification of the method for calculating horizontal cable consumption in structured cabling systems (SCS) installed in civil engineering facilities is proposed. The proposed procedure preserves the simplicity of its prototype while providing a 5 percent increase in accuracy. The achieved accuracy values are justified, and their agreement with the practice of real projects is demonstrated. The method is developed into an engineering algorithm and formalized as the 12/70 rule.
Correa, Katharina; Bangera, Rama; Figueroa, René; Lhorente, Jean P; Yáñez, José M
2017-01-31
Sea lice infestations caused by Caligus rogercresseyi are a main concern to the salmon farming industry due to associated economic losses. Resistance to this parasite was shown to have low to moderate genetic variation, and its genetic architecture was suggested to be polygenic. The aim of this study was to compare the accuracies of breeding value predictions obtained with pedigree-based best linear unbiased prediction (P-BLUP) methodology against different genomic prediction approaches: genomic BLUP (G-BLUP), Bayesian Lasso, and Bayes C. To achieve this, 2404 individuals from 118 families were measured for C. rogercresseyi count after a challenge and genotyped using 37 K single-nucleotide polymorphisms (SNPs). Accuracies were assessed using fivefold cross-validation and SNP densities of 0.5, 1, 5, 10, 25 and 37 K. Accuracy of genomic predictions increased with increasing SNP density and was higher than that of pedigree-based BLUP predictions by up to 22%. Both Bayesian and G-BLUP methods can predict breeding values with higher accuracies than pedigree-based BLUP; however, G-BLUP may be the preferred method because of reduced computation time and ease of implementation. A relatively low marker density (i.e. 10 K) is sufficient for a maximal increase in accuracy when using G-BLUP or Bayesian methods for genomic prediction of C. rogercresseyi resistance in Atlantic salmon.
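For readers comparing the approaches named here, the sketch below gives a generic textbook formulation of G-BLUP: a VanRaden genomic relationship matrix followed by BLUP of breeding values under an assumed heritability. It is not the study's analysis pipeline; the 0/1/2 genotype coding, the mean-only model, and the fixed h2 are illustrative assumptions (real analyses estimate variance components, e.g., by REML).

```python
import numpy as np

def vanraden_grm(M):
    """Genomic relationship matrix from 0/1/2 genotypes (VanRaden method 1)."""
    p = M.mean(axis=0) / 2.0              # allele frequencies per SNP
    Z = M - 2.0 * p                       # centred genotypes
    return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

def gblup_ebv(y, G, h2=0.3):
    """Genomic breeding values for a mean-only model y = mu + u + e.

    lam = sigma_e^2 / sigma_u^2 = (1 - h2) / h2, with h2 assumed here.
    """
    lam = (1.0 - h2) / h2
    n = len(y)
    # Solve (G + lam*I) alpha = y - mean(y); estimated breeding values = G @ alpha
    alpha = np.linalg.solve(G + lam * np.eye(n), y - np.mean(y))
    return G @ alpha
```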
Attribute Weighting Based K-Nearest Neighbor Using Gain Ratio
NASA Astrophysics Data System (ADS)
Nababan, A. A.; Sitompul, O. S.; Tulus
2018-04-01
K-Nearest Neighbor (KNN) is a good classifier, but several studies have found that its accuracy is still lower than that of other methods. One cause of this low accuracy is that each attribute has the same effect on the classification process, so less relevant attributes can lead to misclassification of new data. In this research, we propose an attribute-weighted K-Nearest Neighbor that uses the Gain Ratio as a parameter to quantify the relevance of each attribute in the data; the Gain Ratio is also used as the basis for weighting each attribute of the dataset. The resulting accuracy is compared with that of the original KNN method using 10-fold cross-validation on several datasets from the UCI Machine Learning Repository and the KEEL dataset repository: abalone, glass identification, haberman, hayes-roth, and water quality status. Based on the test results, the proposed method was able to increase the classification accuracy of KNN; the largest accuracy gain, 12.73%, was obtained on the hayes-roth dataset, and the smallest, 0.07%, on the abalone dataset. Averaged over all datasets, the method increased accuracy by 5.33%.
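The proposed weighting scheme lends itself to a compact sketch. The fragment below is a minimal illustration, assuming numeric attributes discretised by equal-width binning to estimate each attribute's gain ratio; the binning scheme, helper names, and choice of k are assumptions for illustration and may differ from the paper's exact procedure.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label vector."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, labels, n_bins=5):
    """Gain ratio of one (equal-width-binned) attribute w.r.t. the class."""
    edges = np.histogram_bin_edges(feature, bins=n_bins)
    bins = np.digitize(feature, edges[1:-1])
    h_y = entropy(labels)
    cond, split = 0.0, 0.0
    n = len(labels)
    for b in np.unique(bins):
        mask = bins == b
        w = mask.sum() / n
        cond += w * entropy(labels[mask])   # conditional entropy H(Y|X)
        split -= w * np.log2(w)             # split information
    info_gain = h_y - cond
    return info_gain / split if split > 0 else 0.0

def weighted_knn_predict(X_train, y_train, x, weights, k=3):
    """KNN vote using an attribute-weighted Euclidean distance."""
    d = np.sqrt((((X_train - x) ** 2) * weights).sum(axis=1))
    nearest = y_train[np.argsort(d)[:k]]
    return Counter(nearest).most_common(1)[0][0]
```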
NASA Astrophysics Data System (ADS)
Pranger, Lawrence A.
This research explored the processing and properties of polymer nanocomposites (PNCs) using a poly(furfuryl alcohol) (PFA) matrix. The precursor for PFA, furfuryl alcohol (FA), is sourced from feedstocks rich in hemicellulose, such as corn cobs, oat hulls and wood. To exploit FA as a polymerizable solvent, cellulose whiskers (CW) and montmorillonite clay (MMT) were used as the nanoparticle phase. Results from PNC processing show that CW and MMT can be dispersed in the PFA matrix by means of in situ polymerization, without the use of surfactants or dilution in solvents. Both CW and MMT nanoparticles catalyze the polymerization of furfuryl alcohol (FA). Moreover, the in situ intercalative polymerization of FA in the interlayer galleries of MMT leads to the complete exfoliation of the MMT in the PFA matrix. CW and MMT both function as effective matrix modifiers, increasing the thermal stability of PFA nanocomposites compared to pure PFA polymer. The increased thermal stability is seen as significant increases in the onset of degradation and in residual weight at high temperature. This research also explored the surface functionalization of Cu, Ni and Pt substrates by self-assembly of a range of difunctional linker molecules. Characterization by XPS and PM-IRRAS indicates that diisocyanides and dicarboxylic acids both form chemically "sticky" surfaces after self-assembly on Cu and Ni. Sticky surfaces may provide a means of increasing nanoparticle dispersion in metal nanocluster filled PNCs, by increasing their interaction with the matrix polymer. Another potential application for sticky surfaces on Cu is in the ongoing miniaturization of circuit boards. The functionalization of Cu bond pad substrates with linker molecules may provide an alternate means of bonding components to their bond pads, with higher placement accuracy compared to solder bumps.
Accuracy of genetic code translation and its orthogonal corruption by aminoglycosides and Mg2+ ions.
Zhang, Jingji; Pavlov, Michael Y; Ehrenberg, Måns
2018-02-16
We studied the effects of aminoglycosides and changing Mg2+ ion concentration on the accuracy of initial codon selection by aminoacyl-tRNA in ternary complex with elongation factor Tu and GTP (T3) on mRNA programmed ribosomes. Aminoglycosides decrease the accuracy by changing the equilibrium constants of 'monitoring bases' A1492, A1493 and G530 in 16S rRNA in favor of their 'activated' state by large, aminoglycoside-specific factors, which are the same for cognate and near-cognate codons. Increasing Mg2+ concentration decreases the accuracy by slowing dissociation of T3 from its initial codon- and aminoglycoside-independent binding state on the ribosome. The distinct accuracy-corrupting mechanisms for aminoglycosides and Mg2+ ions prompted us to re-interpret previous biochemical experiments and functional implications of existing high resolution ribosome structures. We estimate the upper thermodynamic limit to the accuracy, the 'intrinsic selectivity' of the ribosome. We conclude that aminoglycosides do not alter the intrinsic selectivity but reduce the fraction of it that is expressed as the accuracy of initial selection. We suggest that induced fit increases the accuracy and speed of codon reading at unaltered intrinsic selectivity of the ribosome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Yeon Soo; Jeong, G. Y.; Sohn, D. -S.
U-Mo/Al dispersion fuel is currently under development in the DOE's Material Management and Minimization program to convert HEU-fueled research reactors to LEU-fueled reactors. In some demanding conditions in high-power and high-performance reactors, large pores form in the interaction layers between the U-Mo fuel particles and the Al matrix, which pose a potential to cause fuel failure. In this study, the formation and growth of these pores were explored. As a product, a model to predict pore growth and porosity increase was developed. Well-characterized in-pile data from reduced-size plates were used to fit the model parameters. A data set of full-sized plates, independent and distinctively different from those used to fit the model parameters, was used to examine the accuracy of the model.
Cross-coupled control for all-terrain rovers.
Reina, Giulio
2013-01-08
Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors' control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors.
A simulator study on information requirements for precision hovering
NASA Technical Reports Server (NTRS)
Lemons, J. L.; Dukes, T. A.
1975-01-01
A fixed-base simulator study of an advanced helicopter instrument display utilizing translational acceleration, velocity, and position information is reported. The simulation involved piloting a heavy helicopter using the Integrated Trajectory Error Display (ITED) in a precision hover task. The test series explored two basic areas. The effect on hover accuracy of adding acceleration information was of primary concern. Also of interest was the operators' ability to use degraded information derived from less sophisticated sources. The addition of translational acceleration to a display containing velocity and position information did not appear to improve hover performance significantly. However, displayed acceleration information seemed to increase the damping of the man-machine system. Finally, the pilots could use translational information synthesized from attitude and angular acceleration as effectively as perfect acceleration information.
Classifying Structures in the ISM with Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Beaumont, Christopher; Goodman, A. A.; Williams, J. P.
2011-01-01
The processes which govern molecular cloud evolution and star formation often sculpt structures in the ISM: filaments, pillars, shells, outflows, etc. Because of their morphological complexity, these objects are often identified manually. Manual classification has several disadvantages; the process is subjective, not easily reproducible, and does not scale well to handle increasingly large datasets. We have explored to what extent machine learning algorithms can be trained to autonomously identify specific morphological features in molecular cloud datasets. We show that the Support Vector Machine algorithm can successfully locate filaments and outflows blended with other emission structures. When the objects of interest are morphologically distinct from the surrounding emission, this autonomous classification achieves >90% accuracy. We have developed a set of IDL-based tools to apply this technique to other datasets.
DONG, DAO-RAN; HAO, MEI-NA; LI, CHENG; PENG, ZE; LIU, XIA; WANG, GUI-PING; MA, AN-LIN
2015-01-01
The aim of the present study was to investigate the combination of certain serological markers (Forns’ index; FI), FibroScan® and acoustic radiation force impulse elastography (ARFI) in the assessment of liver fibrosis in patients with hepatitis B, and to explore the impact of inflammatory activity and steatosis on the accuracy of these diagnostic methods. Eighty-one patients who had been diagnosed with hepatitis B were recruited and the stage of fibrosis was determined by biopsy. The diagnostic accuracy of FI, FibroScan and ARFI, as well as that of the combination of these methods, was evaluated based on the conformity of the results from these tests with those of biopsies. The effect of concomitant inflammation on diagnostic accuracy was also investigated by dividing the patients into two groups based on the grade of inflammation (G<2 and G≥2). The overall univariate correlation between steatosis and the diagnostic value of the three methods was also evaluated. There was a significant association between the stage of fibrosis and the results obtained using ARFI and FibroScan (Kruskal-Wallis; P<0.001 for all patients), and FI (t-test, P<0.001 for all patients). The combination of FI with ARFI/FibroScan increased the predictive accuracy with a fibrosis stage of S≥2 or cirrhosis. There was a significant correlation between the grade of inflammation and the results obtained using ARFI and FibroScan (Kruskal-Wallis, P<0.001 for all patients), and FI (t-test; P<0.001 for all patients). No significant correlation was detected between the measurements obtained using ARFI, FibroScan and FI, and steatosis (r=−0.100, P=0.407; r=0.170, P=0.163; and r=0.154, P=0.216, respectively). ARFI was shown to be as effective in the diagnosis of liver fibrosis as FibroScan or FI, and the combination of ARFI or FibroScan with FI may improve the accuracy of diagnosis. The presence of inflammatory activity, but not that of steatosis, may affect the diagnostic accuracy of these methods. PMID:25651500
McMorris, Terry; Sproule, John; Turner, Anthony; Hale, Beverley J
2011-03-01
The purpose of this study was to compare, using meta-analytic techniques, the effect of acute, intermediate intensity exercise on the speed and accuracy of performance of working memory tasks. It was hypothesized that acute, intermediate intensity exercise would have a significant beneficial effect on response time and that effect sizes for response time and accuracy data would differ significantly. Random-effects meta-analysis showed a significant, beneficial effect size for response time, g=-1.41 (p<0.001) but a significant detrimental effect size, g=0.40 (p<0.01), for accuracy. There was a significant difference between effect sizes (Z(diff)=3.85, p<0.001). It was concluded that acute, intermediate intensity exercise has a strong beneficial effect on speed of response in working memory tasks but a low to moderate, detrimental one on accuracy. There was no support for a speed-accuracy trade-off. It was argued that exercise-induced increases in brain concentrations of catecholamines result in faster processing but increases in neural noise may negatively affect accuracy. 2010 Elsevier Inc. All rights reserved.
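As an illustration of the machinery behind a pooled random-effects estimate such as g=-1.41, the sketch below implements the standard DerSimonian-Laird computation. It is a generic example, not the authors' analysis code; the inputs are assumed to be per-study effect sizes and their sampling variances.

```python
import numpy as np

def random_effects_pool(g, v):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes.

    g : per-study effect sizes (e.g., Hedges' g); v : their sampling variances.
    Returns the pooled effect, its standard error, and tau^2 (between-study variance).
    """
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed) ** 2)           # Cochran's Q
    df = len(g) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # DL estimate of tau^2
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    g_re = np.sum(w_star * g) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return g_re, se, tau2
```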
EEG channels reduction using PCA to increase XGBoost's accuracy for stroke detection
NASA Astrophysics Data System (ADS)
Fitriah, N.; Wijaya, S. K.; Fanany, M. I.; Badri, C.; Rezal, M.
2017-07-01
In Indonesia, based on the results of the 2013 Basic Health Research survey, the prevalence of stroke increased from 8.3‰ (2007) to 12.1‰ (2013). Some researchers now use electroencephalography (EEG) results as an alternative for detecting stroke, besides CT scan images as the gold standard. A previous study on data from stroke and healthy patients at the National Brain Center Hospital (RS PON) used the Brain Symmetry Index (BSI), Delta-Alpha Ratio (DAR), and Delta-Theta-Alpha-Beta Ratio (DTABR) as features for classification by an Extreme Learning Machine (ELM), achieving 85% accuracy with sensitivity above 86% for acute ischemic stroke detection. Using EEG data means dealing with many data dimensions, which can reduce the accuracy of a classifier (the curse of dimensionality). Principal Component Analysis (PCA) can reduce dimensionality and computation cost without decreasing classification accuracy. XGBoost, a scalable tree boosting classifier, can solve real-world-scale problems (e.g., the Higgs boson and Allstate datasets) using a minimal amount of resources. This paper reuses the same RS PON data and features from the previous research, preprocessed with PCA and classified with XGBoost, to increase accuracy with fewer electrodes. Specific reduced electrode subsets improved the accuracy of stroke detection. Future work will examine algorithms other than PCA to achieve higher accuracy with fewer channels.
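The pipeline the abstract describes, PCA for dimensionality reduction followed by XGBoost classification, can be sketched with standard libraries. The fragment below is illustrative only: the feature matrix, labels, component count, and booster hyperparameters are placeholders, not values from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

# Placeholder EEG-derived features (e.g., BSI/DAR/DTABR per channel) and labels
X = np.random.rand(120, 64)           # 120 recordings x 64 features (illustrative)
y = np.random.randint(0, 2, 120)      # 0 = healthy, 1 = stroke (illustrative)

clf = make_pipeline(
    PCA(n_components=10),             # compress channel/feature dimensionality
    XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1),
)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```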
Risto, Malte; Martens, Marieke H
2014-07-01
With specific headway instructions, drivers are not able to attain the exact headways instructed. In this study, the effects of discrete headway feedback (and of the direction of headway adjustment) on headway accuracy were assessed experimentally for drivers carrying out time headway instructions. Two groups of 10 participants each (one receiving headway feedback; one control) carried out headway instructions in a driving simulator, increasing and decreasing their headway to a target headway of 2 s at speeds of 50, 80, and 100 km/h. The difference between the instructed and chosen headway was the measure of headway accuracy. The feedback group heard a sound signal at the moment they crossed the distance of the instructed headway. Unsupported participants showed no significant difference in headway accuracy when increasing or decreasing headways. Discrete headway feedback had varying effects on headway choice accuracy: when participants decreased their headway, feedback led to higher accuracy; when they increased their headway, feedback led to lower accuracy compared with no headway feedback. Support did not affect drivers' performance in maintaining the chosen headway. The present results suggest that (a) in its current form, discrete headway feedback is not sufficient to improve the overall accuracy of chosen headways when carrying out headway instructions; and (b) the effect of discrete headway feedback depends on the direction of headway adjustment. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Radio Galaxy Zoo: compact and extended radio source classification with deep learning
NASA Astrophysics Data System (ADS)
Lukic, V.; Brüggen, M.; Banfield, J. K.; Wong, O. I.; Rudnick, L.; Norris, R. P.; Simmons, B.
2018-05-01
Machine learning techniques have been increasingly useful in astronomical applications over the last few years, for example in the morphological classification of galaxies. Convolutional neural networks have proven to be highly effective in classifying objects in image data. In the context of radio-interferometric imaging in astronomy, we looked for ways to identify multiple components of individual sources. To this effect, we design a convolutional neural network to differentiate between different morphology classes using sources from the Radio Galaxy Zoo (RGZ) citizen science project. In this first step, we focus on exploring the factors that affect the performance of such neural networks, such as the amount of training data, number and nature of layers, and the hyperparameters. We begin with a simple experiment in which we only differentiate between two extreme morphologies, using compact and multiple-component extended sources. We found that a three-convolutional layer architecture yielded very good results, achieving a classification accuracy of 97.4 per cent on a test data set. The same architecture was then tested on a four-class problem where we let the network classify sources into compact and three classes of extended sources, achieving a test accuracy of 93.5 per cent. The best-performing convolutional neural network set-up has been verified against RGZ Data Release 1 where a final test accuracy of 94.8 per cent was obtained, using both original and augmented images. The use of sigma clipping does not offer a significant benefit overall, except in cases with a small number of training images.
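A three-convolutional-layer classifier of the general kind described can be sketched in Keras. The layer widths, input shape, and training configuration below are illustrative assumptions, not the paper's exact architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative 4-class model: compact sources plus three extended morphologies
model = keras.Sequential([
    layers.Input(shape=(120, 120, 1)),              # single-channel radio image
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),          # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```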
Davey, James A; Chica, Roberto A
2015-04-01
Computational protein design (CPD) predictions are highly dependent on the structure of the input template used. However, it is unclear how small differences in template geometry translate to large differences in stability prediction accuracy. Herein, we explored how structural changes to the input template affect the outcome of stability predictions by CPD. To do this, we prepared alternate templates by Rotamer Optimization followed by energy Minimization (ROM) and used them to recapitulate the stability of 84 protein G domain β1 mutant sequences. In the ROM process, side-chain rotamers for wild-type (WT) or mutant sequences are optimized on crystal or nuclear magnetic resonance (NMR) structures prior to template minimization, resulting in alternate structures termed ROM templates. We show that use of ROM templates prepared from sequences known to be stable results predominantly in improved prediction accuracy compared to using the minimized crystal or NMR structures. Conversely, ROM templates prepared from sequences that are less stable than the WT reduce prediction accuracy by increasing the number of false positives. These observed changes in prediction outcomes are attributed to differences in side-chain contacts made by rotamers in ROM templates. Finally, we show that ROM templates prepared from sequences that are unfolded or that adopt a nonnative fold result in the selective enrichment of sequences that are also unfolded or that adopt a nonnative fold, respectively. Our results demonstrate the existence of a rotamer bias caused by the input template that can be harnessed to skew predictions toward sequences displaying desired characteristics. © 2014 The Protein Society.
Impulsivity modulates performance under response uncertainty in a reaching task.
Tzagarakis, C; Pellizzer, G; Rogers, R D
2013-03-01
We sought to explore the interaction of the impulsivity trait with response uncertainty. To this end, we used a reaching task (Pellizzer and Hedges in Exp Brain Res 150:276-289, 2003) where a motor response direction was cued at different levels of uncertainty (1 cue, i.e., no uncertainty, 2 cues or 3 cues). Data from 95 healthy adults (54 F, 41 M) were analysed. Impulsivity was measured using the Barratt Impulsiveness Scale version 11 (BIS-11). Behavioral variables recorded were reaction time (RT), errors of commission (referred to as 'early errors') and errors of precision. Data analysis employed generalised linear mixed models and generalised additive mixed models. For the early errors, there was an interaction of impulsivity with uncertainty and gender, with increased errors for high impulsivity in the one-cue condition for women and the three-cue condition for men. There was no effect of impulsivity on precision errors or RT. However, the analysis of the effect of RT and impulsivity on precision errors showed a different pattern for high versus low impulsives in the high uncertainty (3 cue) condition. In addition, there was a significant early error speed-accuracy trade-off for women, primarily in low uncertainty and a 'reverse' speed-accuracy trade-off for men in high uncertainty. These results extend those of past studies of impulsivity which help define it as a behavioural trait that modulates speed versus accuracy response styles depending on environmental constraints and highlight once more the importance of gender in the interplay of personality and behaviour.
Simple and conditional visual discrimination with wheel running as reinforcement in rats.
Iversen, I H
1998-09-01
Three experiments explored whether access to wheel running is sufficient as reinforcement to establish and maintain simple and conditional visual discriminations in nondeprived rats. In Experiment 1, 2 rats learned to press a lit key to produce access to running; responding was virtually absent when the key was dark, but latencies to respond were longer than for customary food and water reinforcers. Increases in the intertrial interval did not improve the discrimination performance. In Experiment 2, 3 rats acquired a go-left/go-right discrimination with a trial-initiating response and reached an accuracy that exceeded 80%; when two keys showed a steady light, pressing the left key produced access to running whereas pressing the right key produced access to running when both keys showed blinking light. Latencies to respond to the lights shortened when the trial-initiation response was introduced and became much shorter than in Experiment 1. In Experiment 3, 1 rat acquired a conditional discrimination task (matching to sample) with steady versus blinking lights at an accuracy exceeding 80%. A trial-initiation response allowed self-paced trials as in Experiment 2. When the rat was exposed to the task for 19 successive 24-hr periods with access to food and water, the discrimination performance settled in a typical circadian pattern and peak accuracy exceeded 90%. When the trial-initiation response was under extinction, without access to running, the circadian activity pattern determined the time of spontaneous recovery. The experiments demonstrate that wheel-running reinforcement can be used to establish and maintain simple and conditional visual discriminations in nondeprived rats.
NASA Astrophysics Data System (ADS)
Ha, Minsu; Nehm, Ross H.
2016-06-01
Automated computerized scoring systems (ACSSs) are being increasingly used to analyze text in many educational settings. Nevertheless, the impact of misspelled words (MSW) on scoring accuracy remains to be investigated in many domains, particularly jargon-rich disciplines such as the life sciences. Empirical studies confirm that MSW are a pervasive feature of human-generated text and that despite improvements, spell-check and auto-replace programs continue to be characterized by significant errors. Our study explored four research questions relating to MSW and text-based computer assessments: (1) Do English language learners (ELLs) produce equivalent magnitudes and types of spelling errors as non-ELLs? (2) To what degree do MSW impact concept-specific computer scoring rules? (3) What impact do MSW have on computer scoring accuracy? and (4) Are MSW more likely to impact false-positive or false-negative feedback to students? We found that although ELLs produced twice as many MSW as non-ELLs, MSW were relatively uncommon in our corpora. The MSW in the corpora were found to be important features of the computer scoring models. Although MSW did not significantly or meaningfully impact computer scoring efficacy across nine different computer scoring models, MSW had a greater impact on the scoring algorithms for naïve ideas than key concepts. Linguistic and concept redundancy in student responses explains the weak connection between MSW and scoring accuracy. Lastly, we found that MSW tend to have a greater impact on false-positive feedback. We discuss the implications of these findings for the development of next-generation science assessments.
Marraccini, Marisa E.; Weyandt, Lisa L.; Rossi, Joseph S.; Gudmundsdottir, Bergljot Gyda
2016-01-01
Increasing numbers of adults, particularly college students, are misusing prescription stimulants primarily for cognitive/academic enhancement, so it is critical to explore whether empirical findings support neurocognitive benefits of prescription stimulants. Previous meta-analytic studies have supported small benefits from prescription stimulants for the cognitive domains of inhibitory control and memory; however, no meta-analytic studies have examined the effects on processing speed or the potential impairment on other domains of cognition, including planning, decision-making, and cognitive perseveration. Therefore, the present study conducted a meta-analysis of the available literature examining the effects of prescription stimulants on specific measures of processing speed, planning, decision-making, and cognitive perseveration among healthy adult populations. The meta-analysis results indicated a positive influence of prescription stimulant medication on processing speed accuracy, with an overall mean effect size of g = 0.282 (95% CI 0.077, 0.488; n = 345). Neither improvements nor impairments were revealed for planning time, planning accuracy, advantageous decision-making, or cognitive perseveration; however findings are limited by the small number of studies examining these outcomes. Findings support that prescription stimulant medication may indeed act as a neurocognitive enhancer for accuracy measures of processing speed without impeding other areas of cognition. Considering that adults are already engaging in illegal use of prescription stimulants for academic enhancement, as well as the potential for stimulant misuse to have serious side effects, the establishment of public policies informed by interdisciplinary research surrounding this issue, whether restrictive or liberal, is of critical importance. PMID:27454675
NASA Astrophysics Data System (ADS)
Kimuli, Daniel; Wang, Wei; Wang, Wei; Jiang, Hongzhe; Zhao, Xin; Chu, Xuan
2018-03-01
A short-wave infrared (SWIR) hyperspectral imaging system (1000-2500 nm) combined with chemometric data analysis was used to detect aflatoxin B1 (AFB1) on the surfaces of 600 kernels of four yellow maize varieties from different states of the USA (Georgia, Illinois, Indiana and Nebraska). For each variety, four AFB1 solutions (10, 20, 100 and 500 ppb) were artificially deposited on kernels, and a control group was generated from kernels treated with methanol solution. Principal component analysis (PCA), partial least squares discriminant analysis (PLSDA) and factorial discriminant analysis (FDA) were applied to explore and classify maize kernels according to AFB1 contamination. PCA results revealed partial separation of control kernels from AFB1-contaminated kernels for each variety, while no pattern of separation was observed among pooled samples. A combination of standard normal variate and first derivative pre-treatments produced the best PLSDA classification model, with accuracy of 100% and 96% in calibration and validation, respectively, for the Illinois variety. The best AFB1 classification results came from FDA on raw spectra, with accuracy of 100% in calibration and validation for the Illinois and Nebraska varieties. However, for both PLSDA and FDA models, poor AFB1 classification results were obtained for pooled samples relative to individual varieties. SWIR spectra combined with chemometrics and spectral pre-treatments showed the possibility of detecting maize kernels of different varieties coated with AFB1. The study further suggests that increases in maize kernel constituents such as water, protein, starch and lipid in a pooled sample may influence the detection accuracy of AFB1 contamination.
A Critical Review of Some Qualitative Research Methods Used to Explore Rater Cognition
ERIC Educational Resources Information Center
Suto, Irenka
2012-01-01
Internationally, many assessment systems rely predominantly on human raters to score examinations. Arguably, this facilitates the assessment of multiple sophisticated educational constructs, strengthening assessment validity. It can introduce subjectivity into the scoring process, however, engendering threats to accuracy. The present objectives…
ERIC Educational Resources Information Center
Schuster, Dwight
2008-01-01
Physical models in the classroom "cannot be expected to represent the full-scale phenomenon with complete accuracy, not even in the limited set of characteristics being studied" (AAAS 1990). Therefore, by modifying a popular classroom activity called a "planet walk," teachers can explore upper elementary students' current understandings; create an…
Middle School Children's Mathematical Reasoning and Proving Schemes
ERIC Educational Resources Information Center
Liu, Yating; Manouchehri, Azita
2013-01-01
In this work we explored proof schemes used by 41 middle school students when confronted with four mathematical propositions that demanded verification of accuracy of statements. The students' perception of mathematically complete vs. convincing arguments in different mathematics branches was also elicited. Lastly, we considered whether the…
Performance Metrics for Soil Moisture Retrievals and Applications Requirements
USDA-ARS?s Scientific Manuscript database
Quadratic performance metrics such as root-mean-square error (RMSE) and time series correlation are often used to assess the agreement between geophysical retrievals and true fields. These metrics are generally related; nevertheless, each has advantages and disadvantages. In this study we explore the relat...
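Generic definitions of the two quadratic metrics named here (illustrative only, not the manuscript's own code):

```python
import numpy as np

def rmse(estimate, truth):
    """Root-mean-square error between a retrieval and the true field."""
    e, t = np.asarray(estimate, float), np.asarray(truth, float)
    return np.sqrt(np.mean((e - t) ** 2))

def time_series_correlation(estimate, truth):
    """Pearson correlation between retrieved and true time series."""
    return np.corrcoef(estimate, truth)[0, 1]
```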
NASA Astrophysics Data System (ADS)
Han, Yan; Kun, Zhang; Jin, Wang
2016-07-01
Cognitive behaviors are determined by underlying neural networks. Many brain functions, such as learning and memory, have been successfully described by attractor dynamics. For decision making in the brain, a quantitative description of global attractor landscapes has not yet been completely given. Here, we developed a theoretical framework to quantify the landscape associated with the steady state probability distributions and associated steady state curl flux, measuring the degree of non-equilibrium through the degree of detailed balance breaking for decision making. We quantified the decision-making processes with optimal paths from the undecided attractor states to the decided attractor states, which are identified as basins of attractions, on the landscape. Both landscape and flux determine the kinetic paths and speed. The kinetics and global stability of decision making are explored by quantifying the landscape topography through the barrier heights and the mean first passage time. Our theoretical predictions are in agreement with experimental observations: more errors occur under time pressure. We quantitatively explored two mechanisms of the speed-accuracy tradeoff with speed emphasis and further uncovered the tradeoffs among speed, accuracy, and energy cost. Our results imply that there is an optimal balance among speed, accuracy, and the energy cost in decision making. We uncovered the possible mechanisms of changes of mind and how mind changes improve performance in decision processes. Our landscape approach can help facilitate an understanding of the underlying physical mechanisms of cognitive processes and identify the key factors in the corresponding neural networks. Project supported by the National Natural Science Foundation of China (Grant Nos. 21190040, 91430217, and 11305176).
Weyand, Sabine; Takehara-Nishiuchi, Kaori; Chau, Tom
2015-10-30
Near-infrared spectroscopy (NIRS) brain-computer interfaces (BCIs) enable users to interact with their environment using only cognitive activities. This paper presents the results of a comparison of four methodological frameworks used to select a pair of tasks to control a binary NIRS-BCI; specifically, three novel personalized task paradigms and the state-of-the-art prescribed task framework were explored. Three types of personalized task selection approaches were compared, including: user-selected mental tasks using weighted slope scores (WS-scores), user-selected mental tasks using pair-wise accuracy rankings (PWAR), and researcher-selected mental tasks using PWAR. These paradigms, along with the state-of-the-art prescribed mental task framework, where mental tasks are selected based on the most commonly used tasks in literature, were tested by ten able-bodied participants who took part in five NIRS-BCI sessions. The frameworks were compared in terms of their accuracy, perceived ease-of-use, computational time, user preference, and length of training. Most notably, researcher-selected personalized tasks resulted in significantly higher accuracies, while user-selected personalized tasks resulted in significantly higher perceived ease-of-use. It was also concluded that PWAR minimized the amount of data that needed to be collected; while, WS-scores maximized user satisfaction and minimized computational time. In comparison to the state-of-the-art prescribed mental tasks, our findings show that overall, personalized tasks appear to be superior to prescribed tasks with respect to accuracy and perceived ease-of-use. The deployment of personalized rather than prescribed mental tasks ought to be considered and further investigated in future NIRS-BCI studies. Copyright © 2015 Elsevier B.V. All rights reserved.
Fitzpatrick, Megan J; Mathewson, Paul D; Porter, Warren P
2015-01-01
Mechanistic models provide a powerful, minimally invasive tool for gaining a deeper understanding of the ecology of animals across geographic space and time. In this paper, we modified and validated the accuracy of the mechanistic model Niche Mapper for simulating heat exchanges of animals with counter-current heat exchange mechanisms in their legs and animals that wade in water. We then used Niche Mapper to explore the effects of wading and counter-current heat exchange on the energy expenditures of Whooping Cranes, a long-legged wading bird. We validated model accuracy against the energy expenditure of two captive Whooping Cranes measured using the doubly-labeled water method and time energy budgets. Energy expenditure values modeled by Niche Mapper were similar to values measured by the doubly-labeled water method and values estimated from time-energy budgets. Future studies will be able to use Niche Mapper as a non-invasive tool to explore energy-based limits to the fundamental niche of Whooping Cranes and apply this knowledge to management decisions. Basic questions about the importance of counter-current exchange and wading to animal physiological tolerances can also now be explored with the model.
Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City.
Yang, Wan; Olson, Donald R; Shaman, Jeffrey
2016-11-01
The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast.
NASA Astrophysics Data System (ADS)
Shafiee-Jood, M.; Cai, X.
2017-12-01
Advances in streamflow forecasts at different time scales offer promise for proactive flood management and improved risk management. Despite the huge potential, previous studies have found that water resources managers are often not willing to incorporate streamflow forecast information in decision making, particularly in risky situations. While the low accuracy of forecast information is often cited as the main reason, some studies have found that implementation of streamflow forecasts is sometimes impeded by institutional obstacles and behavioral factors (e.g., risk perception). In fact, a seminal study by O'Connor et al. (2005) found that risk perception is the strongest determinant of forecast use, while managers' perception of forecast reliability is not significant. In this study, we aim to address this issue again. However, instead of using survey data and regression analysis, we develop a theoretical framework to assess the user-perceived value of streamflow forecasts. The framework includes a novel behavioral component which incorporates both risk perception and perceived forecast reliability. The framework is then used in a hypothetical problem where a reservoir operator must react to probabilistic flood forecasts with different reliabilities. The framework will allow us to explore the interactions among risk perception and perceived forecast reliability, and among the behavioral components and information accuracy. The findings will provide insights to improve the usability of flood forecast information through better communication and education.
Increasing Accuracy in Computed Inviscid Boundary Conditions
NASA Technical Reports Server (NTRS)
Dyson, Roger
2004-01-01
A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number of time derivatives of surface-normal velocity (consistent with no flow through the boundary) up to arbitrarily high order. The corrections for the first-order spatial derivatives of pressure are calculated by use of the first-order time derivative of velocity. The corrected first-order spatial derivatives are used to calculate the second-order time derivatives of velocity, which, in turn, are used to calculate the corrections for the second-order pressure derivatives. The process as described is repeated, progressing through increasing orders of derivatives, until the desired accuracy is attained.
Taveira-Gomes, Tiago; Prado-Costa, Rui; Severo, Milton; Ferreira, Maria Amélia
2015-01-24
Spaced-repetition and test-enhanced learning are two methodologies that boost knowledge retention. ALERT STUDENT is a platform that allows creation and distribution of Learning Objects named flashcards, and provides insight into student judgments-of-learning through a metric called 'recall accuracy'. This study aims to understand how the spaced-repetition and test-enhanced learning features provided by the platform affect recall accuracy, and to characterize the effect that students, flashcards and repetitions exert on this measurement. Three spaced laboratory sessions (s0, s1 and s2) were conducted with n=96 medical students. The intervention employed a study task and a quiz task that consisted of mentally answering open-ended questions about each flashcard and grading recall accuracy. Students were randomized into study-quiz and quiz groups. On s0 both groups performed the quiz task. On s1 and s2, the study-quiz group performed the study task followed by the quiz task, whereas the quiz group only performed the quiz task. We measured differences in recall accuracy between groups/sessions, its variance components, and the G-coefficients for the flashcard component. At s0 there were no differences in recall accuracy between groups. The study-quiz group achieved a significant increase in recall accuracy that was superior to that of the quiz group on s1 and s2. In the study-quiz group, increases in recall accuracy were mainly due to session factors, followed by flashcard factors and student factors. In the quiz group, increases in recall accuracy were mainly accounted for by flashcard factors, followed by student and session factors. The flashcard G-coefficient indicated an agreement on recall accuracy of 91% in the quiz group, and of 47% in the study-quiz group. Recall accuracy is an easily collectible measurement that increases the educational value of Learning Objects and open-ended questions. This metric seems to vary in a way consistent with knowledge retention, but further investigation is necessary to ascertain the nature of such a relationship. Recall accuracy has educational implications for students and educators, and may contribute to delivering tailored learning experiences, assessing the effectiveness of instruction, and facilitating research comparing blended-learning interventions.
Scala, Carolina; Morlando, Maddalena; Familiari, Alessandra; Leone Roberti Maggiore, Umberto; Ferrero, Simone; D'Antonio, Francesco; Khalil, Asma
2017-01-01
Assessment of tricuspid flow has been reported to improve the performance of screening for aneuploidies and congenital heart defects (CHD). However, the performance of tricuspid regurgitation (TR) as a screening marker for CHD in euploid fetuses is yet to be established. The main aim of this meta-analysis was to establish the predictive accuracy of TR for CHD. MEDLINE, Embase, and the Cochrane Library were searched electronically utilizing combinations of the relevant medical subject headings for "fetus," "tricuspid regurgitation," and "first trimester." The outcomes explored were the prevalence of TR in a euploid population, the strength of association between TR and CHD, and the predictive accuracy of TR for CHD in euploid fetuses. Summary estimates of sensitivity, specificity, positive and negative likelihood ratios, and the diagnostic odds ratio for the overall predictive accuracy of TR for the detection of CHD were computed using the hierarchical summary receiver-operating characteristics model. A total of 452 articles were identified; 60 were assessed with respect to their eligibility for inclusion, and a total of 4 studies were included in the study. TR was associated with an increased risk of CHD (RR: 9.6, 95% CI 2.8-33.5; I2: 92.7%). The strength of association between TR and CHD persisted when considering fetuses at risk for CHD, such as those with increased nuchal translucency (RR: 7.2, 95% CI 5.2-9.8; I2: 0%), while TR did not show any association with CHD when detected in a population at low risk for cardiac defects (RR: 9.3, 95% CI 0.8-111.8; I2: 93%). The overall diagnostic performance of TR in detecting CHD was poor (sROC: 0.684, SE: 0.61), with a sensitivity of 35.2% (95% CI 26.9-44.1) and a specificity of 98.6% (95% CI 98.5-98.7). Detection of TR at the 11-14 weeks' scan showed a positive likelihood ratio of 7.2 (95% CI 5.3-9.8) in detecting CHD when applied to a population at risk for CHD, such as fetuses with increased nuchal translucency. The detection of TR in the first trimester is associated with an increased risk of CHD. However, isolated TR in the first trimester does not seem to be a strong predictor of CHD. © 2017 S. Karger AG, Basel.
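The screening metrics reported in this meta-analysis follow from standard 2x2-table formulas, sketched generically below. This is a textbook illustration, not a re-derivation of the pooled estimates above, which come from a hierarchical model rather than a single table.

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 table (generic formulas)."""
    sens = tp / (tp + fn)                 # sensitivity
    spec = tn / (tn + fp)                 # specificity
    lr_pos = sens / (1.0 - spec)          # positive likelihood ratio
    lr_neg = (1.0 - sens) / spec          # negative likelihood ratio
    dor = lr_pos / lr_neg                 # diagnostic odds ratio
    return {"sens": sens, "spec": spec, "LR+": lr_pos, "LR-": lr_neg, "DOR": dor}
```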
Durning, Steven J; Dong, Ting; Artino, Anthony R; van der Vleuten, Cees; Holmboe, Eric; Schuwirth, Lambert
2015-08-01
An ongoing debate exists in the medical education literature regarding the potential benefits of pattern recognition (non-analytic reasoning), actively comparing and contrasting diagnostic options (analytic reasoning), or a combination approach. Studies have not, however, explicitly explored faculty's thought processes while tackling clinical problems through the lens of dual process theory to inform this debate. Further, these thought processes have not been studied in relation to the difficulty of the task or other potential mediating influences such as personal factors and fatigue, which could also be influenced by personal factors such as sleep deprivation. We therefore sought to determine which reasoning process(es) were used when answering clinically oriented multiple-choice questions (MCQs) and whether these processes differed based on the dual process theory characteristics: accuracy, reading time and answering time, as well as psychometrically determined item difficulty and sleep deprivation. We performed a think-aloud procedure to explore faculty's thought processes while taking these MCQs, coding the think-aloud data by reasoning process (analytic, non-analytic, guessing, or a combination of processes) as well as word count, number of stated concepts, reading time, answering time, and accuracy. We also included questions regarding the amount of work in the recent past. We then conducted statistical analyses to examine the associations between these measures, such as correlations between the frequencies of reasoning processes and item accuracy and difficulty. We also tallied the total frequencies of the different reasoning processes for correctly and incorrectly answered questions. Regardless of whether the questions were classified as 'hard' or 'easy', non-analytical reasoning led to the correct answer more often than to an incorrect answer. Significant correlations were found between the self-reported recent number of hours worked and both think-aloud word count and the number of concepts used in the reasoning, but not item accuracy. When all MCQs were included, 19% of the variance in correctness could be explained by the frequency of expression of these three think-aloud processes (analytic, non-analytic, or combined). We found evidence to support the notion that the difficulty of an item in a test is not a systematic feature of the item itself but is always a result of the interaction between the item and the candidate. Use of analytic reasoning did not appear to improve accuracy. Our data suggest that individuals do not apply either System 1 or System 2 alone but instead fall along a continuum, with some individuals falling at one end of the spectrum.
The accuracy of new wheelchair users' predictions about their future wheelchair use.
Hoenig, Helen; Griffiths, Patricia; Ganesh, Shanti; Caves, Kevin; Harris, Frances
2012-06-01
This study examined the accuracy of new wheelchair user predictions about their future wheelchair use. This was a prospective cohort study of 84 community-dwelling veterans provided a new manual wheelchair. The association between predicted and actual wheelchair use was strong at 3 mos (ϕ coefficient = 0.56), with 90% of those who anticipated using the wheelchair at 3 mos still using it (i.e., positive predictive value = 0.96) and 60% of those who anticipated not using it indeed no longer using the wheelchair (i.e., negative predictive value = 0.60, overall accuracy = 0.92). Predictive accuracy diminished over time, with overall accuracy declining from 0.92 at 3 mos to 0.66 at 6 mos. At all time points, and for all types of use, patients better predicted use as opposed to disuse, with correspondingly higher positive than negative predictive values. Accuracy of prediction of use in specific indoor and outdoor locations varied according to location. This study demonstrates the importance of better understanding the potential mismatch between the anticipated and actual patterns of wheelchair use. The findings suggest that users can be relied upon to accurately predict their basic wheelchair-related needs in the short-term. Further exploration is needed to identify characteristics that will aid users and their providers in more accurately predicting mobility needs for the long-term.
Development and experiment of a broadband seismograph for deep exploration
NASA Astrophysics Data System (ADS)
Zhang, H.; Lin, J.; Yang, H.; Zheng, F.; Zhang, L.; Chen, Z.
2012-12-01
Seismic surveying is the most important method of deep exploration and oil-gas exploration. To obtain high-quality information on deeper strata in deep exploration, large explosive charges, large group intervals, and low-frequency detectors must be used, and survey lines are usually tens or even hundreds of kilometers long. Conventional seismic exploration instruments generally have no on-site storage, or only limited storage capacity, and are shackled by transmission cables; the systems are therefore bulky and difficult to handle, construction is inefficient, labor costs are high, and acquisition capability and accuracy are restricted. This article describes a high-performance broadband seismograph for deep exploration. To ensure acquisition quality, 24-bit ADCs are used and the low-noise analog front-end circuit is carefully designed, giving an instrument noise level below 1.5 uV and a dynamic range above 120 dB. A dual-frequency GPS OEM board is integrated into the acquisition station, so the station can perform static self-positioning with centimeter-level horizontal accuracy and can provide high-accuracy position data for subsequent seismic data processing. The precise GPS timing system is combined with a digital clock based on a high-precision oven-controlled crystal oscillator (OCXO), achieving clock synchronization accuracy of 0.01 ms and OCXO frequency stability of 3e-8, which solves the problems of synchronously triggering the data acquisition units of multiple recording units and of real-time calibration of system clock inaccuracy. The instrument uses a high-capacity (greater than 16 GB per station), highly reliable seismic data storage solution, enabling continuous recording for more than 138 hours at a sampling rate of 2000 sps. Low-power design techniques are applied to power management in both hardware and software; the average power consumption is 2 watts, and with a high-capacity internal lithium battery the seismograph can operate continuously for 80 hours. With an internal 24-bit DAC and FPGA control logic, a series of self-test items is implemented, including noise level, inter-channel crosstalk, common-mode rejection ratio, harmonic distortion, detector impedance, impulse response, and gain calibration. Because the instrument integrates a Wi-Fi module, instrument status and acquisition quality can be monitored in real time via hand-held terminals. To verify the reliability and validity of the instrument, a deep seismic exploration experiment using these instruments was carried out in a test area: 32 broadband seismographs were placed along a 120-km-long survey line (one roughly every 4 km) to record source signals from up to a few hundred kilometers away. The experimental results show that the performance of the analog acquisition channels reaches the international advanced level. Moreover, the cable-free design frees the instrument from bulky cables and meets the goal of lightening seismic instruments, which improves working efficiency, reduces surveying costs, and aids work in complex geographical and geological environments.
Accuracy of a class of concurrent algorithms for transient finite element analysis
NASA Technical Reports Server (NTRS)
Ortiz, Michael; Sotelino, Elisa D.; Nour-Omid, Bahram
1988-01-01
The accuracy of a new class of concurrent procedures for transient finite element analysis is examined. A phase error analysis is carried out which shows that wave retardation leading to unacceptable loss of accuracy may occur if a Courant condition based on the dimensions of the subdomains is violated. Numerical tests suggest that this Courant condition is conservative for typical structural applications and may lead to a marked increase in accuracy as the number of subdomains is increased. Theoretical speed-up ratios are derived which suggest that the algorithms under consideration can be expected to exhibit a performance superior to that of globally implicit methods when implemented on parallel machines.
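The abstract does not reproduce the condition itself; as orientation only, a Courant-type restriction keyed to a subdomain of characteristic dimension L_s rather than to the element size would take the generic form

```latex
\Delta t \;\le\; C \, \frac{L_s}{c},
```

with c the relevant wave speed and C an algorithm-dependent constant; the precise length measure and constant are defined in the paper.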
Learning to combine high variability with high precision: lack of transfer to a different task.
Wu, Yen-Hsun; Truglio, Thomas S; Zatsiorsky, Vladimir M; Latash, Mark L
2015-01-01
The authors studied effects of practicing a 4-finger accurate force production task on multifinger coordination quantified within the uncontrolled manifold hypothesis. During practice, task instability was modified by changing visual feedback gain based on accuracy of performance. The authors also explored the retention of these effects, and their transfer to a prehensile task. Subjects practiced the force production task for 2 days. After the practice, total force variability decreased and performance became more accurate. In contrast, variance of finger forces showed a tendency to increase during the first practice session while in the space of finger modes (hypothetical commands to fingers) the increase was under the significance level. These effects were retained for 2 weeks. No transfer of these effects to the prehensile task was seen, suggesting high specificity of coordination changes. The retention of practice effects without transfer to a different task suggests that further studies on a more practical method of improving coordination are needed.
Carter, William D.
1981-01-01
Launched in June 1978, Seasat operated for only 100 days, but successfully acquired much information over both sea and land. The collection of synthetic aperture radar (SAR) imagery and radar altimetry was particularly important to geologists. Although there are difficulties in processing and distributing these data in a timely manner, initial evaluations indicate that the radar imagery supplements Landsat data by increasing the spectral range and offering a different look angle. The radar altimeter provides accurate profiles over narrow strips of land (1 km wide) and has demonstrated usefulness in measuring icecap surfaces (Greenland, Iceland, and Antarctica). The Salar of Uyuni in southern Bolivia served as a calibration site for the altimeter and has enabled investigators to develop a land-based smoothing algorithm that is believed to increase the accuracy of the system to 10 cm. Data from the altimeter are currently being used to measure subsidence resulting from ground water withdrawal in the Phoenix-Tucson area.
Temperature dependence of thermal pressure for NaCl
NASA Astrophysics Data System (ADS)
Singh, Chandra K.; Pande, Brijesh K.; Pandey, Anjani K.
2018-05-01
Engineering applications of materials can be explored up to the desired limit of accuracy with better knowledge of their mechanical and thermal properties, such as ductility, brittleness, and thermal pressure. For resistance to fracture (K) and resistance to plastic deformation (G), the ratio K/G is treated as an indication of the ductile or brittle character of solids. In the present work we tested the condition of ductility and brittleness with the calculated values of K/G for NaCl. It is concluded that the nature of NaCl can be predicted up to high temperature simply with knowledge of its elastic stiffness constants. The thermoelastic properties of materials at high temperature are directly related to the thermal pressure and volume expansion of the materials. An expression for the temperature dependence of thermal pressure is formulated using basic thermodynamic identities. The thermal pressure ΔPth calculated for NaCl using the Kushwah formulation is in good agreement with experimental values, and the thermal pressure increases with increasing temperature.
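The abstract does not restate the expression; the standard thermodynamic identity from which such thermal-pressure formulations start is

```latex
\left(\frac{\partial P}{\partial T}\right)_{V} = \alpha K_T
\quad\Longrightarrow\quad
\Delta P_{\mathrm{th}} = \int_{T_0}^{T} \alpha K_T \,\mathrm{d}T ,
```

where α is the volume thermal expansion coefficient and K_T the isothermal bulk modulus; Kushwah-type formulations differ in how the product αK_T is modeled above the reference temperature T_0.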
The Extreme Ultraviolet Explorer mission
NASA Technical Reports Server (NTRS)
Malina, R. F.; Battel, S. J.
1989-01-01
The Extreme Ultraviolet Explorer (EUVE) mission will be the first user of NASA's new Explorer platform. The instrumentation included on this mission consists of three grazing incidence scanning telescopes, a deep survey instrument and an EUV spectrometer. The bandpass covered is 80 to 900 A. During the first six months of the mission, the scanning telescopes will be used to make all-sky maps in four bandpasses; astronomical sources will be detected and their positions determined to an accuracy of 0.1 deg. The deep survey instrument will survey the sky with higher sensitivity along the ecliptic in two bandpasses between 80 and 500 A. Engineering and design aspects of the science payload and features of the instrument design are described.
NASA Astrophysics Data System (ADS)
Wang, Kunpeng; Tan, Handong
2017-11-01
Controlled-source audio-frequency magnetotellurics (CSAMT) has developed rapidly in recent years and is widely used in mineral and oil resource exploration as well as other fields. Current theory, numerical simulation, and inversion research are based on the assumption that the underground media have isotropic resistivity. However, a large number of rock and mineral physical property tests show that the resistivity of underground media is generally anisotropic. With the increasing application of CSAMT, the accuracy demanded of practical exploration of complex targets continues to increase, and the question of how to evaluate the influence of anisotropic resistivity on the CSAMT response is becoming important. To meet the demand for CSAMT response research in media with anisotropic resistivity, this paper examines the CSAMT electric field equations and derives and implements a three-dimensional (3D) staggered-grid finite difference numerical simulation method for CSAMT with axially anisotropic resistivity. By building a two-dimensional (2D) resistivity-anisotropy geoelectric model, we validate the 3D computation result against the result of a 2D finite element program for the controlled-source electromagnetic method (CSEM) with anisotropic resistivity. By simulating a 3D axially anisotropic geoelectric model, we compare and analyze the responses of the equatorial configuration, the axial configuration, two oblique sources, and a tensor source. The research shows that the tensor source is suitable for CSAMT to recognize the anisotropic effect of underground structure.
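For orientation, axial resistivity anisotropy enters the frequency-domain Maxwell system through a diagonal conductivity tensor; a generic statement (not the paper's discretized equations) is

```latex
\nabla \times \mathbf{E} = \mathrm{i}\omega\mu\,\mathbf{H}, \qquad
\nabla \times \mathbf{H} = \boldsymbol{\sigma}\,\mathbf{E} + \mathbf{J}_s, \qquad
\boldsymbol{\sigma} = \operatorname{diag}\!\left(\rho_x^{-1},\, \rho_y^{-1},\, \rho_z^{-1}\right),
```

with the sign of the iωμ term set by the chosen time convention and J_s the controlled source; the staggered grid places field components so that each curl is differenced naturally.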
DNA-Free Genetically Edited Grapevine and Apple Protoplast Using CRISPR/Cas9 Ribonucleoproteins.
Malnoy, Mickael; Viola, Roberto; Jung, Min-Hee; Koo, Ok-Jae; Kim, Seokjoong; Kim, Jin-Soo; Velasco, Riccardo; Nagamangala Kanchiswamy, Chidananda
2016-01-01
The combined availability of whole genome sequences and genome editing tools is set to revolutionize the field of fruit biotechnology by enabling the introduction of targeted genetic changes with unprecedented control and accuracy, both to explore emergent phenotypes and to introduce new functionalities. Although plasmid-mediated delivery of genome editing components to plant cells is very efficient, it also presents some drawbacks, such as possible random integration of plasmid sequences into the host genome. Additionally, it may well be intercepted by current process-based GMO regulations, complicating the path to commercialization of improved varieties. Here, we explore direct delivery of purified CRISPR/Cas9 ribonucleoproteins (RNPs) into protoplasts of the grape cultivar Chardonnay and the apple cultivar Golden Delicious for efficient targeted mutagenesis. We targeted MLO-7, a susceptibility gene, in order to increase resistance to powdery mildew in grapevine, and DIPM-1, DIPM-2, and DIPM-4 in apple to increase resistance to fire blight disease. Furthermore, protoplast transformation efficiency and the molar ratio of Cas9 to sgRNA were optimized for each grape and apple cultivar. The rate of targeted mutagenesis insertions and deletions was analyzed using targeted deep sequencing. Our results demonstrate that direct delivery of CRISPR/Cas9 RNPs to protoplasts enables targeted gene editing and paves the way to the generation of DNA-free genome-edited grapevine and apple plants.
Accuracy of the Broselow Tape in South Sudan, "The Hungriest Place on Earth".
Clark, Melissa C; Lewis, Roger J; Fleischman, Ross J; Ogunniyi, Adedamola A; Patel, Dipesh S; Donaldson, Ross I
2016-01-01
The Broselow tape is a length-based tool used for the rapid estimation of pediatric weight and was developed to reduce dosage-related errors during emergencies. This study seeks to assess the accuracy of the Broselow tape and age-based formulas in predicting weights of South Sudanese children of varying nutritional status. This was a retrospective, cross-sectional study using data from existing acute malnutrition screening programs for children less than 5 years of age in South Sudan. Using anthropometric measurements, actual weights were compared with estimated weights from the Broselow tape and three age-based formulas. Mid-upper arm circumference was used to determine if each child was malnourished. Broselow accuracy was assessed by the percentage of measured weights falling into the same color zone as the predicted weight. For each method, accuracy was assessed by mean percentage error and percentage of predicted weights falling within 10% of actual weight. All data were analyzed by nutritional status subgroup. Only 10.7% of malnourished and 26.6% of nonmalnourished children had their actual weight fall within the Broselow color zone corresponding to their length. The Broselow method overestimated weight by a mean of 26.6% in malnourished children and 16.6% in nonmalnourished children (p < 0.001). Age-based formulas also overestimated weight, with mean errors ranging from 16.2% over actual weight (Advanced Pediatric Life Support in nonmalnourished children) to 70.9% over actual (Best Guess in severely malnourished children). The Broselow tape and age-based formulas selected for comparison were all markedly inaccurate in both the nonmalnourished and the malnourished populations studied, worsening with increasing malnourishment. Additional studies should explore appropriate methods of weight and dosage estimation for populations of low- and low-to-middle-income countries and regions with a high prevalence of malnutrition. © 2015 by the Society for Academic Emergency Medicine.
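The two accuracy measures used for the formula comparisons are simple to state in code; a minimal illustrative reimplementation (not the study's software), with made-up weights:

```python
import numpy as np

def weight_estimation_metrics(actual_kg, predicted_kg):
    """Mean percentage error and fraction of predictions within 10% of
    actual weight, the two measures described in the abstract."""
    actual = np.asarray(actual_kg, float)
    predicted = np.asarray(predicted_kg, float)
    pct_error = 100.0 * (predicted - actual) / actual
    return pct_error.mean(), np.mean(np.abs(pct_error) <= 10.0)

# Hypothetical children whose weights the tape overestimates:
mpe, within10 = weight_estimation_metrics([8.0, 10.0, 12.5], [10.1, 11.8, 13.0])
print(f"mean % error = {mpe:+.1f}%, within 10% of actual = {within10:.0%}")
```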
Swift Burst Alert Telescope (BAT) Instrument Response
NASA Technical Reports Server (NTRS)
Parsons, A.; Hullinger, D.; Markwardt, C.; Barthelmy, S.; Cummings, J.; Gehrels, N.; Krimm, H.; Tueller, J.; Fenimore, E.; Palmer, D.
2004-01-01
The Burst Alert Telescope (BAT), a large coded-aperture instrument with a wide field-of-view (FOV), provides the gamma-ray burst triggers and locations for the Swift Gamma-Ray Burst Explorer. In addition to providing this imaging information, BAT will perform a 15 keV - 150 keV all-sky hard x-ray survey based on the serendipitous pointings resulting from the study of gamma-ray bursts and will also monitor the sky for transient hard x-ray sources. For BAT to provide spectral and photometric information for the gamma-ray bursts, the transient sources and the all-sky survey, the BAT instrument response must be determined to increasingly greater accuracy. In this talk, we describe the BAT instrument response as determined to an accuracy suitable for gamma-ray burst studies. We also discuss the public data analysis tools developed to calculate the BAT response to sources at different energies and locations in the FOV. The level of accuracy required of the BAT instrument response used for the hard x-ray survey is significantly higher, because this response must be used in the iterative clean algorithm for finding fainter sources. Because the bright sources add a lot of coding noise to the BAT sky image, fainter sources can be seen only after the counts due to the bright sources are removed. The better we know the BAT response, the lower the noise in the cleaned spectrum and thus the more sensitive the survey. Since the BAT detector plane consists of 32768 individual, 4 mm square CZT gamma-ray detectors, the most accurate BAT response would include 32768 individual detector response functions to separate mask modulation effects from differences in detector efficiencies. We describe our continuing work to improve the accuracy of the BAT instrument response and present the current results of Monte Carlo simulations as well as BAT ground calibration data.
"Application of Tunable Diode Laser Spectrometry to Isotopic Studies for Exobiology"
NASA Technical Reports Server (NTRS)
Sauke, Todd B.
1999-01-01
Computer-controlled, electrically-activated valves for rapid gas handling have been incorporated into the Stable Isotope Laser Spectrometer (SILS), which now permits rapid filling and evacuation of the sample and reference gas cells. Experimental protocols have been developed to take advantage of the fast gas-handling capabilities of the instrument and to achieve the increased accuracy which results from reduced instrumental drift during rapid isotopic ratio measurements. Using these protocols, accuracies of 0.5 del (0.05%) have been achieved in measurements of 13C/12C in carbon dioxide. Using the small stable isotope laser spectrometer developed in a related PIDDP project of the Co-I, protocols for acquisition of rapid sequential calibration spectra were developed, which resulted in 0.5 del accuracy also being achieved in this less complex instrument. An initial version of software for automatic characterization of tunable diode lasers has been developed, and diodes have been characterized in order to establish their spectral output properties. A new state-of-the-art high-operating-temperature (200 K) mid-infrared diode laser was purchased (through NASA procurement) and characterized. A thermo-electrically cooled mid-infrared tunable diode laser system for use with high-temperature-operation lasers was developed. In addition to isotopic ratio measurements of carbon and oxygen, measurements of a third biologically important element (15N/14N in N2O gas) have been achieved to a preliminary accuracy of about 0.2%. Transfer of the basic SILS technology to the commercial sector is proceeding under an unfunded Space Act Agreement between NASA and SpiraMed, a medical diagnostic instrument company. Two patents have been issued; foreign patents based on these two US patents have been applied for and are expected to be issued. A preliminary design was developed for a thermo-electrically cooled SILS instrument for application to planetary space flight exploration missions.
Hatano, Aya; Ueno, Taiji; Kitagami, Shinji; Kawaguchi, Jun
2015-01-01
Verbal overshadowing refers to a phenomenon whereby verbalization of non-verbal stimuli (e.g., facial features) during the maintenance phase (after the target information is no longer available from the sensory inputs) impairs subsequent non-verbal recognition accuracy. Two primary mechanisms have been proposed for verbal overshadowing, namely the recoding interference hypothesis, and the transfer-inappropriate processing shift. The former assumes that verbalization renders non-verbal representations less accurate. In contrast, the latter assumes that verbalization shifts processing operations to a verbal mode and increases the chance of failing to return to non-verbal, face-specific processing operations (i.e., intact, yet inaccessible non-verbal representations). To date, certain psychological phenomena have been advocated as inconsistent with the recoding-interference hypothesis. These include a decline in non-verbal memory performance following verbalization of non-target faces, and occasional failures to detect a significant correlation between the accuracy of verbal descriptions and the non-verbal memory performance. Contrary to these arguments against the recoding interference hypothesis, however, the present computational model instantiated core processing principles of the recoding interference hypothesis to simulate face recognition, and nonetheless successfully reproduced these behavioral phenomena, as well as the standard verbal overshadowing. These results demonstrate the plausibility of the recoding interference hypothesis to account for verbal overshadowing, and suggest there is no need to implement separable mechanisms (e.g., operation-specific representations, different processing principles, etc.). In addition, detailed inspections of the internal processing of the model clarified how verbalization rendered internal representations less accurate and how such representations led to reduced recognition accuracy, thereby offering a computationally grounded explanation. Finally, the model also provided an explanation as to why some studies have failed to report verbal overshadowing. Thus, the present study suggests it is not constructive to discuss whether verbal overshadowing exists or not in an all-or-none manner, and instead suggests a better experimental paradigm to further explore this phenomenon. PMID:26061046
Xia, Jiaqi; Peng, Zhenling; Qi, Dawei; Mu, Hongbo; Yang, Jianyi
2017-03-15
Protein fold classification is a critical step in protein structure prediction. There are two possible ways to classify protein folds: one is template-based fold assignment and the other is ab-initio prediction using machine learning algorithms. The combination of both solutions to improve prediction accuracy had not been explored before. We developed two algorithms, HH-fold and SVM-fold, for protein fold classification. HH-fold is a template-based fold assignment algorithm using the HHsearch program. SVM-fold is a support vector machine-based ab-initio classification algorithm, in which a comprehensive set of features is extracted from three complementary sequence profiles. These two algorithms are then combined, resulting in the ensemble approach TA-fold. We performed a comprehensive assessment of the proposed methods by comparing with ab-initio methods and template-based threading methods on six benchmark datasets. An accuracy of 0.799 was achieved by TA-fold on the DD dataset, which consists of proteins from 27 folds; this represents an improvement of 5.4-11.7% over ab-initio methods. After updating this dataset to include more proteins from the same folds, the accuracy increased to 0.971. In addition, TA-fold achieved >0.9 accuracy on a large dataset consisting of 6451 proteins from 184 folds. Experiments on the LE dataset show that TA-fold consistently outperforms other threading methods at the family, superfamily and fold levels. The success of TA-fold is attributed to the combination of template-based fold assignment and ab-initio classification using features from complementary sequence profiles that contain rich evolutionary information. http://yanglab.nankai.edu.cn/TA-fold/. yangjy@nankai.edu.cn or mhb-506@163.com. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
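The abstract does not spell out the combination rule; one plausible minimal reading of a template-first ensemble (the threshold and interface below are our assumptions, not the authors' published logic) is:

```python
def ta_fold_predict(hh_fold, hh_confidence, svm_fold, threshold=0.9):
    """Sketch of a template-first ensemble: accept the HHsearch-based
    fold assignment when its confidence clears a threshold, otherwise
    fall back to the SVM's ab-initio prediction."""
    if hh_fold is not None and hh_confidence >= threshold:
        return hh_fold
    return svm_fold

print(ta_fold_predict("a.1", 0.97, "b.40"))  # "a.1"  (template trusted)
print(ta_fold_predict("a.1", 0.42, "b.40"))  # "b.40" (ab-initio fallback)
```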
Chandon, Pierre; Ordabayeva, Nailya
2017-02-01
Five studies show that people, including experts such as professional chefs, estimate quantity decreases more accurately than quantity increases. We argue that this asymmetry occurs because physical quantities cannot be negative. Consequently, there is a natural lower bound (zero) when estimating decreasing quantities but no upper bound when estimating increasing quantities, which can theoretically grow to infinity. As a result, the "accuracy of less" disappears (a) when a numerical or a natural upper bound is present when estimating quantity increases, or (b) when people are asked to estimate the (unbounded) ratio of change from 1 size to another for both increasing and decreasing quantities. Ruling out explanations related to loss aversion, symbolic number mapping, and the visual arrangement of the stimuli, we show that the "accuracy of less" influences choice and demonstrate its robustness in a meta-analysis that includes previously published results. Finally, we discuss how the "accuracy of less" may explain asymmetric reactions to the supersizing and downsizing of food portions, some instances of the endowment effect, and asymmetries in the perception of increases and decreases in physical and psychological distance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Increasing Free Throw Accuracy through Behavior Modeling and Goal Setting.
ERIC Educational Resources Information Center
Erffmeyer, Elizabeth S.
A two-year behavior-modeling training program focusing on attention processes, retention processes, motor reproduction, and motivation processes was implemented to increase the accuracy of free throw shooting for a varsity intercollegiate women's basketball team. The training included specific learning keys, progressive relaxation, mental…
The Role of Text Memory in Inferencing and in Comprehension Deficits
ERIC Educational Resources Information Center
Hua, Anh N.; Keenan, Janice M.
2014-01-01
Comprehension tests often compare accuracy on inferential versus literal questions and find inferential harder than literal, and poor comprehenders performing worse than controls. Difficulties in integration are assumed to be the reason. This research explores another reason--differences in memory for the passage information underlying the…
Mothers' Estimates of Their Children with Disorders of Language Development
ERIC Educational Resources Information Center
Willinger, Ulrike; Eisenwort, Brigitte
2005-01-01
The authors' objective in this article was to explore the accuracy of mothers' estimates concerning their children's developmental functioning, especially with respect to vocabulary and gross motor development, by comparing the results of diagnostic tests administered to both the children and their mothers. The authors studied 55 children with…
Validation of Automated Scoring of Science Assessments
ERIC Educational Resources Information Center
Liu, Ou Lydia; Rios, Joseph A.; Heilman, Michael; Gerard, Libby; Linn, Marcia C.
2016-01-01
Constructed response items can both measure the coherence of student ideas and serve as reflective experiences to strengthen instruction. We report on new automated scoring technologies that can reduce the cost and complexity of scoring constructed-response items. This study explored the accuracy of c-rater-ML, an automated scoring engine…
Voice Recognition Software Accuracy with Second Language Speakers of English.
ERIC Educational Resources Information Center
Coniam, D.
1999-01-01
Explores the potential of the use of voice-recognition technology with second-language speakers of English. Involves the analysis of the output produced by a small group of very competent second-language subjects reading a text into the voice recognition software Dragon Systems "Dragon NaturallySpeaking." (Author/VWL)
School-Aged Children's Phonological Production of Derived English Words
ERIC Educational Resources Information Center
Jarmulowicz, Linda
2006-01-01
Purpose: Little is known about the phonological aspects of derivational processes. Neutral suffixes (e.g., "-ness") that do not change stress and rhythmic or nonneutral suffixes (e.g., "-ity") that alter stem stress were used in a production task that explored developmental changes in phonological accuracy of derived English…
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-19
... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] AER Energy Resources, Inc.; Alto Group Holdings, Inc.; Bizrocket.Com Inc.; Fox Petroleum, Inc.; Geopulse Explorations Inc.; Global Technologies... accuracy of press releases concerning the company's revenues. 4. Fox Petroleum, Inc. is a Nevada...
Assessing Family Economic Status From Teacher Reports.
ERIC Educational Resources Information Center
Moskowitz, Joel M.; Hoepfner, Ralph
The utility of employing teacher reports about characteristics of students and their parents to assess family economic status was investigated using multiple regression analyses. The accuracy of teacher reports about parents' educational background was also explored, in addition to the effect of replacing missing data with logical, mean, or modal…
The Communicative Ability of Universiti Teknologi MARA Sarawak's Graduates
ERIC Educational Resources Information Center
Hassan, Sharifah Zakiah Wan; Hakim, Simon Faizal; Rahim, Mahdalela; Noyem, John Francis; Ibrahim, Sueb; Ahmad, Johnny; Jusoff, Kamaruzaman
2009-01-01
This study explores Universiti Teknologi MARA (UiTM) Sarawak graduating students' oral proficiency, focusing on grammatical accuracy. Oral proficiency in English has always been the benchmark of language proficiency, and in the context of UiTM's language teaching curriculum, efforts to enhance students' oral proficiency are implemented through…
Reliability and Validity of Curriculum-Based Informal Reading Inventories.
ERIC Educational Resources Information Center
Fuchs, Lynn; And Others
A study was conducted to explore the reliability and validity of three prominent procedures used in informal reading inventories (IRIs): (1) choosing a 95% word recognition accuracy standard for determining student instructional level, (2) arbitrarily selecting a passage to represent the difficulty level of a basal reader, and (3) employing…
Sentence Processing in High Proficient Kannada--English Bilinguals: A Reaction Time Study
ERIC Educational Resources Information Center
Ravi, Sunil Kumar; Chengappa, Shyamala K.
2015-01-01
The present study aimed at exploring the semantic and syntactic processing differences between native and second languages in 20 early high proficient Kannada--English bilingual adults through accuracy and reaction time (RT) measurements. Subjects participated in a semantic judgement task (using 50 semantically correct and 50 semantically…
Social Understanding of High-Ability Children in Middle and Late Childhood
ERIC Educational Resources Information Center
Boor-Klip, Henrike J.; Cillessen, Antonius H. N.; van Hell, Janet G.
2014-01-01
Despite its importance in social development, social understanding has hardly been studied in high-ability children. This study explores differences in social understanding between children in high-ability and regular classrooms, specifically theory of mind (ToM) and perception accuracy, as well as associations between individual characteristics…
Beyond Representation: Exploring Drawing as Part of Children's Meaning-Making
ERIC Educational Resources Information Center
Darling-McQuistan, Kirsten
2017-01-01
Drawing is an everyday feature of primary school classrooms. All too often however, its role within the classroom is limited to a "representational" one, used to demonstrate the accuracy of children's images and representations of the world. Furthermore, drawings, which most closely "match" objective, dominant perspectives are…
Adult Developmental Dyslexia in a Shallow Orthography: Are There Subgroups?
ERIC Educational Resources Information Center
Laasonen, Marja; Service, Elisabet; Lipsanen, Jari; Virsu, Veijo
2012-01-01
The existence and stability of subgroups among adult dyslexic readers of a shallow orthography was explored by comparing three different cluster analyses based on previously suggested combinations of two variables. These were oral reading speed versus accuracy, word versus pseudoword reading speed, and phonological awareness versus rapid naming.…
ERIC Educational Resources Information Center
Nehm, Ross H.; Ha, Minsu; Mayfield, Elijah
2012-01-01
This study explored the use of machine learning to automatically evaluate the accuracy of students' written explanations of evolutionary change. Performance of the Summarization Integrated Development Environment (SIDE) program was compared to human expert scoring using a corpus of 2,260 evolutionary explanations written by 565 undergraduate…
Gravity field, geoid and ocean surface by space techniques
NASA Technical Reports Server (NTRS)
Anderle, R. J.
1978-01-01
Knowledge of the earth's gravity field continued to increase during the last four years. Altimetry data from the GEOS-3 satellite have provided the geoid over most of the ocean to an accuracy of about one meter. Increasing amounts of laser data have permitted the solution for 566 terms in the gravity field, with which orbits of the GEOS-3 satellite have been computed to an accuracy of about one to two meters. The combination of satellite tracking data, altimetry, and gravimetry has yielded a solution for 1360 terms in the earth's gravity field. A number of problems remain to be solved to increase the accuracy of the gravity field determination. New satellite systems would provide gravity data in unsurveyed areas; corrections for topographic features of the ocean, improved computational procedures, and a more extensive laser network will considerably improve the accuracy of the results.
Sex discrimination potential of buccolingual and mesiodistal tooth dimensions.
Acharya, Ashith B; Mainali, Sneedha
2008-07-01
Tooth crown dimensions are reasonably accurate predictors of sex and are useful adjuncts in sex assessment. This study explores the utility of buccolingual (BL) and mesiodistal (MD) measurements in sex differentiation when used independently. BL and MD measurements of 28 teeth (third molars excluded) were obtained from a group of 53 Nepalese subjects (22 women and 31 men) aged 19-28 years. Stepwise discriminant analyses were undertaken separately for both types of tooth crown variables and their accuracy in sex classification compared with one another. MD dimensions had recognizably greater accuracy (77.4-83%) in sex identification than BL measurements (62.3-64.2%)--results that are consistent with previous reports. However, the accuracy of MD variables is not high enough to warrant their exclusive use in odontometric sex assessment--higher accuracy levels have been obtained when both types of dimensions were used concurrently, implying that BL variables contribute to sex assessment to some extent. Hence, it is inferred that optimal results in dental sex assessment are obtained when both MD and BL variables are used together.
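As a sketch of the kind of discriminant classification reported (scikit-learn's LinearDiscriminantAnalysis stands in for the stepwise procedure; the crown dimensions below are made up, not the Nepalese data):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Made-up MD crown dimensions (mm) for two teeth; men slightly larger on average.
X = np.vstack([rng.normal([10.6, 7.1], 0.4, (31, 2)),   # 31 men
               rng.normal([10.1, 6.8], 0.4, (22, 2))])  # 22 women
y = np.array([1] * 31 + [0] * 22)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(f"resubstitution accuracy: {lda.score(X, y):.1%}")
```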
Ozbek, Müge; Bindemann, Markus
2011-10-01
The identification of unfamiliar faces has been studied extensively with matching tasks, in which observers decide if pairs of photographs depict the same person (identity matches) or different people (mismatches). In experimental studies in this field, performance is usually self-paced under the assumption that this will encourage best-possible accuracy. Here, we examined the temporal characteristics of this task by limiting display times and tracking observers' eye movements. Observers were required to make match/mismatch decisions to pairs of faces shown for 200, 500, 1000, or 2000ms, or for an unlimited duration. Peak accuracy was reached within 2000ms and two fixations to each face. However, intermixing exposure conditions produced a context effect that generally reduced accuracy on identity mismatch trials, even when unlimited viewing of faces was possible. These findings indicate that less than 2s are required for face matching when exposure times are variable, but temporal constraints should be avoided altogether if accuracy is truly paramount. The implications of these findings are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures
NASA Astrophysics Data System (ADS)
Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.
2017-12-01
Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
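The Hellinger distance used here to compare statistical behaviour is straightforward to compute from normalized histograms; a minimal sketch with made-up bin counts:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Made-up histograms of a model diagnostic at full vs reduced precision.
full_precision    = [5, 20, 40, 25, 10]
reduced_precision = [6, 22, 37, 25, 10]
print(f"Hellinger distance: {hellinger(full_precision, reduced_precision):.4f}")
```

The distance is 0 for identical distributions and 1 for disjoint ones, which makes it a convenient dial for deciding how much precision can be shed.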
Botti, Lorenzo; Paliwal, Nikhil; Conti, Pierangelo; Antiga, Luca; Meng, Hui
2018-06-01
Image-based computational fluid dynamics (CFD) has shown potential to aid in the clinical management of intracranial aneurysms (IAs) but its adoption in the clinical practice has been missing, partially due to lack of accuracy assessment and sensitivity analysis. To numerically solve the flow-governing equations CFD solvers generally rely on two spatial discretization schemes: Finite Volume (FV) and Finite Element (FE). Since increasingly accurate numerical solutions are obtained by different means, accuracies and computational costs of FV and FE formulations cannot be compared directly. To this end, in this study we benchmark two representative CFD solvers in simulating flow in a patient-specific IA model: (1) ANSYS Fluent, a commercial FV-based solver and (2) VMTKLab multidGetto, a discontinuous Galerkin (dG) FE-based solver. The FV solver's accuracy is improved by increasing the spatial mesh resolution (134k, 1.1m, 8.6m and 68.5m tetrahedral element meshes). The dGFE solver accuracy is increased by increasing the degree of polynomials (first, second, third and fourth degree) on the base 134k tetrahedral element mesh. Solutions from best FV and dGFE approximations are used as baseline for error quantification. On average, velocity errors for second-best approximations are approximately 1cm/s for a [0,125]cm/s velocity magnitude field. Results show that high-order dGFE provide better accuracy per degree of freedom but worse accuracy per Jacobian non-zero entry as compared to FV. Cross-comparison of velocity errors demonstrates asymptotic convergence of both solvers to the same numerical solution. Nevertheless, the discrepancy between under-resolved velocity fields suggests that mesh independence is reached following different paths. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Yellen, H. W.
1983-03-01
Literature pertaining to Voice Recognition abounds with information relevant to the assessment of transitory speech recognition devices. In the past, engineering requirements have dictated the path this technology followed, but other factors also influence recognition accuracy. This thesis explores the impact of Human Factors on the successful recognition of speech, principally addressing the differences, or variability, among users. A Threshold Technology T-600 with a 100-utterance vocabulary was used to test 44 subjects. A statistical analysis was conducted on 5 generic categories of Human Factors: Occupational, Operational, Psychological, Physiological and Personal. How the equipment is trained and the experience level of the speaker were found to be the key characteristics influencing recognition accuracy. To a lesser extent, computer experience, time of week, accent, vital capacity and rate of air flow, and speaker cooperativeness and anxiety were found to affect overall error rates.
Demand behavior and empathic accuracy in observed conflict interactions in couples.
Hinnekens, Céline; Ickes, William; Schryver, Maarten De; Verhofstadt, Lesley L
2016-01-01
The study reported in this research note sought to extend the research on motivated empathic accuracy by exploring whether intimate partners who are highly motivated to induce change in their partner during conflicts will be more empathically accurate than partners who are less motivated. In a laboratory experiment, the partners within 26 cohabiting couples were randomly assigned the role of conflict initiator. The partners provided questionnaire data, participated in a videotaped conflict interaction, and completed a video-review task. More blaming behavior was associated with higher levels of empathic accuracy, irrespective of whether one was the conflict initiator or not. The results also showed a two-way interaction indicating that initiators who applied more pressure on their partners to change were less empathically accurate than initiators who applied less pressure, whereas their partners could counter this pressure when they could accurately "read" the initiator's thoughts and feelings.
Improving Fermi Orbit Determination and Prediction in an Uncertain Atmospheric Drag Environment
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Newman, Clark P.; Slojkowski, Steven E.; Carpenter, J. Russell
2014-01-01
Orbit determination and prediction of the Fermi Gamma-ray Space Telescope trajectory is strongly impacted by the unpredictability and variability of atmospheric density and the spacecraft's ballistic coefficient. Operationally, Global Positioning System point solutions are processed with an extended Kalman filter for orbit determination, and predictions are generated for conjunction assessment with secondary objects. When these predictions are compared to Joint Space Operations Center radar-based solutions, the close approach distance between the two predictions can greatly differ ahead of the conjunction. This work explores strategies for improving prediction accuracy and helps to explain the prediction disparities. Namely, a tuning analysis is performed to determine atmospheric drag modeling and filter parameters that can improve orbit determination as well as prediction accuracy. A 45% improvement in three-day prediction accuracy is realized by tuning the ballistic coefficient and atmospheric density stochastic models, measurement frequency, and other modeling and filter parameters.
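The abstract does not state the form of the stochastic models; a common choice for parameters such as the ballistic coefficient (an assumption here, not confirmed by the abstract) is a first-order Gauss-Markov process, whose discrete-time form is

```latex
x_{k+1} = e^{-\Delta t/\tau}\, x_k + w_k, \qquad
w_k \sim \mathcal{N}\!\left(0,\; \sigma^2\left(1 - e^{-2\Delta t/\tau}\right)\right),
```

so that tuning reduces to choosing the time constant τ and the steady-state variance σ².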
High School Physics Students' Personal Epistemologies and School Science Practice
NASA Astrophysics Data System (ADS)
Alpaslan, Muhammet Mustafa; Yalvac, Bugrahan; Loving, Cathleen
2017-11-01
This case study explores students' physics-related personal epistemologies in school science practices. The school science practices of nine eleventh-grade students in a physics class were audio-taped over 6 weeks. The students were also interviewed after each activity to find out their ideas on the nature of scientific knowledge. Analysis of transcripts yielded several epistemological resources that students activated in their school science practice. The findings show an inconsistency between students' definitions of scientific theories and their epistemological judgments. Analysis revealed that students used several epistemological resources to decide on the accuracy of their data, including accuracy via following the right procedure and accuracy via agreement with what others find. Traditional, formulation-based physics instruction might have led students to activate naive epistemological resources that prevent them from participating in the practice of science in more meaningful ways. Implications for future studies are presented.
Liquid electrolyte informatics using an exhaustive search with linear regression.
Sodeyama, Keitaro; Igarashi, Yasuhiko; Nakayama, Tomofumi; Tateyama, Yoshitaka; Okada, Masato
2018-06-14
Exploring new liquid electrolyte materials is a fundamental target for developing new high-performance lithium-ion batteries. In contrast to solid materials, disordered liquid solution properties have been less studied by data-driven information techniques. Here, we examined the estimation accuracy and efficiency of three information techniques, multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), and exhaustive search with linear regression (ES-LiR), by using coordination energy and melting point as test liquid properties. We then confirmed that ES-LiR gives the most accurate estimation among the techniques. We also found that ES-LiR can provide the relationship between the "prediction accuracy" and "calculation cost" of the properties via a weight diagram of descriptors. This technique makes it possible to choose the balance of the "accuracy" and "cost" when the search of a huge amount of new materials was carried out.
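A minimal sketch of exhaustive search with linear regression as we read it (fit ordinary least squares on every descriptor subset and rank subsets by cross-validated error; an illustration, not the authors' code):

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def es_lir(X, y, cv=5):
    """Exhaustive search over descriptor subsets with linear regression,
    ranked by cross-validated MSE. Practical only for modest descriptor
    counts, since the number of subsets grows as 2^d."""
    results = []
    for k in range(1, X.shape[1] + 1):
        for subset in combinations(range(X.shape[1]), k):
            mse = -cross_val_score(LinearRegression(), X[:, list(subset)], y,
                                   scoring="neg_mean_squared_error", cv=cv).mean()
            results.append((mse, subset))
    return sorted(results)  # lowest cross-validated MSE first

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))                                # 5 toy descriptors
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.1, 60)  # depends on 0 and 3
print(es_lir(X, y)[0])  # best subset should contain descriptors 0 and 3
```

Ranking all subsets rather than keeping a single winner is what makes the described weight-diagram view of accuracy versus descriptor cost possible.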
Rifai Chai; Naik, Ganesh R; Tran, Yvonne; Sai Ho Ling; Craig, Ashley; Nguyen, Hung T
2015-08-01
An electroencephalography (EEG)-based countermeasure device could be used for fatigue detection during driving. This paper explores the classification of fatigue and alert states using power spectral density (PSD) as a feature extractor and a fuzzy swarm-based artificial neural network (ANN) as a classifier. An independent component analysis by entropy rate bound minimization (ICA-ERBM) is investigated as a novel source separation technique for fatigue classification using EEG analysis. A comparison of the classification accuracy with versus without the source separator is presented. Classification based on 43 participants without the source separator resulted in an overall sensitivity of 71.67%, a specificity of 75.63% and an accuracy of 73.65%. These results improved after the inclusion of the source separator module, yielding an overall sensitivity of 78.16%, a specificity of 79.60% and an accuracy of 78.88% (p < 0.05).
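A sketch of the PSD feature-extraction stage (Welch's method via SciPy; the band edges and sampling rate are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.signal import welch

def bandpower_features(eeg, fs=256):
    """Mean Welch-PSD power per channel in the classic EEG bands."""
    bands = {"delta": (0.5, 4), "theta": (4, 8),
             "alpha": (8, 13), "beta": (13, 30)}
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
    return {name: psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
            for name, (lo, hi) in bands.items()}

eeg = np.random.randn(32, 256 * 10)  # 32 channels, 10 s of synthetic EEG
features = bandpower_features(eeg)
print({band: values.shape for band, values in features.items()})
```

These band powers (one vector per channel) would then be fed to the classifier, with or without the ICA-ERBM source-separation step in front.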
NASA Technical Reports Server (NTRS)
1975-01-01
The economic benefits of improved ocean condition, weather and ice forecasts by SEASAT satellites to the exploration, development and production of oil and natural gas in the offshore regions are considered. The results of case studies which investigate the effects of forecast accuracy on offshore operations in the North Sea, the Celtic Sea, and the Gulf of Mexico are reported. A methodology for generalizing the results to other geographic regions of offshore oil and natural gas exploration and development is described.
van der Esch, M; Knoop, J; Hunter, D J; Klein, J-P; van der Leeden, M; Knol, D L; Reiding, D; Voorneman, R E; Gerritsen, M; Roorda, L D; Lems, W F; Dekker, J
2013-05-01
Osteoarthritis (OA) of the knee is characterized by pain and activity limitations. In knee OA, proprioceptive accuracy is reduced and might be associated with pain and activity limitations. Although causes of reduced proprioceptive accuracy are divergent, medial meniscal abnormalities, which are highly prevalent in knee OA, have been suggested to play an important role. No study has focussed on the association between proprioceptive accuracy and meniscal abnormalities in knee OA. To explore the association between reduced proprioceptive accuracy and medial meniscal abnormalities in a clinical sample of knee OA subjects, a cross-sectional study was conducted in 105 subjects with knee OA. Knee proprioceptive accuracy was assessed by determining the joint motion detection threshold in the knee extension direction. The knee was imaged with a 3.0 T magnetic resonance (MR) scanner. The number of regions with medial meniscal abnormalities and the extent of abnormality in the anterior and posterior horn and body were scored according to the Boston-Leeds Osteoarthritis Knee Score (BLOKS) method. Multiple regression analyses were used to examine whether reduced proprioceptive accuracy was associated with medial meniscal abnormalities in knee OA subjects. Mean proprioceptive accuracy was 2.9° ± 1.9°. Magnetic resonance imaging (MRI)-detected medial meniscal abnormalities were found in the anterior horn (78%), body (80%) and posterior horn (90%). Reduced proprioceptive accuracy was associated with both the number of regions with meniscal abnormalities (P < 0.01) and the extent of abnormality (P = 0.02). These associations were not confounded by muscle strength, joint laxity, pain, age, gender, body mass index (BMI) and duration of knee complaints. This is the first study showing that reduced proprioceptive accuracy is associated with medial meniscal abnormalities in knee OA. The study highlights the importance of meniscal abnormalities in understanding reduced proprioceptive accuracy in persons with knee OA. Copyright © 2013 Osteoarthritis Research Society International. All rights reserved.
Neural network diagnosis of avascular necrosis from magnetic resonance images
NASA Astrophysics Data System (ADS)
Manduca, Armando; Christy, Paul S.; Ehman, Richard L.
1993-09-01
We have explored the use of artificial neural networks to diagnose avascular necrosis (AVN) of the femoral head from magnetic resonance images. We have developed multi-layer perceptron networks, trained with conjugate gradient optimization, which diagnose AVN from single sagittal images of the femoral head with 100% accuracy on the training data and 97% accuracy on test data. These networks use only the raw image as input (with minimal preprocessing to average the images down to 32 X 32 size and to scale the input data values) and learn to extract their own features for the diagnosis decision. Various experiments with these networks are described.
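A minimal stand-in for the described setup (scikit-learn's MLPClassifier replaces the paper's conjugate-gradient-trained multi-layer perceptron, and the images and labels below are synthetic):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Stand-ins for 32 x 32 averaged, scaled sagittal MR slices, flattened.
X = rng.random((120, 32 * 32))
y = rng.integers(0, 2, 120)  # 1 = AVN, 0 = normal (made-up labels)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.1%}")
```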
JASMINE design and method of data reduction
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Niwa, Yoshito
2008-07-01
Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with 10 μarcsec accuracy. We use a z-band CCD to avoid dust absorption and observe an area of about 10 × 20 degrees around the Galactic bulge. Because the stellar density is very high, the individual fields of view can be combined with high accuracy, and with 5 years of observation we will construct a map accurate to 10 μarcsec. In this poster, I show the observation strategy, the design of the JASMINE hardware, the reduction scheme, and the error budget. We have also developed simulation software named the JASMINE Simulator, and show simulation results and the design of the software.
Liu, Dan; Xie, Lixin; Zhao, Haiyan; Liu, Xueyao; Cao, Jie
2016-05-26
The early identification of patients at risk of dying from community-acquired pneumonia (CAP) is critical for their treatment and for defining hospital resource consumption. Mid-regional pro-adrenomedullin (MR-proADM) has been extensively investigated for its prognostic value in CAP. However, the results are conflicting. The purpose of the present meta-analysis was to explore the diagnostic accuracy of MR-proADM for predicting mortality in patients suffering from CAP, particularly emergency department (ED) patients. We systematically searched the PubMed, Embase, Web of Knowledge and Cochrane databases. Studies were included if a 2 × 2 contingency table could be constructed based on both the MR-proADM level and the complications or mortality of patients diagnosed with CAP. The prognostic accuracy of MR-proADM in CAP was assessed using the bivariate meta-analysis model. We used the Q-test and I² index to evaluate heterogeneity. MR-proADM displayed moderate diagnostic accuracy for predicting complications in CAP, with an overall area under the SROC curve (AUC) of 0.74 (95% CI: 0.70-0.78). Eight studies with a total of 4119 patients in the emergency department (ED) were included. An elevated MR-proADM level was associated with increased risk of death from CAP (RR 6.16, 95% CI 4.71-8.06); the I² value was 0.0%, and a fixed-effects model was used to pool RR. The pooled sensitivity and specificity were 0.74 (95% CI: 0.67-0.79) and 0.73 (95% CI: 0.70-0.77), respectively. The positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were 2.8 (95% CI, 2.3-3.3) and 0.36 (95% CI, 0.29-0.45), respectively. In addition, the diagnostic odds ratio (DOR) was 8 (95% CI, 5-11), and the overall area under the SROC curve was 0.76 (95% CI, 0.72-0.80). Our study has demonstrated that MR-proADM is predictive of increased complications and higher mortality rates in patients suffering from CAP. Future studies are warranted to determine the prognostic accuracy of MR-proADM in conjunction with severity scores or other biomarkers and to determine an optimal cut-off level.
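The pooled summary statistics follow directly from a 2 × 2 table; an illustrative reimplementation (the counts below are made up to land near the pooled values and are not the study data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, likelihood ratios and diagnostic odds
    ratio from a 2 x 2 contingency table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    plr = sens / (1 - spec)
    nlr = (1 - sens) / spec
    return {"sens": sens, "spec": spec, "PLR": plr, "NLR": nlr, "DOR": plr / nlr}

print(diagnostic_metrics(tp=74, fp=27, fn=26, tn=73))
# sens 0.74, spec 0.73, PLR ~2.7, NLR ~0.36, DOR ~7.7
```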
A design space exploration for control of Critical Quality Attributes of mAb.
Bhatia, Hemlata; Read, Erik; Agarabi, Cyrus; Brorson, Kurt; Lute, Scott; Yoon, Seongkyu
2016-10-15
A unique "design space (DSp) exploration strategy," defined as a function of four key scenarios, was successfully integrated and validated to enhance the DSp building exercise, by increasing the accuracy of analyses and interpretation of processed data. The four key scenarios, defining the strategy, were based on cumulative analyses of individual models developed for the Critical Quality Attributes (23 Glycan Profiles) considered for the study. The analyses of the CQA estimates and model performances were interpreted as (1) Inside Specification/Significant Model (2) Inside Specification/Non-significant Model (3) Outside Specification/Significant Model (4) Outside Specification/Non-significant Model. Each scenario was defined and illustrated through individual models of CQA aligning the description. The R(2), Q(2), Model Validity and Model Reproducibility estimates of G2, G2FaGbGN, G0 and G2FaG2, respectively, signified the four scenarios stated above. Through further optimizations, including the estimation of Edge of Failure and Set Point Analysis, wider and accurate DSps were created for each scenario, establishing critical functional relationship between Critical Process Parameters (CPPs) and Critical Quality Attributes (CQAs). A DSp provides the optimal region for systematic evaluation, mechanistic understanding and refining of a QbD approach. DSp exploration strategy will aid the critical process of consistently and reproducibly achieving predefined quality of a product throughout its lifecycle. Copyright © 2016 Elsevier B.V. All rights reserved.
Accuracy of genetic code translation and its orthogonal corruption by aminoglycosides and Mg2+ ions
Zhang, Jingji
2018-01-01
Abstract We studied the effects of aminoglycosides and changing Mg2+ ion concentration on the accuracy of initial codon selection by aminoacyl-tRNA in ternary complex with elongation factor Tu and GTP (T3) on mRNA programmed ribosomes. Aminoglycosides decrease the accuracy by changing the equilibrium constants of ‘monitoring bases’ A1492, A1493 and G530 in 16S rRNA in favor of their ‘activated’ state by large, aminoglycoside-specific factors, which are the same for cognate and near-cognate codons. Increasing Mg2+ concentration decreases the accuracy by slowing dissociation of T3 from its initial codon- and aminoglycoside-independent binding state on the ribosome. The distinct accuracy-corrupting mechanisms for aminoglycosides and Mg2+ ions prompted us to re-interpret previous biochemical experiments and functional implications of existing high resolution ribosome structures. We estimate the upper thermodynamic limit to the accuracy, the ‘intrinsic selectivity’ of the ribosome. We conclude that aminoglycosides do not alter the intrinsic selectivity but reduce the fraction of it that is expressed as the accuracy of initial selection. We suggest that induced fit increases the accuracy and speed of codon reading at unaltered intrinsic selectivity of the ribosome. PMID:29267976
The uncertainty of crop yield projections is reduced by improved temperature response functions
USDA-ARS?s Scientific Manuscript database
Increasing the accuracy of crop productivity estimates is a key element in planning adaptation strategies to ensure global food security under climate change. Process-based crop models are effective means to project climate impact on cr...
Does oxytocin lead to emotional interference during a working memory paradigm?
Tollenaar, Marieke S; Ruissen, M; Elzinga, B M; de Bruijn, E R A
2017-12-01
Oxytocin administration may increase attention to emotional information. We hypothesized that this augmented emotional processing might in turn lead to interference on concurrent cognitive tasks. To test this hypothesis, we examined whether oxytocin administration would lead to heightened emotional interference during a working memory paradigm. Additionally, moderating effects of childhood maltreatment were explored. Seventy-eight healthy males received 24 IU of intranasal oxytocin or placebo in a randomized placebo-controlled double-blind between-subjects study. A working memory task was performed during which neutral, positive, and negative distractors were presented. The main outcome observed was that oxytocin did not enhance interference by emotional information during the working memory task. There was a non-significant trend for oxytocin to slow down performance irrespective of distractor valence, while accuracy was unaffected. Exploratory analyses showed that childhood maltreatment was related to lower overall accuracy, but in the placebo condition only. However, the maltreated group sample size was very small precluding any conclusions on its moderating effect. Despite oxytocin's previously proposed role in enhanced emotional processing, no proof was found that this would lead to reduced performance on a concurrent cognitive task. The routes by which oxytocin exerts its effects on cognitive and social-emotional processes remain to be fully elucidated.
Quantifying sub-pixel urban impervious surface through fusion of optical and inSAR imagery
Yang, L.; Jiang, L.; Lin, H.; Liao, M.
2009-01-01
In this study, we explored the potential to improve urban impervious surface modeling and mapping with the synergistic use of optical and Interferometric Synthetic Aperture Radar (InSAR) imagery. We used a Classification and Regression Tree (CART)-based approach to test the feasibility and accuracy of quantifying Impervious Surface Percentage (ISP) using four spectral bands of SPOT 5 high-resolution geometric (HRG) imagery and three parameters derived from the European Remote Sensing (ERS)-2 Single Look Complex (SLC) SAR image pair. Validated by an independent ISP reference dataset derived from the 33 cm-resolution digital aerial photographs, results show that the addition of InSAR data reduced the ISP modeling error rate from 15.5% to 12.9% and increased the correlation coefficient from 0.71 to 0.77. Spatially, the improvement is especially noted in areas of vacant land and bare ground, which were incorrectly mapped as urban impervious surfaces when using the optical remote sensing data. In addition, the accuracy of ISP prediction using InSAR images alone is only marginally less than that obtained by using SPOT imagery. The finding indicates the potential of using InSAR data for frequent monitoring of urban settings located in cloud-prone areas.
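The CART-based ISP estimation can be sketched as follows, using scikit-learn's DecisionTreeRegressor as a stand-in for the study's CART implementation; all arrays are synthetic placeholders, not the SPOT/ERS data:

```python
# CART-style regression of Impervious Surface Percentage (ISP) from
# stacked optical + InSAR predictors. Feature arrays are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels = 5000
# 4 SPOT 5 spectral bands + 3 InSAR-derived parameters per pixel
X = rng.random((n_pixels, 7))
# Synthetic reference ISP (0-100%) with some dependence on the predictors
y = np.clip(100 * X[:, :4].mean(axis=1) + rng.normal(0, 5, n_pixels), 0, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeRegressor(max_depth=10, min_samples_leaf=20)
tree.fit(X_tr, y_tr)

pred = tree.predict(X_te)
mae = np.mean(np.abs(pred - y_te))  # mean absolute ISP error (%)
r = np.corrcoef(pred, y_te)[0, 1]   # correlation coefficient, as reported
print(f"MAE: {mae:.1f}%   r: {r:.2f}")
```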
NASA Astrophysics Data System (ADS)
Bonnema, M.; Hossain, F.
2016-12-01
The Mekong River Basin is undergoing rapid hydropower development. Nine dams are planned on the main stem of the Mekong and many more on its extensive tributaries. Understanding the effects that current and future dams have on the river system and water cycle as a whole is vital for the millions of people living in the basin. Reservoir residence time, the amount of time water spends stored in a reservoir, is a key parameter in investigating these impacts. The forthcoming Surface Water and Ocean Topography (SWOT) mission is poised to provide an unprecedented amount of surface water observations. SWOT, when augmented by current satellite missions, will provide the necessary information to estimate the residence time of reservoirs across the entire basin in a more comprehensive way than ever before. In this study, we first combine observations from current satellite missions (altimetry, spectral imaging, precipitation) to estimate the residence times of existing reservoirs. We then use this information to project how future reservoirs will increase the residence time of the river system. Next, we explore how SWOT observations can be used to improve residence time estimation by examining the accuracy of reservoir surface area and elevation observations as well as the accuracy of river discharge observations.
NASA Astrophysics Data System (ADS)
van Oosterom, Matthias Nathanaël; Engelen, Myrthe Adriana; van den Berg, Nynke Sjoerdtje; KleinJan, Gijs Hendrik; van der Poel, Henk Gerrit; Wendler, Thomas; van de Velde, Cornelis Jan Hadde; Navab, Nassir; van Leeuwen, Fijs Willem Bernhard
2016-08-01
Robot-assisted laparoscopic surgery is becoming an established technique for prostatectomy and is increasingly being explored for other types of cancer. Linking intraoperative imaging techniques, such as fluorescence guidance, with the three-dimensional insights provided by preoperative imaging remains a challenge. Navigation technologies may provide a solution, especially when directly linked to both the robotic setup and the fluorescence laparoscope. We evaluated the feasibility of such a setup. Preoperative single-photon emission computed tomography/X-ray computed tomography (SPECT/CT) or intraoperative freehand SPECT (fhSPECT) scans were used to navigate an optically tracked robot-integrated fluorescence laparoscope via an augmented reality overlay in the laparoscopic video feed. The navigation accuracy was evaluated in soft tissue phantoms, followed by studies in a human-like torso phantom. Navigation accuracies found for SPECT/CT-based navigation were 2.25 mm (coronal) and 2.08 mm (sagittal). For fhSPECT-based navigation, these were 1.92 mm (coronal) and 2.83 mm (sagittal). All errors remained below the 1-cm detection limit for fluorescence imaging, allowing refinement of the navigation process using fluorescence findings. The phantom experiments performed suggest that SPECT-based navigation of the robot-integrated fluorescence laparoscope is feasible and may aid fluorescence-guided surgery procedures.
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
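The two discretization methods compared here can be sketched in one dimension; this is a generic illustration of cell-center sampling versus cell integration (via the error function), not the study's 2-D implementation:

```python
# Two ways to discretize a 1-D Gaussian dispersal kernel on a grid:
# sampling the density at cell centers vs. integrating it over each cell.
# Both kernels are normalized to sum to one.
import numpy as np
from math import erf, sqrt

def kernel_cell_center(sigma, half_width):
    x = np.arange(-half_width, half_width + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def kernel_cell_integrated(sigma, half_width):
    x = np.arange(-half_width, half_width + 1)
    s = sigma * sqrt(2.0)
    # Exact Gaussian mass falling in each unit cell [x-0.5, x+0.5]
    k = np.array([0.5 * (erf((xi + 0.5) / s) - erf((xi - 0.5) / s))
                  for xi in x])
    return k / k.sum()

# For small sigma the two discretizations diverge noticeably:
for sigma in (0.12, 0.22, 1.0, 10.0):
    hw = max(5, int(4 * sigma))
    cc = kernel_cell_center(sigma, hw)
    ci = kernel_cell_integrated(sigma, hw)
    print(f"sigma={sigma:5.2f}  max|difference|={np.abs(cc - ci).max():.3e}")
```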
An automatic iris occlusion estimation method based on high-dimensional density estimation.
Li, Yung-Hui; Savvides, Marios
2013-04-01
Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that the Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation.
Osborne, Nikola K P; Taylor, Michael C; Healey, Matthew; Zajac, Rachel
2016-03-01
It is becoming increasingly apparent that contextual information can exert a considerable influence on decisions about forensic evidence. Here, we explored accuracy and contextual influence in bloodstain pattern classification, and how these variables might relate to analyst characteristics. Thirty-nine bloodstain pattern analysts with varying degrees of experience each completed measures of compliance, decision-making style, and need for closure. Analysts then examined a bloodstain pattern without any additional contextual information, and allocated votes to listed pattern types according to favoured and less favoured classifications. Next, if they believed it would assist with their classification, analysts could request items of contextual information - from commonly encountered sources of information in bloodstain pattern analysis - and update their vote allocation. We calculated a shift score for each item of contextual information based on vote reallocation. Almost all forms of contextual information influenced decision-making, with medical findings leading to the highest shift scores. Although there was a small positive association between shift scores and the degree to which analysts displayed an intuitive decision-making style, shift scores did not vary meaningfully as a function of experience or the other characteristics measured. Almost all of the erroneous classifications were made by novice analysts. Copyright © 2016 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
Detection of inter-frame forgeries in digital videos.
K, Sitara; Mehtre, B M
2018-05-26
Videos are acceptable as evidence in a court of law, provided their authenticity and integrity are scientifically validated. Videos recorded by surveillance systems are susceptible to malicious alterations of visual content by perpetrators, locally or remotely. Such malicious alterations of video contents (called video forgeries) are categorized into inter-frame and intra-frame forgeries. In this paper, we propose inter-frame forgery detection techniques using tamper traces from the spatio-temporal and compressed domains. Pristine videos containing frames recorded during a sudden camera zooming event may be wrongly classified as tampered videos, leading to an increase in false positives. To address this issue, we propose a method for zooming detection and incorporate it into video tampering detection. Frame shuffling detection, which had not been explored so far, is also addressed in our work. Our method is capable of differentiating various inter-frame tamper events and localizing them in the temporal domain. The proposed system is tested on 23,586 videos, of which 2,346 are pristine and the rest are candidates for inter-frame forged videos. Experimental results show that we successfully detected frame shuffling with encouraging accuracy rates. We also achieved improved accuracy on forgery detection in frame insertion, frame deletion, and frame duplication. Copyright © 2018. Published by Elsevier B.V.
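One common spatio-temporal cue for inter-frame tampering, a sharp drop in correlation between consecutive frames at an insertion point, can be sketched as below. This is a generic illustration, not the authors' method, and the frames are synthetic:

```python
# Flag frame pairs whose correlation falls far below the video's norm,
# a simple cue for frame insertion/deletion. Frames are synthetic.
import numpy as np

rng = np.random.default_rng(3)
frames = [rng.random((48, 64))]
for _ in range(59):  # smooth, continuous video: tiny changes per frame
    frames.append(np.clip(frames[-1] + rng.normal(0, 0.01, (48, 64)), 0, 1))
frames[30] = rng.random((48, 64))  # simulate an inserted foreign frame

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

scores = np.array([corr(frames[i], frames[i + 1]) for i in range(59)])
suspects = np.where(scores < scores.mean() - 3 * scores.std())[0]
print("suspected tamper points between frame indices:", suspects)
```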
Machine Learning for Big Data: A Study to Understand Limits at Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R.; Del-Castillo-Negrete, Carlos Emilio
This report aims to empirically understand the limits of machine learning when applied to Big Data. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical data mining and machine learning under more scrutiny, evaluation, and application for gleaning insights from the data than ever before. Much is expected from algorithms without understanding their limitations at scale while dealing with massive datasets. In that context, we pose and address the following questions: How does a machine learning algorithm perform on measures such as accuracy and execution time with increasing sample size and feature dimensionality? Does training with more samples guarantee better accuracy? How many features should be computed for a given problem? Do more features guarantee better accuracy? Are the efforts to derive and calculate more features and to train on larger samples worth the cost? As problems become more complex and traditional binary classification algorithms are replaced with multi-task, multi-class categorization algorithms, do parallel learners perform better? What happens to the accuracy of the learning algorithm when trained to categorize multiple classes within the same feature space? Towards finding answers to these questions, we describe the design of an empirical study and present the results. We conclude with the following observations: (i) accuracy of the learning algorithm increases with increasing sample size but saturates at a point, beyond which more samples do not contribute to better accuracy/learning; (ii) the richness of the feature space dictates performance - both accuracy and training time; (iii) increased dimensionality is often reflected in better performance (higher accuracy in spite of longer training times), but the improvements are not commensurate with the efforts for feature computation and training; (iv) accuracy of the learning algorithms drops significantly with multi-class learners training on the same feature matrix; and (v) learning algorithms perform well when categories in labeled data are independent (i.e., no relationship or hierarchy exists among categories).
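The report's first question, how accuracy and training time scale with sample size, can be probed with a toy learning-curve experiment; the dataset and classifier here are stand-ins, not the study's workloads:

```python
# Learning-curve probe: accuracy and fit time as the sample size grows.
# Expect accuracy to saturate while training time keeps climbing.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=60_000, n_features=50,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=10_000,
                                          random_state=0)

for n in (500, 2_000, 8_000, 32_000, 50_000):
    clf = LogisticRegression(max_iter=1000)
    t0 = time.perf_counter()
    clf.fit(X_tr[:n], y_tr[:n])
    dt = time.perf_counter() - t0
    acc = clf.score(X_te, y_te)
    print(f"n={n:>6}  accuracy={acc:.3f}  fit_time={dt:.2f}s")
```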
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christian, Mark H; Hadjerioua, Boualem; Lee, Kyutae
2015-01-01
The following paper presents the results of an investigation into the impact of the number and placement of Current Meter (CM) flow sensors on the accuracy with which they are capable of predicting the overall flow rate. Flow measurement accuracy is of particular importance in multiunit plants because it plays a pivotal role in determining the operational efficiency characteristics of each unit, allowing the operator to select the unit (or combination of units) that most efficiently meets demand. Several case studies have demonstrated that optimization of unit dispatch has the potential to increase plant efficiencies by between 1 and 4.4 percent [2] [3]. Unfortunately, current industry standards do not have an established methodology to measure the flow rate through hydropower units with short converging intakes (SCI); the only direction provided is that CM sensors should be used. The most common application of CM is horizontal, along a trolley which is incrementally lowered across a measurement cross section. As such, the measurement resolution is defined horizontally and vertically by the number of CM and the number of measurement increments, respectively. There has not been any published research on the role of resolution in either direction on the accuracy of flow measurement. The work below investigates the effectiveness of flow measurement in a SCI by performing a case study in which point velocity measurements were extracted from a physical plant and then used to calculate a series of reference flow distributions. These distributions were then used to perform sensitivity studies on the relation between the number of CM and the accuracy with which the flow rate was predicted. The research uncovered that a minimum of 795 plants contain SCI, a quantity which represents roughly 12% of total domestic hydropower capacity. In regard to measurement accuracy, it was determined that accuracy ceases to increase considerably with further increases in vertical resolution beyond the application of 49 transects. Moreover, the research uncovered that the application of 5 CM (when applied at 49 vertical transects) resulted in an average accuracy of 95.6%, and the application of additional sensors resulted in a linear increase in accuracy up to 17 CM, which had an average accuracy of 98.5%. Beyond 17 CM, incremental increases in accuracy due to the addition of CM were found to decrease exponentially. Future work in this area will investigate the use of computational fluid dynamics to acquire a broader range of flow fields within SCI.
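The underlying computation, integrating point velocities over the measurement cross section to obtain a flow rate, can be sketched as follows; the velocity field is synthetic and the grid layout (rows as trolley stops, columns as meters) is an assumption:

```python
# Estimate total flow rate Q = integral of v dA over the cross section
# from a grid of current-meter point velocities.
import numpy as np

n_transects, n_meters = 49, 5           # vertical x horizontal resolution
width, depth = 6.0, 8.0                 # cross-section dimensions (m)

y = np.linspace(0, width, n_meters)     # meter positions across the intake
z = np.linspace(0, depth, n_transects)  # trolley stop depths

# Synthetic velocity field (m/s): fastest near the center, zero at walls
Y, Z = np.meshgrid(y, z)
v = 2.0 * np.sin(np.pi * Y / width) * np.sin(np.pi * Z / depth)

# Double trapezoidal integration across meters, then across transects
q = np.trapz(np.trapz(v, y, axis=1), z)
print(f"Estimated flow rate: {q:.2f} m^3/s")
```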
Faes, Jolien; Gillis, Joris; Gillis, Steven
2016-01-01
Phonemic accuracy of children with cochlear implants (CI) is often reported to be lower in comparison with normally hearing (NH) age-matched children. In this study, we compare phonemic accuracy development in the spontaneous speech of Dutch-speaking children with CI and NH age-matched peers. A dynamic cost model of Levenshtein distance is used to compute the accuracy of each word token. We set up a longitudinal design with monthly data for comparisons up to age two and a cross-sectional design with yearly data between three and five years of age. The main finding is that phonemic accuracy steadily increases throughout the period studied. The accuracy of children with CI is lower than that of their NH age-mates, but this difference is not statistically significant in the earliest stages of lexical development. However, the accuracy of children with CI initially improves significantly less steeply than that of their NH peers. Furthermore, the number of syllables in the target word and the target word's complexity influence children's accuracy, as longer and more complex target words are less accurately produced. Up to age four, children with CI are significantly less accurate than NH children with increasing word length and word complexity. This difference has disappeared at age five. Finally, hearing age is shown to influence accuracy development of children with CI, while age of implant activation does not. This article informs the reader about phonemic accuracy development in children. The reader will be able to (a) discuss different metrics to measure phonemic accuracy development, (b) discuss phonemic accuracy of children with CI up to five years of age and compare them with NH children, (c) discuss the influence of target word's complexity and target word's syllable length on phonemic accuracy, (d) discuss the influence of hearing experience and age of implantation on phonemic accuracy of children with CI. Copyright © 2015 Elsevier Inc. All rights reserved.
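The accuracy metric reduces to one minus a normalized edit distance between the produced and target phoneme sequences. A minimal sketch with unit edit costs (the study used a dynamic, feature-weighted cost model):

```python
# Word-level phonemic accuracy via Levenshtein distance.
def levenshtein(a, b):
    """Classic dynamic-programming edit distance with unit costs."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def phonemic_accuracy(production, target):
    """Accuracy in [0, 1]: 1 means an exact match with the target word."""
    return 1.0 - levenshtein(production, target) / max(len(target), 1)

# Target /pus/ produced as /pu/ scores 1 - 1/3 = 0.667:
print(phonemic_accuracy(list("pu"), list("pus")))
```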
The systematic component of phylogenetic error as a function of taxonomic sampling under parsimony.
Debry, Ronald W
2005-06-01
The effect of taxonomic sampling on phylogenetic accuracy under parsimony is examined by simulating nucleotide sequence evolution. Random error is minimized by using very large numbers of simulated characters. This allows estimation of the consistency behavior of parsimony, even for trees with up to 100 taxa. Data were simulated on 8 distinct 100-taxon model trees and analyzed as stratified subsets containing either 25 or 50 taxa, in addition to the full 100-taxon data set. Overall accuracy decreased in a majority of cases when taxa were added. However, the magnitude of change in the cases in which accuracy increased was larger than the magnitude of change in the cases in which accuracy decreased, so, on average, overall accuracy increased as more taxa were included. A stratified sampling scheme was used to assess accuracy for an initial subsample of 25 taxa. The 25-taxon analyses were compared to 50- and 100-taxon analyses that were pruned to include only the original 25 taxa. On average, accuracy for the 25 taxa was improved by taxon addition, but there was considerable variation in the degree of improvement among the model trees and across different rates of substitution.
Compassion meditation enhances empathic accuracy and related neural activity
Mascaro, Jennifer S.; Rilling, James K.; Tenzin Negi, Lobsang; Raison, Charles L.
2013-01-01
The ability to accurately infer others’ mental states from facial expressions is important for optimal social functioning and is fundamentally impaired in social cognitive disorders such as autism. While pharmacologic interventions have shown promise for enhancing empathic accuracy, little is known about the effects of behavioral interventions on empathic accuracy and related brain activity. This study employed a randomized, controlled and longitudinal design to investigate the effect of a secularized analytical compassion meditation program, cognitive-based compassion training (CBCT), on empathic accuracy. Twenty-one healthy participants received functional MRI scans while completing an empathic accuracy task, the Reading the Mind in the Eyes Test (RMET), both prior to and after completion of either CBCT or a health discussion control group. Upon completion of the study interventions, participants randomized to CBCT were significantly more likely than control subjects to have increased scores on the RMET and increased neural activity in the inferior frontal gyrus (IFG) and dorsomedial prefrontal cortex (dmPFC). Moreover, changes in dmPFC and IFG activity from baseline to the post-intervention assessment were associated with changes in empathic accuracy. These findings suggest that CBCT may hold promise as a behavioral intervention for enhancing empathic accuracy and the neurobiology supporting it. PMID:22956676
Estis, Julie M; Dean-Claytor, Ashli; Moore, Robert E; Rowell, Thomas L
2011-03-01
The effects of musical interference and noise on pitch-matching accuracy were examined. Vocal training was explored as a factor influencing pitch-matching accuracy, and the relationship between pitch matching and pitch discrimination was examined. Twenty trained singers (TS) and 20 untrained individuals (UT) vocally matched tones in six conditions (immediate, four types of chords, noise). Fundamental frequencies were calculated, compared with the frequency of the target tone, and converted to semitone difference scores. A pitch discrimination task was also completed. TS showed significantly better pitch matching than UT across all conditions. Individual performances for UT were highly variable; therefore, untrained participants were divided into two groups: 10 untrained accurate and 10 untrained inaccurate. Comparison of TS with untrained accurate individuals revealed significant differences between groups and across conditions. Compared with immediate vocal matching of target tones, pitch-matching accuracy was significantly reduced under musical chord and noise interference, unless the target tone was presented in the musical chord. A direct relationship between pitch matching and pitch discrimination was revealed. Across pitch-matching conditions, TS were consistently more accurate than UT. Pitch-matching accuracy diminished when auditory interference consisted of noise or of chords that did not contain the target tone. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Zheng, Leilei; Chai, Hao; Chen, Wanzhen; Yu, Rongrong; He, Wei; Jiang, Zhengyan; Yu, Shaohua; Li, Huichun; Wang, Wei
2011-12-01
Early parental bonding experiences play a role in emotion recognition and expression in later adulthood, and patients with personality disorder frequently experience inappropriate parental bonding styles; therefore, the aim of the present study was to explore whether parental bonding style is correlated with recognition of facial emotion in personality disorder patients. The Parental Bonding Instrument (PBI) and the Matsumoto and Ekman Japanese and Caucasian Facial Expressions of Emotion (JACFEE) photo set tests were carried out in 289 participants. Patients scored lower on the parental Care subscale but higher on the parental Freedom Control and Autonomy Denial subscales, and they displayed less accuracy when recognizing contempt, disgust and happiness than the healthy volunteers. In healthy volunteers, maternal Autonomy Denial significantly predicted accuracy when recognizing fear, and maternal Care predicted the accuracy of recognizing sadness. In patients, paternal Care negatively predicted the accuracy of recognizing anger, paternal Freedom Control predicted the perceived intensity of contempt, and maternal Care predicted the accuracy of recognizing sadness and the intensity of disgust. Parental bonding styles have an impact on the decoding process and sensitivity when recognizing facial emotions, especially in personality disorder patients. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.
Newman, Rochelle S; Bernstein Ratner, Nan
2007-02-01
The purpose of this study was to investigate whether lexical access in adults who stutter (AWS) differs from that in people who do not stutter. Specifically, the authors examined the role of 3 lexical factors on naming speed, accuracy, and fluency: word frequency, neighborhood density, and neighborhood frequency. If stuttering results from an impairment in lexical access, these factors were hypothesized to differentially affect AWS performance on a confrontation naming task. Twenty-five AWS and 25 normally fluent comparison speakers, matched for age and education, participated in a confrontation naming task designed to explore within-speaker performance on naming accuracy, speed, and fluency based on stimulus word frequency and neighborhood characteristics. Accuracy, fluency, and reaction time (from acoustic waveform analysis) were computed. In general, AWS demonstrated the same effects of lexical factors on their naming as did adults who do not stutter. However, accuracy of naming was reduced for AWS. Stuttering rate was influenced by word frequency but not other factors. Results suggest that AWS could have a fundamental deficit in lexical retrieval, but this deficit is unlikely to be at the level of the word's abstract phonological representation. Implications for further research are discussed.
Overconfidence across the psychosis continuum: a calibration approach.
Balzan, Ryan P; Woodward, Todd S; Delfabbro, Paul; Moritz, Steffen
2016-11-01
An 'overconfidence in errors' bias has been consistently observed in people with schizophrenia relative to healthy controls; however, the bias is seldom found to be associated with delusional ideation. Using a more precise confidence-accuracy calibration measure of overconfidence, the present study aimed to explore whether the overconfidence bias is greater in people with higher delusional ideation. A sample of 25 participants with schizophrenia and 50 non-clinical controls (25 high- and 25 low-delusion-prone) completed 30 difficult trivia questions (accuracy <75%); 15 'half-scale' items required participants to indicate their level of confidence in their accuracy, and the remaining 'confidence-range' items asked participants to provide lower/upper bounds within which they were 80% confident the true answer lay. There was a trend towards higher overconfidence for half-scale items in the schizophrenia and high-delusion-prone groups, which reached statistical significance for confidence-range items. However, accuracy was particularly low in the two delusional groups, and a significant negative correlation between clinical delusional scores and overconfidence was observed for half-scale items within the schizophrenia group. Evidence in support of an association between overconfidence and delusional ideation was therefore mixed. Inflated confidence-accuracy miscalibration for the two delusional groups may be better explained by their greater unawareness of their underperformance, rather than representing genuinely inflated overconfidence in errors.
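Both calibration measures described here reduce to simple arithmetic; a sketch with illustrative data (not the study's items or responses):

```python
# Calibration-based overconfidence: mean stated confidence minus
# observed proportion correct ('half-scale' items), plus the hit rate
# of 80% confidence ranges versus the nominal 0.80.
import numpy as np

# Half-scale items: stated confidence (0-1) and correctness (0/1)
confidence = np.array([0.9, 0.8, 0.95, 0.7, 0.85, 0.9, 0.6, 0.8])
correct    = np.array([1,   0,   1,    0,   0,    1,   0,   1  ])
overconfidence = confidence.mean() - correct.mean()
print(f"Over/underconfidence: {overconfidence:+.2f}")  # >0 = overconfident

# Confidence-range items: does the 80% interval contain the true answer?
lower = np.array([1800, 10,  5_000])
upper = np.array([1900, 40, 20_000])
truth = np.array([1912, 37,  8_849])
hit_rate = np.mean((truth >= lower) & (truth <= upper))
print(f"Interval hit rate: {hit_rate:.2f} (nominal 0.80)")
```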
Erby, Lori A H; Roter, Debra L; Biesecker, Barbara B
2011-11-01
To explore the accuracy and consistency of standardized patient (SP) performance in the context of routine genetic counseling, focusing on elements beyond scripted case items, including general communication style and affective demeanor. One hundred seventy-seven genetic counselors were randomly assigned to counsel one of six SPs. Videotapes and transcripts of the sessions were analyzed to assess consistency of performance across four dimensions. Accuracy of script item presentation was high: 91% and 89% in the prenatal and cancer cases, respectively. However, there were statistically significant differences among SPs in the accuracy of presentation, general communication style, and some aspects of affective presentation. All SPs were rated as presenting with similarly high levels of realism. SP performance over time was generally consistent, with some small but statistically significant differences. These findings demonstrate that well-trained SPs can not only perform the factual elements of a case with high degrees of accuracy and realism, but can also maintain sufficient levels of uniformity in general communication style and affective demeanor over time to support their use in even the demanding context of genetic counseling. Results indicate a need for an additional focus in training on consistency between different SPs. Copyright © 2010. Published by Elsevier Ireland Ltd.
Lohmann, Philipp; Stoffels, Gabriele; Ceccon, Garry; Rapp, Marion; Sabel, Michael; Filss, Christian P; Kamp, Marcel A; Stegmayr, Carina; Neumaier, Bernd; Shah, Nadim J; Langen, Karl-Josef; Galldiks, Norbert
2017-07-01
We investigated the potential of textural feature analysis of O-(2-[¹⁸F]fluoroethyl)-L-tyrosine (¹⁸F-FET) PET to differentiate radiation injury from brain metastasis recurrence. Forty-seven patients with contrast-enhancing brain lesions (n = 54) on MRI after radiotherapy of brain metastases underwent dynamic ¹⁸F-FET PET. Tumour-to-brain ratios (TBRs) of ¹⁸F-FET uptake and 62 textural parameters were determined on summed images 20-40 min post-injection. Tracer uptake kinetics, i.e., time-to-peak (TTP) and patterns of time-activity curves (TAC), were evaluated on dynamic PET data from 0-50 min post-injection. The diagnostic accuracy of the investigated parameters, and of combinations thereof, to discriminate between brain metastasis recurrence and radiation injury was compared. Diagnostic accuracy increased from 81% for TBRmean alone to 85% when combined with the textural parameter Coarseness or Short-zone emphasis. The accuracy of TBRmax alone was 83% and increased to 85% after combination with the textural parameters Coarseness, Short-zone emphasis, or Correlation. Analysis of TACs resulted in an accuracy of 70% for kinetic pattern alone, which increased to 83% when combined with TBRmax. Textural feature analysis in combination with TBRs may have the potential to increase diagnostic accuracy for discrimination between brain metastasis recurrence and radiation injury, without the need for dynamic ¹⁸F-FET PET scans. • Textural feature analysis provides quantitative information about tumour heterogeneity. • Textural features help improve discrimination between brain metastasis recurrence and radiation injury. • Textural features might be helpful to further understand tumour heterogeneity. • The analysis does not require a more time-consuming dynamic PET acquisition.
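The tumour-to-brain ratios, and a combined TBR-plus-texture decision rule of the kind evaluated here, can be sketched as follows; the uptake values and cutoffs are synthetic and hypothetical:

```python
# TBRmean and TBRmax: lesion uptake relative to healthy background brain.
import numpy as np

rng = np.random.default_rng(1)
lesion_voxels = rng.normal(3.2, 0.5, 400)   # uptake in the lesion ROI
background    = rng.normal(1.1, 0.1, 4000)  # healthy reference region

tbr_mean = lesion_voxels.mean() / background.mean()
tbr_max  = lesion_voxels.max()  / background.mean()
print(f"TBRmean = {tbr_mean:.2f},  TBRmax = {tbr_max:.2f}")

# A combined rule of the kind evaluated in the study: flag recurrence
# when TBRmean exceeds a cutoff AND a textural parameter (a dummy
# 'coarseness' value here) falls below a second cutoff. Both cutoffs
# are hypothetical.
coarseness = 0.02
is_recurrence = tbr_mean > 1.95 and coarseness < 0.05
print("recurrence suspected" if is_recurrence else "radiation injury")
```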
Final Technical Report: Increasing Prediction Accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Bruce Hardison; Hansen, Clifford; Stein, Joshua
2015-12-01
PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.
The neutral mass spectrometer on Dynamics Explorer B
NASA Technical Reports Server (NTRS)
Carignan, G. R.; Block, B. P.; Maurer, J. C.; Hedin, A. E.; Reber, C. A.; Spencer, N. W.
1981-01-01
A neutral gas mass spectrometer has been developed to satisfy the measurement requirements of the Dynamics Explorer mission. The mass spectrometer, a quadrupole, will measure the abundances of neutral species in the region 300-500 km in the earth's atmosphere. These measurements will be used in concert with other simultaneous observations on Dynamics Explorer to study the physical processes involved in the interactions of the magnetosphere-ionosphere-atmosphere system. The instrument, which is similar to that flown on Atmosphere Explorer, employs an electron beam ion source operating in the closed mode and a discrete dynode multiplier as a detector. The mass range is 22 to 50 amu. The abundances of atomic oxygen, molecular nitrogen, helium, argon, and possibly atomic nitrogen will be measured to an accuracy of about ±15% over the specified altitude range, with a temporal resolution of one second.
Eye Gaze and Aging: Selective and Combined Effects of Working Memory and Inhibitory Control.
Crawford, Trevor J; Smith, Eleanor S; Berry, Donna M
2017-01-01
Eye-tracking is increasingly studied as a cognitive and biological marker for the early signs of neuropsychological and psychiatric disorders. However, in order to make further progress, a more comprehensive understanding of the age-related effects on eye-tracking is essential. The antisaccade task requires participants to make saccadic eye movements away from a prepotent stimulus. Speculation on the cause of the observed age-related differences in the antisaccade task largely centers around two sources of cognitive dysfunction: inhibitory control (IC) and working memory (WM). The IC account views cognitive slowing and task errors as a direct result of the decline of inhibitory cognitive mechanisms. An alternative theory considers that a deterioration of WM is the cause of these age-related effects on behavior. The current study assessed IC and WM processes underpinning saccadic eye movements in young and older participants. This was achieved with three experimental conditions that systematically varied the extent to which WM and IC were taxed in the antisaccade task: a memory-guided task was used to explore the effect of increasing the WM load; a Go/No-Go task was used to explore the effect of increasing the inhibitory load; a 'standard' antisaccade task retained the standard WM and inhibitory loads. Saccadic eye movements were also examined in a control condition: the standard prosaccade task where the load of WM and IC were minimal or absent. Saccade latencies, error rates and the spatial accuracy of saccades of older participants were compared to the same measures in healthy young controls across the conditions. The results revealed that aging is associated with changes in both IC and WM. Increasing the inhibitory load was associated with increased reaction times in the older group, while the increased WM load and the inhibitory load contributed to an increase in the antisaccade errors. These results reveal that aging is associated with changes in both IC and WM.
The effect of letter string length and report condition on letter recognition accuracy.
Raghunandan, Avesh; Karmazinaite, Berta; Rossow, Andrea S
Letter sequence recognition accuracy has been postulated to be limited primarily by low-level visual factors. The influence of high-level factors such as visual memory (load and decay) has been largely overlooked. This study provides insight into the role of these factors by investigating the interaction between letter sequence recognition accuracy, letter string length, and report condition. Letter sequence recognition accuracy for trigrams and pentagrams was measured in 10 adult subjects for two report conditions. In the complete report condition, subjects reported all 3 or all 5 letters comprising trigrams and pentagrams, respectively. In the partial report condition, subjects reported only a single letter in the trigram or pentagram. Letters were presented for 100 ms and rendered in high contrast, using black lowercase Courier font that subtended 0.4° at the fixation distance of 0.57 m. Letter sequence recognition accuracy was consistently higher for trigrams compared to pentagrams, especially for letter positions away from fixation. While partial report increased recognition accuracy in both string length conditions, the effect was larger for pentagrams and most evident for the final letter positions within trigrams and pentagrams. The effect of partial report on recognition accuracy for the final letter positions increased with eccentricity away from fixation, and was independent of the inner/outer position of a letter. Higher-level visual memory functions (memory load and decay) play a role in letter sequence recognition accuracy. There is also a suggestion of additional delays imposed on memory encoding by crowded letter elements. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
2014-11-01
VA Health Care: Improvements Needed in Monitoring Antidepressant Use for Major Depressive Disorder and in Increasing Accuracy of Suicide Data. Why GAO Did This Study: In 2013, VA estimated that about 1.5 million
Federico, Angela; Trentin, Michela; Zanette, Giampietro; Mapelli, Daniela; Picelli, Alessandro; Smania, Nicola; Tinazzi, Michele; Tamburin, Stefano
2017-08-01
Mild cognitive impairment (MCI) is common in patients with Parkinson's disease (PD) and should be recognized early because it represents a predictor of PD-related dementia and a worse disease course. Diagnostic criteria for PD-related MCI (PD-MCI) have recently been defined by a Movement Disorders Society (MDS) task force. The present study explored which neuropsychological tests perform best for a level II (i.e., comprehensive neuropsychological assessment) diagnosis of PD-MCI according to the MDS task force criteria in Italian-speaking PD patients. To this aim, we assessed a comprehensive 23-test neuropsychological battery, derived the best-performing 10-test battery (i.e., two tests for each of the five cognitive domains), and explored its accuracy for diagnosing PD-MCI in comparison to the full battery in a group of PD patients. A secondary aim was to explore the role of this battery for subtyping PD-MCI according to single-domain vs. multiple-domain involvement. The 10-test battery showed 73% sensitivity and 100% specificity for diagnosing PD-MCI, and 69% sensitivity and 100% specificity for PD-MCI subtyping. In patients older than 70 years, we derived a slightly different 10-test battery with 84% sensitivity and 100% specificity for PD-MCI diagnosis, and 86% sensitivity and 100% specificity for PD-MCI subtyping. These 10-test neuropsychological batteries might represent a good trade-off between diagnostic accuracy and time of application, and their role in PD-MCI diagnosis and subtyping should be further explored in future prospective studies.
Evaluating Behavioral Self-Monitoring with Accuracy Training for Changing Computer Work Postures
ERIC Educational Resources Information Center
Gravina, Nicole E.; Loewy, Shannon; Rice, Anna; Austin, John
2013-01-01
The primary purpose of this study was to replicate and extend a study by Gravina, Austin, Schroedter, and Loewy (2008). A similar self-monitoring procedure, with the addition of self-monitoring accuracy training, was implemented to increase the percentage of observations in which participants worked in neutral postures. The accuracy training…
Enhancing Frequency Recording by Developmental Disabilities Treatment Staff
ERIC Educational Resources Information Center
Mozingo, Dennis B.; Smith, Tristram; Riordan, Mary R.; Reiss, Maxin L.; Bailey, Jon S.
2006-01-01
We evaluated a staff training and management package for increasing accuracy of recording frequency of problem behavior in a residential care facility. A multiple baseline design across the first and second work shifts showed that 2 of 8 participants increased their accuracy following in-service training, and all 8 improved during a condition with…
Accuracy Feedback Improves Word Learning from Context: Evidence from a Meaning-Generation Task
ERIC Educational Resources Information Center
Frishkoff, Gwen A.; Collins-Thompson, Kevyn; Hodges, Leslie; Crossley, Scott
2016-01-01
The present study asked whether accuracy feedback on a meaning generation task would lead to improved contextual word learning (CWL). Active generation can facilitate learning by increasing task engagement and memory retrieval, which strengthens new word representations. However, forced generation results in increased errors, which can be…
Using Remotely Sensed Information for Near Real-Time Landslide Hazard Assessment
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia; Adler, Robert; Peters-Lidard, Christa
2013-01-01
The increasing availability of remotely sensed precipitation and surface products provides a unique opportunity to explore how landslide susceptibility and hazard assessment may be approached at larger spatial scales with higher resolution remote sensing products. A prototype global landslide hazard assessment framework has been developed to evaluate how landslide susceptibility and satellite-derived precipitation estimates can be used to identify potential landslide conditions in near-real time. Preliminary analysis of this algorithm suggests that forecasting errors are geographically variable due to the resolution and accuracy of the current susceptibility map and the application of satellite-based rainfall estimates. This research is currently working to improve the algorithm through considering higher spatial and temporal resolution landslide susceptibility information and testing different rainfall triggering thresholds, antecedent rainfall scenarios, and various surface products at regional and global scales.
Illustrated review of new imaging techniques in the diagnosis of abdominal wall hernias.
Toms, A P; Dixon, A K; Murphy, J M; Jamieson, N V
1999-10-01
The assessment of abdominal wall hernias has long been a clinical skill that only occasionally required the supplementary radiological assistance of herniography. However, with the advent of cross-sectional imaging, a new range of diagnostic tools is now available to help the clinician in difficult cases. This review explores the ability of computed tomography and magnetic resonance imaging to demonstrate many of the hernias encountered in the anterior abdominal wall. Also discussed is the role of imaging techniques in the management of a variety of hernias. Cross-sectional imaging techniques are being employed with increasing frequency for the assessment of hernias. Although the anatomical detail can usually be delineated clearly, the accuracy of the various methods and their place in the clinical management of hernias has yet to be fully determined.
COSMO-SkyMed Spotlight interferometry over rural areas: the Slumgullion landslide in Colorado, USA
Milillo, Pietro; Fielding, Eric J.; Schulz, William H.; Delbridge, Brent; Burgmann, Roland
2014-01-01
In the last 7 years, spaceborne synthetic aperture radar (SAR) data with resolutions better than a meter, acquired by satellites in spotlight mode, have offered an unprecedented improvement in SAR interferometry (InSAR). Most attention has been focused on monitoring urban areas and man-made infrastructure, exploiting the geometric accuracy, stability, and phase fidelity of the spotlight mode. In this paper, we explore the potential application of the COSMO-SkyMed® Spotlight mode to rural areas, where decorrelation is substantial and rapidly increases with time. We focus on the rapid repeat times, as short as one day, possible with the COSMO-SkyMed® constellation. We further present a qualitative analysis of spotlight interferometry over the Slumgullion landslide in southwest Colorado, which moves at rates of more than 1 cm/day.
NASA Astrophysics Data System (ADS)
Nadeem, S.; Mehmood, Rashid; Akbar, Noreen Sher
2015-03-01
This study explores the collective effects of partial slip and a transverse magnetic field on an oblique stagnation point flow of a rheological fluid. The governing momentum equations are formulated using the Casson fluid model. By applying suitable similarity transformations, the governing system of equations is transformed into coupled nonlinear ordinary differential equations. The resulting system is handled numerically through a midpoint integration scheme together with Richardson's extrapolation. It is found that both the normal and tangential velocity profiles decrease with an increase in the magnetic field as well as the slip parameter. Streamline patterns are presented to study the actual impact of the slip mechanism and magnetic field on the oblique flow. A comparison with the previous literature is also provided to confirm the accuracy of the present results for the limiting case.
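The numerical machinery named here, midpoint integration refined by Richardson extrapolation, can be sketched on a scalar test problem; the actual boundary-value system is not reproduced:

```python
# Midpoint (RK2) integration with one level of Richardson extrapolation,
# demonstrated on y' = -2y with exact solution exp(-2t).
import math

def midpoint_solve(f, y0, t_end, n):
    """March the midpoint rule over n steps from t=0 to t=t_end."""
    h, y, t = t_end / n, y0, 0.0
    for _ in range(n):
        k = f(t + h / 2, y + (h / 2) * f(t, y))  # slope at the midpoint
        y += h * k
        t += h
    return y

f = lambda t, y: -2.0 * y
coarse = midpoint_solve(f, 1.0, 1.0, 50)
fine   = midpoint_solve(f, 1.0, 1.0, 100)

# Midpoint is 2nd order, so halving h cuts the error ~4x; combining the
# two estimates cancels the leading error term:
richardson = (4 * fine - coarse) / 3
exact = math.exp(-2.0)
for name, val in [("coarse", coarse), ("fine", fine),
                  ("Richardson", richardson)]:
    print(f"{name:>10}: error = {abs(val - exact):.2e}")
```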
Cross-Coupled Control for All-Terrain Rovers
Reina, Giulio
2013-01-01
Mobile robots are increasingly being used in challenging outdoor environments for applications that include construction, mining, agriculture, military and planetary exploration. In order to accomplish the planned task, it is critical that the motion control system ensure accuracy and robustness. The achievement of high performance on rough terrain is tightly connected with the minimization of vehicle-terrain dynamics effects such as slipping and skidding. This paper presents a cross-coupled controller for a 4-wheel-drive/4-wheel-steer robot, which optimizes the wheel motors' control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. Experimental results, obtained with an all-terrain rover operating on agricultural terrain, are presented to validate the system. It is shown that the proposed approach is effective in reducing slippage and vehicle posture errors. PMID:23299625
Automated assessment of medical training evaluation text.
Zhang, Rui; Pakhomov, Serguei; Gladding, Sophia; Aylward, Michael; Borman-Shoap, Emily; Melton, Genevieve B
2012-01-01
Medical post-graduate residency training and medical student training increasingly utilize electronic systems to evaluate trainee performance based on defined training competencies with quantitative and qualitative data, the latter of which typically consists of text comments. Medical education is concomitantly becoming a growing area of clinical research. While electronic systems have proliferated in number, little work has been done to help manage and analyze qualitative data from these evaluations. We explored the use of text-mining techniques to assist medical education researchers in sentiment analysis and topic analysis of residency evaluations with a sample of 812 evaluation statements. While comments were predominantly positive, sentiment analysis improved the ability to discriminate statements with 93% accuracy. Similar to other domains, Latent Dirichlet Allocation and Information Gain revealed groups of core subjects and appear to be useful for identifying topics from this data.
Influence of outliers on accuracy estimation in genomic prediction in plant breeding.
Estaghvirou, Sidi Boubacar Ould; Ogutu, Joseph O; Piepho, Hans-Peter
2014-10-01
Outliers often pose problems in analyses of data in plant breeding, but their influence on the performance of methods for estimating predictive accuracy in genomic prediction studies has not yet been evaluated. Here, we evaluate the influence of outliers on the performance of methods for accuracy estimation in genomic prediction studies using simulation. We simulated 1000 datasets for each of 10 scenarios to evaluate the influence of outliers on the performance of seven methods for estimating accuracy. These scenarios are defined by the number of genotypes, marker effect variance, and magnitude of outliers. To mimic outliers, we added to one observation in each simulated dataset, in turn, 5-, 8-, and 10-times the error SD used to simulate small and large phenotypic datasets. The effect of outliers on accuracy estimation was evaluated by comparing deviations in the estimated and true accuracies for datasets with and without outliers. Outliers adversely influenced accuracy estimation, more so at small values of genetic variance or number of genotypes. A method for estimating heritability and predictive accuracy in plant breeding and another used to estimate accuracy in animal breeding were the most accurate and resistant to outliers across all scenarios and are therefore preferable for accuracy estimation in genomic prediction studies. The performances of the other five methods that use cross-validation were less consistent and varied widely across scenarios. The computing time for the methods increased as the size of outliers and sample size increased and the genetic variance decreased. Copyright © 2014 Ould Estaghvirou et al.
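The perturbation scheme, adding 5, 8, or 10 error SDs to a single observation, can be sketched with a deliberately simplified accuracy estimator (correlation of phenotype with true genetic value); the simulation parameters are illustrative:

```python
# How one injected outlier distorts a correlation-based accuracy estimate.
import numpy as np

rng = np.random.default_rng(42)
n, h2 = 100, 0.5                    # genotypes, heritability
g = rng.normal(0, np.sqrt(h2), n)   # true genetic values
e_sd = np.sqrt(1 - h2)
e = rng.normal(0, e_sd, n)          # fixed residuals across comparisons

for k in (0, 5, 8, 10):             # outlier size in error SDs
    y = g + e
    y[0] += k * e_sd                # inject the outlier
    acc = np.corrcoef(y, g)[0, 1]   # naive accuracy estimate
    print(f"outlier={k:>2} SD  estimated accuracy={acc:.3f}")
```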
Increased distance of shooting on basketball jump shot.
Okazaki, Victor Hugo Alves; Rodacki, André Luiz Félix
2012-01-01
The present study analyzed the effect of increased distance on basketball jump shot outcome and performance. Ten male expert basketball players were filmed and a number of kinematic variables analyzed during jump shots performed from three conditions representing close, intermediate, and far distances (2.8, 4.6, and 6.4 m, respectively). Shot accuracy decreased from 59% (close) to 37% (far) as a function of the task constraints (p < 0.05). Ball release height decreased (p < 0.05) from 2.46 m (close) to 2.38 m (intermediate) and to 2.33 m (far). Release angle also decreased (p < 0.05) when the shot was performed from close (78.92°) in comparison with the intermediate distance (65.60°). Meanwhile, ball release velocity increased (p < 0.05) from 4.39 m·s⁻¹ (close) to 5.75 m·s⁻¹ (intermediate) to 6.89 m·s⁻¹ (far). These changes in ball release height, angle, and velocity, related to adaptations of movement performance, were suggested as the main factors that influence jump shot accuracy when distance is augmented. Key points: • The increased distance leads to a greater spatial constraint over shot movement that demands an adaptation of the movement for the regulation of accuracy and the impulse generation needed to release the ball. • The reduction in ball release height and release angle, in addition to the increase in ball release velocity, were suggested as the main factors that decreased shot accuracy as distance increased. • Players should look for release angles that provide an optimal ball release velocity to improve accuracy.
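The velocity-distance trade-off follows from basic projectile motion. A sketch using the study's distances and release heights but an illustrative fixed 52° launch angle (the reported release angles are kinematic measurements of the shooting movement and need not equal the ball's flight-path angle):

```python
# Required release speed to reach the rim (3.05 m) at distance d:
# v^2 = g d^2 / (2 cos^2(theta) (d tan(theta) - dh)), with dh the
# height gained from release point to rim. Air drag is ignored.
import math

def release_speed(d, theta_deg, h_release, h_rim=3.05, g=9.81):
    theta = math.radians(theta_deg)
    dh = h_rim - h_release
    denom = 2 * math.cos(theta) ** 2 * (d * math.tan(theta) - dh)
    if denom <= 0:
        raise ValueError("trajectory cannot reach the rim at this angle")
    return math.sqrt(g * d ** 2 / denom)

# Release speed grows with distance at a fixed (illustrative) angle:
for d, h in [(2.8, 2.46), (4.6, 2.38), (6.4, 2.33)]:
    v = release_speed(d, theta_deg=52.0, h_release=h)
    print(f"d={d} m  ->  v={v:.2f} m/s")
```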
Adaptive Modeling of the International Space Station Electrical Power System
NASA Technical Reports Server (NTRS)
Thomas, Justin Ray
2007-01-01
Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.
Learning, retention, and generalization of haptic categories
NASA Astrophysics Data System (ADS)
Do, Phuong T.
This dissertation explored how haptic concepts are learned, retained, and generalized to the same or a different modality. Participants learned to classify objects into three categories either visually or haptically via different training procedures, followed by an immediate or delayed transfer test. Experiment I involved visual versus haptic learning and transfer. Intermodal matching between vision and haptics was investigated in Experiment II. Experiments III and IV examined intersensory conflict in within- and between-category bimodal situations to determine the degree of perceptual dominance between sight and touch. Experiment V explored the intramodal relationship between similarity and categorization in a psychological space, as revealed by MDS analysis of similarity judgments. Major findings were: (1) visual examination resulted in relatively higher performance accuracy than haptic learning; (2) systematic training produced better category learning of haptic concepts across all modality conditions; (3) the category prototypes were rated newer than any transfer stimulus following learning, both immediately and after a week's delay; and (4) although they converged at the apex of two transformational trajectories, the category prototypes became more central to their respective categories and increasingly structured as a function of learning. Implications for theories of multimodal similarity and categorization behavior are discussed in terms of discrimination learning, sensory integration, and dominance relations.
Sami, Sarmed S.; Ragunath, Krish; Iyer, Prasad G.
2014-01-01
As the incidence and mortality of esophageal adenocarcinoma continue to increase, strategies to counter this need to be explored. Screening for Barrett’s esophagus, which is the known precursor of a large majority of adenocarcinomas, has been debated without a firm consensus. Given the evidence for and against the perceived benefits of screening, and the multitude of challenges in the implementation of such a strategy and in the downstream management of subjects with Barrett’s esophagus who could be diagnosed by screening, support for screening has been modest. Recent advances, in the form of the development and initial accuracy of non-invasive tools for screening, risk assessment tools, and biomarker panels to risk-stratify subjects with BE, have spurred renewed interest in the early detection of Barrett’s esophagus and related neoplasia, particularly with the advent of effective endoscopic therapy. In this review, we explore in depth the potential rationale for screening for Barrett’s esophagus and recent advances that have the potential to make screening feasible, and we also highlight some of the challenges that will have to be overcome to develop an effective approach to improve the outcomes of subjects with esophageal adenocarcinoma. PMID:24887058
Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models
NASA Astrophysics Data System (ADS)
Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.
2017-12-01
Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream measurements.
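The core hierarchical mechanism, shrinking regional coefficients toward a global mean in proportion to their sampling noise, can be sketched outside of any MCMC machinery; the data and variance components below are synthetic and assumed known for the illustration:

```python
# Partial pooling in one line of algebra: each regional coefficient is a
# precision-weighted blend of its own estimate and the global mean.
import numpy as np

rng = np.random.default_rng(7)
n_regions = 18
true_coef = rng.normal(1.0, 0.3, n_regions)        # regional 'truths'
n_sites = rng.integers(5, 100, n_regions)          # calibration sites
obs_coef = true_coef + rng.normal(0, 1 / np.sqrt(n_sites))

tau2 = true_coef.var()           # between-region variance (assumed known)
sigma2 = 1.0 / n_sites           # per-region sampling variance
weight = tau2 / (tau2 + sigma2)  # shrinkage factor per region
pooled = weight * obs_coef + (1 - weight) * obs_coef.mean()

print("data-poor regions are pulled hardest toward the global mean:")
for i in np.argsort(n_sites)[:3]:
    print(f"  sites={n_sites[i]:>3}  raw={obs_coef[i]:.2f} "
          f"-> pooled={pooled[i]:.2f}")
```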
Forecasting biodiversity in breeding birds using best practices
Taylor, Shawn D.; White, Ethan P.
2018-01-01
Biodiversity forecasts are important for conservation, management, and evaluating how well current models characterize natural systems. While the number of forecasts for biodiversity is increasing, there is little information available on how well these forecasts work. Most biodiversity forecasts are not evaluated to determine how well they predict future diversity, fail to account for uncertainty, and do not use time-series data that capture the actual dynamics being studied. We addressed these limitations by using best practices to explore our ability to forecast the species richness of breeding birds in North America. We used hindcasting to evaluate six different modeling approaches for predicting richness. Hindcasts for each method were evaluated annually for a decade at 1,237 sites distributed throughout the continental United States. All models explained more than 50% of the variance in richness, but none of them consistently outperformed a baseline model that predicted constant richness at each site. The best practices implemented in this study directly influenced the forecasts and evaluations. Stacked species distribution models and “naive” forecasts produced poor estimates of uncertainty, and accounting for this resulted in these models dropping in relative performance compared to other models. Accounting for observer effects improved model performance overall, but also changed the rank ordering of models because it did not improve the accuracy of the “naive” model. Considering the forecast horizon revealed that prediction accuracy decreased across all models as the time horizon of the forecast increased. To facilitate the rapid improvement of biodiversity forecasts, we emphasize the value of specific best practices in making forecasts and evaluating forecasting methods. PMID:29441230
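As a rough illustration of the hindcast-evaluation logic, the sketch below scores a hypothetical model against a baseline that predicts each site's mean richness, broken out by forecast horizon; all arrays are simulated stand-ins, not the Breeding Bird Survey data or the study's six models.

```python
# Illustrative hindcast evaluation: compare a model's richness forecasts
# against a baseline predicting each site's historical mean richness.
# The data and error metric are hypothetical assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_years = 50, 10
observed = rng.poisson(60, (n_sites, n_years)).astype(float)

# Baseline: constant richness per site (its long-term mean)
baseline = observed.mean(axis=1, keepdims=True) * np.ones((1, n_years))
# Pretend model forecasts: observations plus noise
model_fc = observed + rng.normal(0, 5, observed.shape)

def mae_by_horizon(pred, obs):
    """Mean absolute error for each forecast horizon (years ahead)."""
    return np.abs(pred - obs).mean(axis=0)

print("baseline MAE by horizon:", np.round(mae_by_horizon(baseline, observed), 2))
print("model    MAE by horizon:", np.round(mae_by_horizon(model_fc, observed), 2))
```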
Bianco, Federica; Lecce, Serena; Banerjee, Robin
2016-09-01
Despite 30 years of productive research on theory of mind (ToM), we still know relatively little about variables that influence ToM development during middle childhood. Recent experimental studies have shown that conversations about the mind affect ToM abilities, but they have not explored the mechanisms underlying this developmental effect. In the current study, we examined two potential mechanisms through which conversations about mental states are likely to influence ToM: an increased frequency of references to mental states when explaining behavior and an increased accuracy of mental-state attributions. To this aim, we conducted a training study in which 101 children were assigned to either an intervention condition or a control condition. The conversation-based intervention was made up of four sessions scheduled over 2 weeks. Children completed a battery of assessments before and after the intervention as well as 2 months later. The groups were equivalent at Time 1 (T1) for age, family affluence, vocabulary, and executive functions. The ToM group showed an improvement in ToM skills (as evaluated on both the practiced tasks and a transfer task). Mediation analyses demonstrated that the accuracy of mental-state attributions, but not the mere frequency of mental-state references, mediated the positive effect of conversations about the mind on ToM development. Our results indicate that conversational experience can enhance mental-state reasoning not by simply drawing children's attention to mental states but rather by scaffolding a mature understanding of social situations. Copyright © 2015 Elsevier Inc. All rights reserved.
Predictors of Time-Based Prospective Memory in Children
ERIC Educational Resources Information Center
Mackinlay, Rachael J.; Kliegel, Matthias; Mantyla, Timo
2009-01-01
This study identified age differences in time-based prospective memory performance in school-aged children and explored possible cognitive correlates of age-related performance. A total of 56 7- to 12-year-olds performed a prospective memory task in which prospective memory accuracy, ongoing task performance, and time monitoring were assessed.…
Entrepreneurship Education Evaluation: Revisiting Storey to Hunt for the "Heffalump"
ERIC Educational Resources Information Center
Henry, Colette
2015-01-01
Purpose: The purpose of this paper is to consider entrepreneurship education (EE) evaluation. Specifically, it explores some of the challenges involved in applying the "HEInnovate" tool, and considers ways in which its accuracy and value might be strengthened. Using Storey (2000) by way of reflective critique, the paper proposes an…
ERIC Educational Resources Information Center
Grainger, Catherine; Williams, David M.; Lind, Sophie E.
2016-01-01
This study explored whether adults and adolescents with autism spectrum disorder (ASD) demonstrate difficulties making metacognitive judgments, specifically judgments of learning. Across two experiments, the study examined whether individuals with ASD could accurately judge whether they had learnt a piece of information (in this case word pairs).…
Design-Based Guidelines for the Semantic Perception of Emergency Signs
ERIC Educational Resources Information Center
Chang, Chin-Wei; Hsiao, Hung-Yi; Tang, Chieh-Hsin; Chuang, Ying-Ji; Lin, Ching-Yuan
2010-01-01
The current study applies semantic differential to explore the semantic perception of emergency signs, in an attempt to analyze the meanings of emergency signs in regard to the psychological exigencies of the general public. The results indicate that problems concerning recognition accuracy have been observed, but also that the evaluation of the…
ERIC Educational Resources Information Center
Nichols, Kim; Ranasinghe, Muditha; Hanan, Jim
2013-01-01
Interacting with and translating across multiple representations is an essential characteristic of expertise and representational fluency. In this study, we explored the effect of interacting with and translating between representations in a computer simulation or in a paper-based assignment on scientific accuracy of undergraduate science…
Teachers as Awakeners: A Collaborative Approach in Language Learning and Social Media
ERIC Educational Resources Information Center
Plutino, Alessia
2017-01-01
This paper provides an overview of the successful pedagogical project TwitTIAMO, now in its third year, where microblogging (Twitter) has been used in Italian language teaching and learning to improve students' communicative language skills, accuracy, fluency and pronunciation outside timetabled lessons. It also explores the background and…
ERIC Educational Resources Information Center
Klein, Harriet B.; Grigos, Maria I.; Byun, Tara McAllister; Davidson, Lisa
2012-01-01
This study examined inexperienced listeners' perceptions of children's naturally produced /r/ sounds with reference to levels of accuracy determined by consensus between two expert clinicians. Participants rated /r/ sounds as fully correct, distorted or incorrect/non-rhotic. Second and third formant heights were measured to explore the…
Exploring Proficiency-Based vs. Performance-Based Items with Elicited Imitation Assessment
ERIC Educational Resources Information Center
Cox, Troy L.; Bown, Jennifer; Burdis, Jacob
2015-01-01
This study investigates the effect of proficiency- vs. performance-based elicited imitation (EI) assessment. EI requires test-takers to repeat sentences in the target language. The accuracy with which test-takers repeat sentences correlates highly with their language proficiency. However, in EI, the factors that render an item…
The accuracy of direct and indirect resource use and emissions of products as quantified in life cycle models depends in part upon the geographical and technological representativeness of the production models. Production conditions vary not just between nations, but also within ...
Key Skills for Science Learning: The Importance of Text Cohesion and Reading Ability
ERIC Educational Resources Information Center
Hall, Sophie Susannah; Maltby, John; Filik, Ruth; Paterson, Kevin B.
2016-01-01
To explore the importance of text cohesion, we conducted two experiments. We measured online (reading times) and offline (comprehension accuracy) processing of high- and low-cohesion texts. In study one (n = 60), we manipulated referential cohesion using noun repetition (high cohesion) and synonymy (low cohesion). Students showed enhanced…
ERIC Educational Resources Information Center
Kroopnick, Marc Howard
2010-01-01
When Item Response Theory (IRT) is operationally applied for large scale assessments, unidimensionality is typically assumed. This assumption requires that the test measures a single latent trait. Furthermore, when tests are vertically scaled using IRT, the assumption of unidimensionality would require that the battery of tests across grades…
USDA-ARS?s Scientific Manuscript database
BACKGROUND: Next-generation sequencing projects commonly commence by aligning reads to a reference genome assembly. While improvements in alignment algorithms and computational hardware have greatly enhanced the efficiency and accuracy of alignments, a significant percentage of reads often remain u...
Studying Language Learning Opportunities Afforded by a Collaborative CALL Task
ERIC Educational Resources Information Center
Leahy, Christine
2016-01-01
This research study explores the learning potential of a computer-assisted language learning (CALL) activity. Research suggests that the dual emphasis on content development and language accuracy, as well as the complexity of L2 production in natural settings, can potentially create cognitive overload. This study poses the question whether, and…
Planning and Second Language Development in Task-Based Synchronous Computer-Mediated Communication
ERIC Educational Resources Information Center
Hsu, Hsiu-Chen
2012-01-01
This dissertation explored the effect of two planning conditions (the multiple planning condition with rehearsal and online planning time, and the single planning condition with online planning time only) on L2 production complexity and accuracy and the subsequent development of these two linguistic areas in the context of written synchronous…
ERIC Educational Resources Information Center
Ho, Jeannette; Crowley, Gwyneth H.
2003-01-01
Explored user perceptions of dependability and accuracy of Texas A&M library services through focus groups. Reports user difficulties in locating materials, inaccurate catalog and circulation records, inadequate signage, searching the online catalog, and late notification of interlibrary loan arrivals; and discusses the library's efforts to…
The Persistence of "Solid" and "Liquid" Naive Conceptions: A Reaction Time Study
ERIC Educational Resources Information Center
Babai, Reuven; Amsterdamer, Anat
2008-01-01
The study explores whether the naive concepts of "solid" and "liquid" persist in adolescence. Accuracy of responses and reaction times were measured while 41 ninth graders classified different solids (rigid, non-rigid, and powders) and different liquids (runny, dense) as solid or liquid. The results show that these naive conceptions affect…
ERIC Educational Resources Information Center
Schiff, Rachel
2012-01-01
The present study explored the speed, accuracy, and reading comprehension of vowelized versus unvowelized scripts among 126 native Hebrew speaking children in second, fourth, and sixth grades. Findings indicated that second graders read and comprehended vowelized scripts significantly more accurately and more quickly than unvowelized scripts,…
Speed Isn't Everything: A Study of Examination Marking
ERIC Educational Resources Information Center
Nadas, Rita; Suto, Irenka
2010-01-01
The question of whether marking speed is related to marking accuracy is important for training examiners and planning realistic marking schedules. We explored marking speed in the context of a past examination for an international biology qualification for 14- to 16-year-olds. Forty-two markers with differing backgrounds experimentally marked 23…
Priming Contour-Deleted Images: Evidence for Intermediate Representations in Visual Object Recognition.
ERIC Educational Resources Information Center
Biederman, Irving; Cooper, Eric E.
1991-01-01
Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…
Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji
2016-07-01
This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H
2005-07-01
Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
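The signal detection measures at issue here can be computed directly from lineup outcomes. The sketch below derives d′ and the criterion c from hypothetical hit and false-alarm counts using the standard log-linear correction; the counts are invented, and this is standard SDT arithmetic, not the authors' analysis code.

```python
# Hedged sketch: signal detection measures (d-prime and criterion c) from
# hit/false-alarm counts, with the common log-linear (+0.5) correction.
# Counts below are made up to show a conservative criterion shift.
from scipy.stats import norm

def sdt_measures(hits, misses, fas, crs):
    """Return (d_prime, criterion_c) from raw counts."""
    hr = (hits + 0.5) / (hits + misses + 1.0)    # corrected hit rate
    far = (fas + 0.5) / (fas + crs + 1.0)        # corrected false-alarm rate
    z_hr, z_far = norm.ppf(hr), norm.ppf(far)
    return z_hr - z_far, -0.5 * (z_hr + z_far)

# A criterion shift appears as a larger c at a similar d-prime:
print("simultaneous:", sdt_measures(hits=40, misses=10, fas=15, crs=35))
print("sequential:  ", sdt_measures(hits=30, misses=20, fas=5,  crs=45))
```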
Basic functional trade-offs in cognition: An integrative framework.
Del Giudice, Marco; Crespi, Bernard J
2018-06-14
Trade-offs between advantageous but conflicting properties (e.g., speed vs. accuracy) are ubiquitous in cognition, but the relevant literature is conceptually fragmented, scattered across disciplines, and has not been organized in a coherent framework. This paper takes an initial step toward a general theory of cognitive trade-offs by examining four key properties of goal-directed systems: performance, efficiency, robustness, and flexibility. These properties define a number of basic functional trade-offs that can be used to map the abstract "design space" of natural and artificial cognitive systems. Basic functional trade-offs provide a shared vocabulary to describe a variety of specific trade-offs including speed vs. accuracy, generalist vs. specialist, exploration vs. exploitation, and many others. By linking specific features of cognitive functioning to general properties such as robustness and efficiency, it becomes possible to harness some powerful insights from systems engineering and systems biology to suggest useful generalizations, point to under-explored but potentially important trade-offs, and prompt novel hypotheses and connections between disparate areas of research. Copyright © 2018 Elsevier B.V. All rights reserved.
Livingstone, Steven R.; Choi, Deanna H.; Russo, Frank A.
2014-01-01
Vocal training through singing and acting lessons is known to modify acoustic parameters of the voice. While the effects of singing training have been well documented, the role of acting experience on the singing voice remains unclear. In two experiments, we used linear mixed models to examine the relationships between the relative amounts of acting and singing experience on the acoustics and perception of the male singing voice. In Experiment 1, 12 male vocalists were recorded while singing with five different emotions, each with two intensities. Acoustic measures of pitch accuracy, jitter, and harmonics-to-noise ratio (HNR) were examined. Decreased pitch accuracy and increased jitter, indicative of a lower “voice quality,” were associated with more years of acting experience, while increased pitch accuracy was associated with more years of singing lessons. We hypothesized that the acoustic deviations exhibited by more experienced actors were an intentional technique to increase the genuineness or truthfulness of their emotional expressions. In Experiment 2, listeners rated vocalists’ emotional genuineness. Vocalists with more years of acting experience were rated as more genuine than vocalists with less acting experience. No relationship was observed for singing training. Increased genuineness was associated with decreased pitch accuracy, increased jitter, and a higher HNR. These effects may represent a shifting of priorities by male vocalists with acting experience to emphasize emotional genuineness over pitch accuracy or voice quality in their singing performances. PMID:24639659
Effects of cognitive training on change in accuracy in inductive reasoning ability.
Boron, Julie Blaskewicz; Turiano, Nicholas A; Willis, Sherry L; Schaie, K Warner
2007-05-01
We investigated cognitive training effects on accuracy and number of items attempted in inductive reasoning performance in a sample of 335 older participants (M = 72.78 years) from the Seattle Longitudinal Study. We assessed the impact of individual characteristics, including chronic disease. The reasoning training group showed significantly greater gain in accuracy and number of attempted items than did the comparison group; gain was primarily due to enhanced accuracy. Reasoning training effects involved a complex interaction of gender, prior cognitive status, and chronic disease. Women with prior decline on reasoning but no heart disease showed the greatest accuracy increase. In addition, stable reasoning-trained women with heart disease demonstrated significant accuracy gain. Comorbidity was associated with less change in accuracy. The results support the effectiveness of cognitive training on improving the accuracy of reasoning performance.
Optimizing Tsunami Forecast Model Accuracy
NASA Astrophysics Data System (ADS)
Whitmore, P.; Nyland, D. L.; Huang, P. Y.
2015-12-01
Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data in the models increases accuracy by approximately 15% for the events examined.
Haberland, A M; König von Borstel, U; Simianer, H; König, S
2012-09-01
Reliable selection criteria are required for young riding horses to increase genetic gain by increasing accuracy of selection and decreasing generation intervals. In this study, selection strategies incorporating genomic breeding values (GEBVs) were evaluated. Relevant stages of selection in sport horse breeding programs were analyzed by applying selection index theory. Results in terms of accuracies of indices (r(TI) ) and relative selection response indicated that information on single nucleotide polymorphism (SNP) genotypes considerably increases the accuracy of breeding values estimated for young horses without own or progeny performance. In a first scenario, the correlation between the breeding value estimated from the SNP genotype and the true breeding value (= accuracy of GEBV) was fixed to a relatively low value of r(mg) = 0.5. For a low heritability trait (h(2) = 0.15), and an index for a young horse based only on information from both parents, additional genomic information doubles r(TI) from 0.27 to 0.54. Including the conventional information source 'own performance' into the before mentioned index, additional SNP information increases r(TI) by 40%. Thus, particularly with regard to traits of low heritability, genomic information can provide a tool for well-founded selection decisions early in life. In a further approach, different sources of breeding values (e.g. GEBV and estimated breeding values (EBVs) from different countries) were combined into an overall index when altering accuracies of EBVs and correlations between traits. In summary, we showed that genomic selection strategies have the potential to contribute to a substantial reduction in generation intervals in horse breeding programs.
Accuracy increase of self-compensator
NASA Astrophysics Data System (ADS)
Zhambalova, S. Ts; Vinogradova, A. A.
2018-03-01
In this paper, the authors consider a self-compensation system and a method for increasing its accuracy without compromising the conditions imposed by the information theory of measuring devices. The result can be achieved using pulse control of the tracking system in the dead zone (the zone of the proportional section of the amplifier's characteristic). Pulse control allows one to increase the control power even though the input signal of the amplifier is infinitesimal; to do this, the authors use a conversion scheme for the input quantity. Alternative approaches have drawbacks: reducing the dead band makes the system unstable, correcting circuits complicate the system and reduce the amount of information received from the instrument, and dramatically reducing the feedback coefficient reduces the speed. Thanks to the proposed approach, the authors increase the accuracy of the self-compensation system without compromising the measurement condition. The implementation technique allows the power of the input signal to be increased by many orders of magnitude.
The effect of using genealogy-based haplotypes for genomic prediction
2013-01-01
Background Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. Methods A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. Results About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Conclusions Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy. PMID:23496971
The effect of using genealogy-based haplotypes for genomic prediction.
Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt
2013-03-06
Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.
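For readers unfamiliar with the GBLUP side of the comparison, the sketch below shows shrinkage (ridge-type) estimation of covariate effects and validation-set accuracy on simulated genotypes. It illustrates the shared-variance model only, with a made-up shrinkage parameter; it is not the authors' software and omits the Bayesian mixture method.

```python
# Minimal GBLUP-style sketch: ridge estimation of marker/haplotype covariate
# effects, then prediction for individuals without phenotypes.
# Genotypes, effects, and the shrinkage parameter are simulated assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, n_cov = 400, 100, 1000
X = rng.binomial(2, 0.3, (n_train + n_test, n_cov)).astype(float)
beta = rng.normal(0, 0.05, n_cov)                 # true covariate effects
y = X @ beta + rng.normal(0, 1.0, n_train + n_test)

Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
lam = 100.0  # shrinkage parameter, proportional to sigma_e^2 / sigma_b^2
beta_hat = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_cov),
                           Xtr.T @ (ytr - ytr.mean()))
pred = Xte @ beta_hat + ytr.mean()

# "Accuracy" in genomic prediction is usually the correlation between
# predicted and realized values in the validation set:
print("accuracy:", np.corrcoef(pred, yte)[0, 1].round(3))
```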
TDRSS Onboard Navigation System (TONS) experiment for the Explorer Platform (EP)
NASA Astrophysics Data System (ADS)
Gramling, C. J.; Hornstein, R. S.; Long, A. C.; Samii, M. V.; Elrod, B. D.
A TDRSS Onboard Navigation System (TONS) is currently being developed by NASA to provide a high-accuracy autonomous spacecraft navigation capability for users of TDRSS and its successor, the Advanced TDRSS. A TONS experiment will be performed in conjunction with the Explorer Platform (EP)/EUV Explorer mission to flight-qualify TONS Block I. This paper presents an overview of TDRSS on-board navigation goals and plans and the technical objectives of the TONS experiment. The operations concept of the experiment is described, including the characteristics of the ultrastable oscillator, the Doppler extractor, the signal-acquisition process, the TONS ground-support system, and the navigation flight software. A description of the on-board navigation algorithms and the rationale for their selection is also presented.
Zimmermann, N.E.; Edwards, T.C.; Moisen, Gretchen G.; Frescino, T.S.; Blackard, J.A.
2007-01-01
1. Compared to bioclimatic variables, remote sensing predictors are rarely used for predictive species modelling. When used, the predictors typically represent habitat classifications or filters rather than gradual spectral, surface or biophysical properties. Consequently, the full potential of remotely sensed predictors for modelling the spatial distribution of species remains unexplored. Here we analysed the partial contributions of remotely sensed and climatic predictor sets to explain and predict the distribution of 19 tree species in Utah. We also tested how these partial contributions were related to characteristics such as successional types or species traits. 2. We developed two spatial predictor sets of remotely sensed and topo-climatic variables to explain the distribution of tree species. We used variation partitioning techniques applied to generalized linear models to explore the combined and partial predictive powers of the two predictor sets. Non-parametric tests were used to explore the relationships between the partial model contributions of both predictor sets and species characteristics. 3. More than 60% of the variation explained by the models represented contributions by one of the two partial predictor sets alone, with topo-climatic variables outperforming the remotely sensed predictors. However, the partial models derived from only remotely sensed predictors still provided high model accuracies, indicating a significant correlation between climate and remote sensing variables. The overall accuracy of the models was high, but small sample sizes had a strong effect on cross-validated accuracies for rare species. 4. Models of early successional and broadleaf species benefited significantly more from adding remotely sensed predictors than did late seral and needleleaf species. The core-satellite species types differed significantly with respect to overall model accuracies. Models of satellite and urban species, both with low prevalence, benefited more from use of remotely sensed predictors than did the more frequent core species. 5. Synthesis and applications. If carefully prepared, remotely sensed variables are useful additional predictors for the spatial distribution of trees. Major improvements resulted for deciduous, early successional, satellite and rare species. The ability to improve model accuracy for species having markedly different life history strategies is a crucial step for assessing effects of global change. © 2007 The Authors.
ZIMMERMANN, N E; EDWARDS, T C; MOISEN, G G; FRESCINO, T S; BLACKARD, J A
2007-01-01
Compared to bioclimatic variables, remote sensing predictors are rarely used for predictive species modelling. When used, the predictors typically represent habitat classifications or filters rather than gradual spectral, surface or biophysical properties. Consequently, the full potential of remotely sensed predictors for modelling the spatial distribution of species remains unexplored. Here we analysed the partial contributions of remotely sensed and climatic predictor sets to explain and predict the distribution of 19 tree species in Utah. We also tested how these partial contributions were related to characteristics such as successional types or species traits. We developed two spatial predictor sets of remotely sensed and topo-climatic variables to explain the distribution of tree species. We used variation partitioning techniques applied to generalized linear models to explore the combined and partial predictive powers of the two predictor sets. Non-parametric tests were used to explore the relationships between the partial model contributions of both predictor sets and species characteristics. More than 60% of the variation explained by the models represented contributions by one of the two partial predictor sets alone, with topo-climatic variables outperforming the remotely sensed predictors. However, the partial models derived from only remotely sensed predictors still provided high model accuracies, indicating a significant correlation between climate and remote sensing variables. The overall accuracy of the models was high, but small sample sizes had a strong effect on cross-validated accuracies for rare species. Models of early successional and broadleaf species benefited significantly more from adding remotely sensed predictors than did late seral and needleleaf species. The core-satellite species types differed significantly with respect to overall model accuracies. Models of satellite and urban species, both with low prevalence, benefited more from use of remotely sensed predictors than did the more frequent core species. Synthesis and applications. If carefully prepared, remotely sensed variables are useful additional predictors for the spatial distribution of trees. Major improvements resulted for deciduous, early successional, satellite and rare species. The ability to improve model accuracy for species having markedly different life history strategies is a crucial step for assessing effects of global change. PMID:18642470
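The variation-partitioning logic can be made concrete with nested models. The sketch below computes the pure and shared fractions of explained variance for two simulated predictor sets; a Gaussian linear model stands in for the paper's GLMs, and all data and names are invented.

```python
# Sketch of variation partitioning between two predictor sets (e.g. remotely
# sensed vs. topo-climatic), using R-squared from nested linear models.
# Simulated, correlated predictor sets; not the study's data or GLM family.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 500
A = rng.normal(size=(n, 3))                      # e.g. remote sensing set
B = 0.5 * A[:, :1] + rng.normal(size=(n, 3))     # correlated climatic set
y = A @ [1.0, 0.5, 0.0] + B @ [0.8, 0.0, 0.3] + rng.normal(size=n)

def r2(X):
    """In-sample R-squared of an ordinary least squares fit."""
    return LinearRegression().fit(X, y).score(X, y)

r2_a, r2_b, r2_ab = r2(A), r2(B), r2(np.hstack([A, B]))
print("pure A:", round(r2_ab - r2_b, 3))            # A's unique fraction
print("pure B:", round(r2_ab - r2_a, 3))            # B's unique fraction
print("shared:", round(r2_a + r2_b - r2_ab, 3))     # jointly explained fraction
```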
Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City
2016-01-01
The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast. PMID:27855155
Overestimation of threat from neutral faces and voices in social anxiety.
Peschard, Virginie; Philippot, Pierre
2017-12-01
Social anxiety (SA) is associated with a tendency to interpret social information in a more threatening manner. Most of the research in SA has focused on unimodal exploration (mostly based on facial expressions), thus neglecting the ubiquity of cross-modality. To fill this gap, the present study sought to explore whether SA influences the interpretation of facial and vocal expressions presented separately or jointly. Twenty-five high socially anxious (HSA) and 29 low socially anxious (LSA) participants completed a two-alternative forced-choice emotion identification task consisting of angry and neutral expressions conveyed by faces, voices or combined faces and voices. Participants had to identify the emotion (angry or neutral) of the presented cues as quickly and precisely as possible. Our results showed that, compared to LSA, HSA individuals showed a higher propensity to misattribute anger to neutral expressions, independent of cue modality and despite preserved decoding accuracy. We also found a cross-modal facilitation effect at the level of accuracy (i.e., higher accuracy in the bimodal condition compared to unimodal ones); however, this effect was not moderated by SA. Although the HSA group showed clinical cut-off scores on the Liebowitz Social Anxiety Scale, one limitation is that we did not administer diagnostic interviews. Upcoming studies may want to test whether these results can be generalized to a clinical population. These findings highlight the usefulness of a cross-modal perspective to probe the specificity of biases in SA. Copyright © 2017 Elsevier Ltd. All rights reserved.
Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth
2010-01-01
Background A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
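The effect of an unmodeled background current is easy to demonstrate numerically. The sketch below performs a one-point calibration with and without subtracting an assumed background; all constants (sensitivity, the 4 nA background, the calibration glucose) are illustrative assumptions, not the device's specification or the authors' algorithm.

```python
# Hedged sketch of the background-current idea: convert sensor current to
# glucose with and without subtracting an assumed background current.
# All numbers below are hypothetical, chosen to show the size of the bias.

def calibrate(current_na, ref_glucose, background_na=0.0):
    """One-point calibration: sensitivity in nA per (mg/dL)."""
    return (current_na - background_na) / ref_glucose

def to_glucose(current_na, sensitivity, background_na=0.0):
    """Invert the current-to-glucose relation under an assumed background."""
    return (current_na - background_na) / sensitivity

true_glucose = 80.0
sensitivity_true = 0.25        # nA per mg/dL (assumed)
background_true = 4.0          # nA present even at zero glucose (assumed)
cal_current = background_true + sensitivity_true * 100.0   # calibrated at 100 mg/dL
meas_current = background_true + sensitivity_true * true_glucose

for bg in (0.0, 4.0):
    s = calibrate(cal_current, 100.0, bg)
    est = to_glucose(meas_current, s, bg)
    print(f"assumed background {bg:.0f} nA -> estimate {est:.1f} mg/dL")
```

With the background ignored, the estimate is biased away from the true 80 mg/dL; subtracting the correct background recovers it exactly, which is the intuition behind the reported accuracy gain.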
Regression Analysis of Optical Coherence Tomography Disc Variables for Glaucoma Diagnosis.
Richter, Grace M; Zhang, Xinbo; Tan, Ou; Francis, Brian A; Chopra, Vikas; Greenfield, David S; Varma, Rohit; Schuman, Joel S; Huang, David
2016-08-01
To report diagnostic accuracy of optical coherence tomography (OCT) disc variables using both time-domain (TD) and Fourier-domain (FD) OCT, and to improve the use of OCT disc variable measurements for glaucoma diagnosis through regression analyses that adjust for optic disc size and axial length-based magnification error. Observational, cross-sectional. In total, 180 normal eyes of 112 participants and 180 eyes of 138 participants with perimetric glaucoma from the Advanced Imaging for Glaucoma Study. Diagnostic variables evaluated from TD-OCT and FD-OCT were: disc area, rim area, rim volume, optic nerve head volume, vertical cup-to-disc ratio (CDR), and horizontal CDR. These were compared with overall retinal nerve fiber layer thickness and ganglion cell complex. Regression analyses were performed that corrected for optic disc size and axial length. Areas under the receiver operating characteristic curve (AUROC) were used to assess diagnostic accuracy before and after the adjustments. An index based on multiple logistic regression that combined optic disc variables with axial length was also explored with the aim of improving diagnostic accuracy of disc variables. Comparison of diagnostic accuracy of disc variables, as measured by AUROC. The unadjusted disc variables with the highest diagnostic accuracies were: rim volume for TD-OCT (AUROC=0.864) and vertical CDR (AUROC=0.874) for FD-OCT. Magnification correction significantly worsened diagnostic accuracy for rim variables, and while optic disc size adjustments partially restored diagnostic accuracy, the adjusted AUROCs were still lower. Axial length adjustments to disc variables in the form of multiple logistic regression indices led to a slight but insignificant improvement in diagnostic accuracy. Our various regression approaches were not able to significantly improve disc-based OCT glaucoma diagnosis. However, disc rim area and vertical CDR had very high diagnostic accuracy, and these disc variables can serve to complement additional OCT measurements for diagnosis of glaucoma.
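A multiple logistic regression index of the kind explored here can be sketched in a few lines. The example below combines hypothetical disc variables with axial length and scores the index by AUROC on simulated data; it mirrors the style of the analysis, not the study's dataset or exact model.

```python
# Sketch of a combined diagnostic index: logistic regression over disc
# variables plus axial length, evaluated by AUROC. Data are simulated and
# effect sizes are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 360
glaucoma = rng.integers(0, 2, n)                 # 0 = normal, 1 = glaucoma
axial_len = rng.normal(24, 1, n)                 # mm
vertical_cdr = (0.5 + 0.15 * glaucoma + 0.02 * (axial_len - 24)
                + rng.normal(0, 0.08, n))
rim_area = 1.4 - 0.35 * glaucoma + rng.normal(0, 0.15, n)  # mm^2

X = np.column_stack([vertical_cdr, rim_area, axial_len])
index = LogisticRegression().fit(X, glaucoma)
print("AUROC, combined index:  ",
      round(roc_auc_score(glaucoma, index.predict_proba(X)[:, 1]), 3))
print("AUROC, vertical CDR only:", round(roc_auc_score(glaucoma, vertical_cdr), 3))
```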
NASA Astrophysics Data System (ADS)
Millard, R. C.; Seaver, G.
1990-12-01
A 27-term index of refraction algorithm for pure and sea waters has been developed using four experimental data sets of differing accuracies. They cover the range 500-700 nm in wavelength, 0-30°C in temperature, 0-40 psu in salinity, and 0-11,000 db in pressure. The index of refraction algorithm has an accuracy that varies from 0.4 ppm for pure water at atmospheric pressure to 80 ppm at high pressures, but preserves the accuracy of each original data set. This algorithm is a significant improvement over existing descriptions as it is in analytical form with a better and more carefully defined accuracy. A salinometer algorithm with the same uncertainty has been created by numerically inverting the index algorithm using the Newton-Raphson method. The 27-term index algorithm was used to generate a pseudo-data set at the sodium D wavelength (589.26 nm) from which a 6-term densitometer algorithm was constructed. The densitometer algorithm also produces salinity as an intermediate step in the salinity inversion. The densitometer residuals have a standard deviation of 0.049 kg m -3 which is not accurate enough for most oceanographic applications. However, the densitometer algorithm was used to explore the sensitivity of density from this technique to temperature and pressure uncertainties. To achieve a deep ocean densitometer of 0.001 kg m -3 accuracy would require the index of refraction to have an accuracy of 0.3 ppm, the temperature an accuracy of 0.01°C and the pressure 1 db. Our assessment of the currently available index of refraction measurements finds that only the data for fresh water at atmospheric pressure produce an algorithm satisfactory for oceanographic use (density to 0.4 ppm). The data base for the algorithm at higher pressures and various salinities requires an order of magnitude or better improvement in index measurement accuracy before the resultant density accuracy will be comparable to the currently available oceanographic algorithm.
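The salinometer inversion described here is a standard root-finding exercise. The sketch below applies Newton-Raphson to a toy linearized index-of-refraction function; the constants are assumptions for demonstration, not the paper's 27-term algorithm, and temperature, pressure, and wavelength are held fixed.

```python
# Newton-Raphson inversion in the spirit of the salinometer algorithm: given
# a measured refractive index, solve n(S; T, p, lambda) = n_meas for salinity.
# TOY_N0 and TOY_DN_DS are assumed values, not the paper's coefficients.
TOY_N0, TOY_DN_DS = 1.3330, 1.85e-4   # assumed index at S=0 and slope per psu

def n_of_s(salinity):
    """Toy linearized index of refraction as a function of salinity."""
    return TOY_N0 + TOY_DN_DS * salinity

def invert_salinity(n_meas, s0=20.0, tol=1e-10, max_iter=50):
    """Solve n_of_s(s) = n_meas by Newton-Raphson with a numerical derivative."""
    s = s0
    for _ in range(max_iter):
        f = n_of_s(s) - n_meas
        ds = 1e-3
        dfds = (n_of_s(s + ds) - n_of_s(s - ds)) / (2 * ds)
        step = f / dfds
        s -= step
        if abs(step) < tol:
            break
    return s

print(invert_salinity(n_of_s(35.0)))   # recovers ~35 psu
```

The same scheme generalizes to the full multi-term algorithm: only the function n_of_s and its derivative change, while the iteration is identical.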
Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi
2016-01-01
Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
NASA Astrophysics Data System (ADS)
Kedar, S.; Bock, Y.; Webb, F.; Clayton, R. W.; Owen, S. E.; Moore, A. W.; Yu, E.; Dong, D.; Fang, P.; Jamason, P.; Squibb, M. B.; Crowell, B. W.
2010-12-01
In situ geodetic networks for observing crustal motion have proliferated over the last two decades and are now recognized as indispensable tools in geophysical research, alongside more traditional seismic networks. The 2007 National Research Council’s Decadal Survey recognizes that space-borne and in situ observations, such as Interferometric Synthetic Aperture Radar (InSAR) and ground-based continuous GPS (CGPS), are complementary in forecasting, in assessing, and in mitigating natural hazards. However, the information content and timeliness of in situ geodetic observations have not been fully exploited, particularly at higher frequencies than traditional daily CGPS position time series. Nor have scientists taken full advantage of the complementary natures of geodetic and seismic data, as well as those of space-based and in situ observations. To address these deficits we are developing real-time CGPS data products for earthquake early warning and for space-borne deformation measurement mission support. Our primary mission objective is in situ verification and validation for DESDynI, but our work is also applicable to other international missions (Sentinel 1a/1b, SAOCOM, ALOS 2). Our project is developing new capabilities to continuously observe and mitigate earthquake-related hazards (direct seismic damage, tsunamis, landslides, volcanoes) in near real-time with high spatial-temporal resolution, and to improve the planning and accuracy of space-borne observations. We are also using GPS estimates of tropospheric zenith delay combined with water vapor data from weather models to generate tropospheric calibration maps for mitigating the largest source of error, atmospheric artifacts, in InSAR interferograms. These functions will be fully integrated into a Geophysical Resource Web Services and interactive GPS Explorer data portal environment being developed as part of an ongoing MEaSUREs project and NASA’s contribution to the EarthScope project. GPS Explorer, originally designed for web-based dissemination of long-term Solid Earth Science Data Records (ESDRs) such as deformation time series, tectonic velocity vectors, and strain maps, provides the framework for seamless inclusion of the high-rate data products. Detection and preliminary modeling of interesting signals by dense real-time high-rate ground networks will allow mission planners and decision makers to fully exploit the less-frequent but higher-resolution InSAR observations. Fusion of in situ elements into an advanced observation system will significantly improve the scientific value of extensive surface displacement data, provide scientists with improved access to modern software tools to manipulate and model these data, increase the data’s accuracy and timeliness at higher frequencies than available from space-based observations, and increase the accuracy of space-based observations through calibration of atmospheric and other systematic errors.
Stereoscopic processing of crossed and uncrossed disparities in the human visual cortex.
Li, Yuan; Zhang, Chuncheng; Hou, Chunping; Yao, Li; Zhang, Jiacai; Long, Zhiying
2017-12-21
Binocular disparity provides a powerful cue for depth perception in a stereoscopic environment. Despite increasing knowledge of the cortical areas that process disparity from neuroimaging studies, the neural mechanism underlying disparity-sign processing [crossed disparity (CD)/uncrossed disparity (UD)] is still poorly understood. In the present study, functional magnetic resonance imaging (fMRI) was used to explore the neural features that are relevant to disparity-sign processing. We performed an fMRI experiment on 27 right-handed healthy human volunteers using both general linear model (GLM) and multi-voxel pattern analysis (MVPA) methods. First, the GLM was used to determine the cortical areas that displayed different responses to different disparity signs. Second, MVPA was used to determine how the cortical areas discriminate different disparity signs. The GLM analysis indicated that shapes with UD induced significantly stronger activity in the sub-region (LO) of the lateral occipital cortex (LOC) than those with CD. The results of region-of-interest-based MVPA indicated that areas V3d and V3A displayed higher accuracy in the discrimination of crossed and uncrossed disparities than the LOC. The results of searchlight-based MVPA indicated that the dorsal visual cortex showed significantly higher prediction accuracy than the ventral visual cortex, and that the sub-region LO of the LOC showed high accuracy in the discrimination of crossed and uncrossed disparities. These results suggest that the dorsal visual areas are more discriminative of disparity sign than the ventral visual areas, even though their mean responses are not sensitive to disparity sign. Moreover, the LO in the ventral visual cortex is relevant to the recognition of shapes with different disparity signs and discriminates between them.
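The ROI-based MVPA step can be illustrated generically: a linear classifier is cross-validated on voxel patterns labeled by disparity sign. In the sketch below the patterns are simulated, and the pipeline is a common default (linear SVM, 5-fold cross-validation) rather than the study's exact implementation.

```python
# Sketch of ROI-based MVPA: cross-validated classification of disparity sign
# (crossed vs. uncrossed) from voxel patterns. Data are simulated; the
# classifier choice and fold count are assumptions, not the authors' code.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n_trials, n_voxels = 120, 200
labels = np.repeat([0, 1], n_trials // 2)       # 0 = crossed, 1 = uncrossed
# Class 1 carries a consistent multivoxel pattern offset (the "signal"):
signal = 0.4 * labels[:, None] * rng.normal(size=(1, n_voxels))
patterns = signal + rng.normal(size=(n_trials, n_voxels))

acc = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=5)
print("decoding accuracy per fold:", np.round(acc, 2), "mean:", acc.mean().round(2))
```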
Machine learning approaches to diagnosis and laterality effects in semantic dementia discourse.
Garrard, Peter; Rentoumi, Vassiliki; Gesierich, Benno; Miller, Bruce; Gorno-Tempini, Maria Luisa
2014-06-01
Advances in automatic text classification have been necessitated by the rapid increase in the availability of digital documents. Machine learning (ML) algorithms can 'learn' from data: for instance a ML system can be trained on a set of features derived from written texts belonging to known categories, and learn to distinguish between them. Such a trained system can then be used to classify unseen texts. In this paper, we explore the potential of the technique to classify transcribed speech samples along clinical dimensions, using vocabulary data alone. We report the accuracy with which two related ML algorithms [naive Bayes Gaussian (NBG) and naive Bayes multinomial (NBM)] categorized picture descriptions produced by: 32 semantic dementia (SD) patients versus 10 healthy, age-matched controls; and SD patients with left- (n = 21) versus right-predominant (n = 11) patterns of temporal lobe atrophy. We used information gain (IG) to identify the vocabulary features that were most informative to each of these two distinctions. In the SD versus control classification task, both algorithms achieved accuracies of greater than 90%. In the right- versus left-temporal lobe predominant classification, NBM achieved a high level of accuracy (88%), but this was achieved by both NBM and NBG when the features used in the training set were restricted to those with high values of IG. The most informative features for the patient versus control task were low frequency content words, generic terms and components of metanarrative statements. For the right versus left task the number of informative lexical features was too small to support any specific inferences. An enriched feature set, including values derived from Quantitative Production Analysis (QPA) may shed further light on this little understood distinction. Copyright © 2013 Elsevier Ltd. All rights reserved.
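A minimal version of this pipeline, assuming a toy corpus: raw transcripts are converted to vocabulary counts and classified with multinomial naive Bayes under cross-validation. The texts and labels below are invented for illustration; they are not the study's transcripts, and the feature selection by information gain is omitted.

```python
# Minimal sketch of vocabulary-based text classification with multinomial
# naive Bayes, in the spirit of the NBM classifier described above.
# The tiny corpus and labels are hypothetical stand-ins.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "the boy is taking a cookie from the jar",      # control-like description
    "the woman is washing dishes at the sink",
    "that thing there he has the thing up there",   # generic, low-content terms
    "stuff is happening and he does something",
] * 10
labels = [0, 0, 1, 1] * 10   # 0 = control, 1 = patient (toy labels)

clf = make_pipeline(CountVectorizer(), MultinomialNB())
scores = cross_val_score(clf, texts, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```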
Utility and potential of rapid epidemic intelligence from internet-based sources.
Yan, S J; Chughtai, A A; Macintyre, C R
2017-10-01
Rapid epidemic detection is an important objective of surveillance to enable timely intervention, but traditional validated surveillance data may not be available in the required timeframe for acute epidemic control. Increasing volumes of data on the Internet have prompted interest in methods that could use unstructured sources to enhance traditional disease surveillance and gain rapid epidemic intelligence. We aimed to summarise Internet-based methods that use freely-accessible, unstructured data for epidemic surveillance and explore their timeliness and accuracy outcomes. Steps outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist were used to guide a systematic review of research related to the use of informal or unstructured data by Internet-based intelligence methods for surveillance. We identified 84 articles published between 2006-2016 relating to Internet-based public health surveillance methods. Studies used search queries, social media posts and approaches derived from existing Internet-based systems for early epidemic alerts and real-time monitoring. Most studies noted improved timeliness compared to official reporting, such as in the 2014 Ebola epidemic, where epidemic alerts were generated first from ProMED-mail. Internet-based methods showed variable correlation strength with official datasets, with some methods showing reasonable accuracy. The proliferation of publicly available information on the Internet provided a new avenue for epidemic intelligence. Methodologies have been developed to collect Internet data and some systems are already used to enhance the timeliness of traditional surveillance systems. To improve the utility of Internet-based systems, the key attributes of timeliness and data accuracy should be included in future evaluations of surveillance systems. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Quantitative phase imaging to improve the diagnostic accuracy of urine cytology.
Pham, Hoa V; Pantanowitz, Liron; Liu, Yang
2016-09-01
A definitive diagnosis of urothelial carcinoma in urine cytology is often challenging and subjective. Many urine cytology samples receive an indeterminate diagnosis. Ancillary techniques such as fluorescence in situ hybridization (FISH) have been used to improve the diagnostic sensitivity, but FISH is not approved as a routine screening test, and the complex fluorescent staining protocol also limits its widespread clinical use. Quantitative phase imaging (QPI) is an emerging technology allowing accurate measurements of the single-cell dry mass. This study was undertaken to explore the ability of QPI to improve the diagnostic accuracy of urine cytology for malignancy. QPI was performed on unstained, ThinPrep-prepared urine cytology slides from 28 patients with 4 categories of cytological diagnoses (negative, atypical, suspicious, and positive for malignancy). The nuclear/cell dry mass, the entropy, and the nucleus-to-cell mass ratio were calculated for several hundred cells for each patient, and they were then correlated with the follow-up diagnoses. The nuclear mass and nuclear mass entropy of urothelial cells showed significant differences between negative and positive groups. These data showed a progressive increase from patients with negative diagnosis, to patients with atypical/suspicious and positive cytologic diagnosis. Most importantly, among the patients in the atypical and suspicious diagnosis, the nuclear mass and its entropy were significantly higher for those patients with a follow-up diagnosis of malignancy than those patients without a subsequent follow-up diagnosis of malignancy. QPI shows potential for improving the diagnostic accuracy of urine cytology, especially for indeterminate cases, and should be further evaluated as an ancillary test for urine cytology. Cancer Cytopathol 2016;124:641-50. © 2016 American Cancer Society.
Hoover, Stephen; Jackson, Eric V.; Paul, David; Locke, Robert
2016-01-01
Background Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, using the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Objective Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. Methods We used five years of retrospective daily NICU census data for model development (January 2008 – December 2012, N=1827 observations) and one year of data for validation (January – December 2013, N=365 observations). Best-fitting models of ARIMA and linear regression were applied to various 7-day prediction periods and compared using error statistics. Results The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0), seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14, as well as a seasonal linear regression model. The proposed forecasting models resulted on average in a 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Conclusions Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, supports short- and long-term census forecasting, and informs staff resource planning. PMID:27437040
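To make the modeling approach concrete, the sketch below fits a seasonal ARIMA to a simulated daily census series and forecasts one week ahead, echoing one of the reported orders, ARIMA(1,0,0)x(1,1,2) with a 7-day season; the data and the baseline comparison are illustrative only, not the study's NICU series.

```python
# Sketch of census forecasting with a seasonal ARIMA, compared against a
# fixed-average baseline. Simulated daily series; orders echo the abstract.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(6)
days = pd.date_range("2008-01-01", periods=1827, freq="D")
weekly = 3 * np.sin(2 * np.pi * np.arange(len(days)) / 7)    # weekly cycle
census = 40 + 0.002 * np.arange(len(days)) + weekly + rng.normal(0, 2, len(days))
series = pd.Series(census, index=days)

model = SARIMAX(series, order=(1, 0, 0), seasonal_order=(1, 1, 2, 7))
fit = model.fit(disp=False)
print(fit.forecast(steps=7).round(1))                # next 7 days' census
print("baseline (fixed average):", round(series.mean(), 1))
```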
Marraccini, Marisa E; Weyandt, Lisa L; Rossi, Joseph S; Gudmundsdottir, Bergljot Gyda
2016-08-01
Increasing numbers of adults, particularly college students, are misusing prescription stimulants, primarily for cognitive/academic enhancement, so it is critical to explore whether empirical findings support neurocognitive benefits of prescription stimulants. Previous meta-analytic studies have supported small benefits of prescription stimulants for the cognitive domains of inhibitory control and memory; however, no meta-analytic studies have examined the effects on processing speed or the potential impairment of other domains of cognition, including planning, decision-making, and cognitive perseveration. Therefore, the present study conducted a meta-analysis of the available literature examining the effects of prescription stimulants on specific measures of processing speed, planning, decision-making, and cognitive perseveration in healthy adult populations. The results indicated a positive influence of prescription stimulant medication on processing speed accuracy, with an overall mean effect size of g = 0.282 (95% CI [0.077, 0.488]; n = 345). Neither improvements nor impairments were found for planning time, planning accuracy, advantageous decision-making, or cognitive perseveration; however, these findings are limited by the small number of studies examining these outcomes. The findings suggest that prescription stimulant medication may indeed act as a neurocognitive enhancer for accuracy measures of processing speed without impeding other areas of cognition. Considering that adults are already engaging in illegal use of prescription stimulants for academic enhancement, and that stimulant misuse can have serious side effects, the establishment of public policies informed by interdisciplinary research on this issue, whether restrictive or liberal, is of critical importance. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Initial development of high-accuracy CFRP panel for DATE5 antenna
NASA Astrophysics Data System (ADS)
Qian, Yuan; Lou, Zheng; Hao, Xufeng; Zhu, Jing; Cheng, Jingquan; Wang, Hairen; Zuo, Yingxi; Yang, Ji
2016-07-01
The DATE5 antenna, a 5 m telescope for terahertz exploration, will be sited at Dome A, Antarctica. High surface accuracy of the primary reflector panels is necessary to achieve high observing efficiency. In the antenna field, carbon fiber reinforced composite (CFRP) sandwich panels are widely used because they are light in weight, high in strength, low in thermal expansion, and cheap to fabricate in quantity. In the DATE5 project, CFRP panels are important panel candidates. In the design study phase, a 1-meter CFRP prototype panel was developed for verification purposes. This paper introduces the material arrangement of the sandwich panel, the measured performance of test samples of the sandwich structure, and the panel forming process. For anti-icing in the South Pole region, a special CFRP heating film is embedded in the front skin of the sandwich panel. The properties of several types of basic building materials were tested; based on the results, the deformations of prototype panels with different sandwich structures and skin layers were simulated and the best structural concept was selected. The panel mold used is of high accuracy, with a surface rms error of 1.4 μm, and prototype panels are replicated from it. A room-temperature-curing resin is used to reduce thermal deformation during the resin transfer process, and vacuum negative pressure is applied during curing to increase the volume content of carbon fiber. Measurement with a coordinate measuring machine (CMM) shows that the initial prototype CFRP panel achieves a surface rms error of 5.1 μm.
Capan, Muge; Hoover, Stephen; Jackson, Eric V; Paul, David; Locke, Robert
2016-01-01
Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to the limited ability to control the census and clinical trajectories. The fixed average census approach, using the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate the accuracy of the models compared with the fixed average census approach. We used five years of retrospective daily NICU census data for model development (January 2008 - December 2012, N=1827 observations) and one year of data for validation (January - December 2013, N=365 observations). Best-fitting ARIMA and linear regression models were applied to various 7-day prediction periods and compared using error statistics. The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0); seasonal ARIMA models, ARIMA(1,0,0)×(1,1,2)₇ and ARIMA(2,1,4)×(1,1,2)₁₄; as well as a seasonal linear regression model. The proposed forecasting models resulted on average in a 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, supports short- and long-term census forecasting, and informs staff resource planning.
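As an illustration of the modeling approach, the sketch below fits one of the reported best-fitting specifications, ARIMA(1,0,0)×(1,1,2)₇, with statsmodels and compares a 7-day forecast against the fixed-average baseline. The census series is synthetic, since the study's data are not public; only the model orders come from the abstract.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic stand-in for the daily NICU census (trend + weekly cycle + noise)
rng = np.random.default_rng(0)
days = pd.date_range("2008-01-01", "2013-12-31", freq="D")
t = np.arange(len(days))
census = 40 + 0.002 * t + 2.0 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1.5, len(days))
y = pd.Series(census, index=days)

train = y.loc[:"2012-12-31"]            # 2008-2012 for development
test = y.loc["2013-01-01":]             # 2013 for validation

# One of the best-fitting specifications: ARIMA(1,0,0)x(1,1,2) with period 7
res = SARIMAX(train, order=(1, 0, 0), seasonal_order=(1, 1, 2, 7)).fit(disp=False)
forecast = res.forecast(steps=7)

# Fixed-average baseline: mean census of the previous year
baseline = np.full(7, train.loc["2012"].mean())

actual = test.iloc[:7].to_numpy()
mape = lambda a, f: 100 * np.mean(np.abs((a - f) / a))
print(f"SARIMA MAPE:   {mape(actual, forecast.to_numpy()):.2f}%")
print(f"baseline MAPE: {mape(actual, baseline):.2f}%")

In practice the model would be re-fitted and rolled forward over many 7-day windows, as the study does, rather than scored on a single week.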
High accuracy autonomous navigation using the global positioning system (GPS)
NASA Technical Reports Server (NTRS)
Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul
1997-01-01
The application of global positioning system (GPS) technology to improving the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the Small Satellite Technology Initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. The position accuracy is expected to improve to 2 m if corrections are provided by the GPS wide area augmentation system.
The High Energy Transient Explorer (HETE): Mission and Science Overview
NASA Astrophysics Data System (ADS)
Ricker, G. R.; Atteia, J.-L.; Crew, G. B.; Doty, J. P.; Fenimore, E. E.; Galassi, M.; Graziani, C.; Hurley, K.; Jernigan, J. G.; Kawai, N.; Lamb, D. Q.; Matsuoka, M.; Pizzichini, G.; Shirasaki, Y.; Tamagawa, T.; Vanderspek, R.; Vedrenne, G.; Villasenor, J.; Woosley, S. E.; Yoshida, A.
2003-04-01
The High Energy Transient Explorer (HETE) mission is devoted to the study of gamma-ray bursts (GRBs) using soft X-ray, medium X-ray, and gamma-ray instruments mounted on a compact spacecraft. The HETE satellite was launched into equatorial orbit on 9 October 2000. A science team from France, Japan, Brazil, India, Italy, and the US is responsible for the HETE mission, which was completed for ~1/3 the cost of a NASA Small Explorer (SMEX). The HETE mission is unique in that it is entirely ``self-contained,'' insofar as it relies upon dedicated tracking, data acquisition, mission operations, and data analysis facilities run by members of its international Science Team. A powerful feature of HETE is its potential for localizing GRBs within seconds of the trigger with good precision (~10') using medium-energy X-rays and, for a subset of bright GRBs, improving the localization to ~30'' accuracy using low-energy X-rays. Real-time GRB localizations are transmitted to ground observers within seconds via a dedicated network of 14 automated ``Burst Alert Stations,'' thereby allowing prompt optical, IR, and radio follow-up and leading to the identification of counterparts for a large fraction of HETE-localized GRBs. During the next two years, HETE will be the only satellite that can provide near-real-time localizations of GRBs and that can localize GRBs that do not have X-ray, optical, and radio afterglows. These capabilities are the key to allowing HETE to probe further the unique physics that produces the brightest known photon sources in the universe. To date (December 2002), HETE has produced 31 GRB localizations. Localization accuracies are routinely in the 4'-20' range; for the five GRBs with SXC localizations, accuracies are ~1'-2'. In addition, HETE has detected ~25 bursts from soft gamma repeaters (SGRs) and >600 X-ray bursts (XRBs).
Texture- and deformability-based surface recognition by tactile image analysis.
Khasnobish, Anwesha; Pal, Monalisa; Tibarewala, D N; Konar, Amit; Pal, Kunal
2016-08-01
Deformability and texture are two unique object characteristics which are essential for appropriate surface recognition by tactile exploration. Tactile sensation needs to be incorporated in artificial arms for rehabilitative and other human-computer interface applications to achieve efficient and human-like manoeuvring, and surface recognition by tactile data analysis is one of the prerequisites. The aim of this work is to develop an effective technique for identifying various surfaces based on deformability and texture by analysing tactile images obtained during dynamic exploration of an item by artificial arms whose gripper is fitted with tactile sensors. Tactile data were acquired while human beings, as well as a robot hand fitted with tactile sensors, explored the objects. The tactile images were pre-processed, and relevant features were extracted from them. These features were provided as input to variants of the support vector machine (SVM), linear discriminant analysis, and the k-nearest neighbour (kNN) classifier. Based on deformability, six household surfaces were recognized from their corresponding tactile images; based on texture, five surfaces of daily use were classified. The same approach was also applied to deformability- and texture-based recognition of four biomembranes, i.e. membranes prepared from biomaterials that can be used for applications such as drug delivery and implants. Linear SVM performed best for recognizing surface deformability, with an accuracy of 83% in 82.60 ms, whereas the kNN classifier recognized differently textured surfaces of daily use with an accuracy of 89% in 54.25 ms, and SVM with a radial basis function kernel recognized biomembranes with an accuracy of 78% in 53.35 ms. The classifiers generalized well on unseen test datasets, achieving efficient material recognition based on deformability and texture.
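A minimal sketch of the classification stage is given below. Simple hand-rolled intensity and gradient statistics stand in for the paper's tactile-image features, and synthetic images stand in for real sensor data; only the classifier line-up (linear SVM, RBF SVM, kNN) mirrors the study.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def tactile_features(img):
    # A few intensity/gradient statistics as stand-ins for the real features
    gx, gy = np.gradient(img.astype(float))
    return [img.mean(), img.std(),
            np.abs(gx).mean(), np.abs(gy).mean(),
            (img > img.mean()).mean()]  # fraction of high-pressure pixels

# Synthetic tactile images for five hypothetical surface classes
rng = np.random.default_rng(42)
X, labels = [], []
for cls in range(5):
    for _ in range(40):
        img = rng.normal(loc=cls, scale=1 + 0.3 * cls, size=(16, 16))
        X.append(tactile_features(img))
        labels.append(cls)
X, labels = np.array(X), np.array(labels)

Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
for clf in (SVC(kernel="linear"), SVC(kernel="rbf"), KNeighborsClassifier(5)):
    acc = clf.fit(Xtr, ytr).score(Xte, yte)
    print(type(clf).__name__, clf.get_params().get("kernel", ""), f"{acc:.2f}")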
Simmons, Joseph P; LeBoeuf, Robyn A; Nelson, Leif D
2010-12-01
Increasing accuracy motivation (e.g., by providing monetary incentives for accuracy) often fails to increase adjustment away from provided anchors, a result that has led researchers to conclude that people do not effortfully adjust away from such anchors. We challenge this conclusion. First, we show that people are typically uncertain about which way to adjust from provided anchors and that this uncertainty often causes people to believe that they have initially adjusted too far away from such anchors (Studies 1a and 1b). Then, we show that although accuracy motivation fails to increase the gap between anchors and final estimates when people are uncertain about the direction of adjustment, accuracy motivation does increase anchor-estimate gaps when people are certain about the direction of adjustment, and that this is true regardless of whether the anchors are provided or self-generated (Studies 2, 3a, 3b, and 5). These results suggest that people do effortfully adjust away from provided anchors but that uncertainty about the direction of adjustment makes that adjustment harder to detect than previously assumed. This conclusion has important theoretical implications, suggesting that currently emphasized distinctions between anchor types (self-generated vs. provided) are not fundamental and that ostensibly competing theories of anchoring (selective accessibility and anchoring-and-adjustment) are complementary. PsycINFO Database Record (c) 2010 APA, all rights reserved.
A universal deep learning approach for modeling the flow of patients under different severities.
Jiang, Shancheng; Chin, Kwai-Sang; Tsui, Kwok L
2018-02-01
The Accident and Emergency Department (A&ED) is the frontline for providing emergency care in hospitals. Unfortunately, A&ED resources have failed to keep up with continuously increasing demand in recent years, leading to overcrowding. Knowing the fluctuation of patient arrival volume in advance is a significant premise for relieving this pressure. Based on this motivation, the objective of this study is to explore an integrated framework with high accuracy for predicting A&ED patient flow under different triage levels, combining a novel feature selection process with deep neural networks. Administrative data were collected from an actual A&ED and categorized into five groups based on triage level. A genetic algorithm (GA)-based feature selection algorithm is improved and implemented as a pre-processing step for this time-series prediction problem, in order to explore the key features affecting patient flow. In the improved GA, a fitness-based crossover is proposed to maintain the joint information of multiple features during the iterative process, instead of the traditional point-based crossover. A deep neural network (DNN) is employed as the prediction model to exploit its universal adaptability and high flexibility. In the model-training process, the learning algorithm is configured around a parallel stochastic gradient descent algorithm, and two effective regularization strategies are integrated in one DNN framework to avoid overfitting. All introduced hyper-parameters are optimized efficiently by grid search in one pass. For feature selection, the improved GA-based algorithm outperformed a typical GA and four state-of-the-art feature selection algorithms (mRMR, SAFS, VIFR, and CFR). As for prediction accuracy, compared with frequently used statistical models (GLM, seasonal ARIMA, ARIMAX, and ANN) and modern machine learning models (SVM-RBF, SVM-linear, RF, and R-LASSO), the proposed integrated "DNN-I-GA" framework achieves higher accuracy on both MAPE and RMSE metrics in pairwise comparisons. The contribution of this study is two-fold. Theoretically, the traditional GA-based feature selection process is improved to have fewer hyper-parameters and higher efficiency, with the joint information of multiple features maintained by the fitness-based crossover operator, and the universal property of the DNN is further enhanced by merging different regularization strategies. Practically, features selected by the improved GA can be used to uncover the underlying relationship between patient flows and input features. Predicted values are significant indicators of patient demand and can be used by A&ED managers to plan and allocate resources. The high accuracy achieved by the present framework in different cases enhances the reliability of downstream decision making. Copyright © 2017 Elsevier B.V. All rights reserved.
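The sketch below illustrates one plausible reading of the fitness-based crossover idea: each gene of a child feature mask is inherited from a parent chosen with probability proportional to that parent's fitness, so feature combinations carried by fitter parents tend to survive intact. The fitness function, data, and parameter settings are illustrative stand-ins, not the paper's.

import numpy as np

rng = np.random.default_rng(1)

def fitness(mask, X, y):
    # Stand-in fitness: correlation of the selected features' mean with the
    # target; the paper would use a model's validation performance instead
    if mask.sum() == 0:
        return 0.0
    sel = X[:, mask.astype(bool)]
    return abs(np.corrcoef(sel.mean(axis=1), y)[0, 1])

def fitness_based_crossover(p1, f1, p2, f2):
    # Inherit each gene from a parent chosen with probability proportional
    # to that parent's fitness (one plausible form of the operator)
    w = f1 / (f1 + f2 + 1e-12)
    return np.where(rng.random(p1.size) < w, p1, p2)

# Tiny demo: 20 candidate features, 3 of them informative
X = rng.normal(size=(200, 20))
y = X[:, 0] + 0.5 * X[:, 3] - 0.8 * X[:, 7] + rng.normal(0, 0.5, 200)

pop = rng.integers(0, 2, size=(30, 20))
for _ in range(40):
    fits = np.array([fitness(m, X, y) for m in pop])
    order = np.argsort(fits)[::-1]
    pop, fits = pop[order], fits[order]
    kids = np.array([fitness_based_crossover(pop[i], fits[i], pop[i + 1], fits[i + 1])
                     for i in range(0, 20, 2)])
    flip = rng.random(kids.shape) < 0.02        # light bit-flip mutation
    kids = np.where(flip, 1 - kids, kids)
    pop = np.vstack([pop[:20], kids])           # elitism plus offspring

best = pop[np.argmax([fitness(m, X, y) for m in pop])]
print("selected features:", np.flatnonzero(best))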
There's a Bug in Your Ear!: Using Technology to Increase the Accuracy of DTT Implementation
ERIC Educational Resources Information Center
McKinney, Tracy; Vasquez, Eleazar, III.
2014-01-01
Many professionals have successfully implemented discrete trial teaching in the past. However, there have not been extensive studies examining the accuracy of discrete trial teaching implementation. This study investigated the effect of Bug-in-Ear feedback on the accuracy of discrete trial teaching implementation by two pre-service teachers…
The Influence of Delaying Judgments of Learning on Metacognitive Accuracy: A Meta-Analytic Review
ERIC Educational Resources Information Center
Rhodes, Matthew G.; Tauber, Sarah K.
2011-01-01
Many studies have examined the accuracy of predictions of future memory performance solicited through judgments of learning (JOLs). Among the most robust findings in this literature is that delaying predictions serves to substantially increase the relative accuracy of JOLs compared with soliciting JOLs immediately after study, a finding termed the…
NASA Astrophysics Data System (ADS)
Kireev, S. V.; Simanovsky, I. G.; Shnyrev, S. L.
2010-12-01
The study is aimed at increasing the accuracy of the optical method for detecting iodine-containing substances in technological liquids resulting from the processing of spent nuclear fuel. It is demonstrated that the accuracy can be increased by performing measurements at various combinations of wavelengths, chosen according to the concentrations of impurities that are contained in the sample under study and absorb in the spectral range used for the detection of the iodine-containing substances.
Increasing Speed of Processing With Action Video Games
Dye, Matthew W.G.; Green, C. Shawn; Bavelier, Daphne
2010-01-01
In many everyday situations, speed is of the essence. However, fast decisions typically mean more mistakes. To this day, it remains unknown whether reaction times can be reduced with appropriate training, within one individual, across a range of tasks, and without compromising accuracy. Here we review evidence that the very act of playing action video games significantly reduces reaction times without sacrificing accuracy. Critically, this increase in speed is observed across various tasks beyond game situations. Video gaming may therefore provide an efficient training regimen to induce a general speeding of perceptual reaction times without decreases in accuracy of performance. PMID:20485453
Valuation of exotic options in the framework of Levy processes
NASA Astrophysics Data System (ADS)
Milev, Mariyan; Georgieva, Svetla; Markovska, Veneta
2013-12-01
In this paper we explore a straightforward procedure for pricing derivatives with the Monte Carlo approach when the underlying process is a jump-diffusion. We compare the Black-Scholes model with one of its extensions, the Merton model. The latter is better at capturing market phenomena and is comparable to stochastic volatility models in terms of pricing accuracy. We present simulations of asset paths and the pricing of barrier options for both geometric Brownian motion and exponential Levy processes, of which the Merton model is a concrete case. A desired level of accuracy is obtained with simple computations in MATLAB and efficient computational times.
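As a sketch of the procedure, the following Monte Carlo routine prices a down-and-out barrier call under the Merton jump-diffusion, with the barrier monitored on the simulation grid. It is written in Python rather than MATLAB, and all market and jump parameters are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(7)

# Illustrative contract and Merton jump-diffusion parameters
S0, K, B = 100.0, 100.0, 85.0        # spot, strike, down-and-out barrier
r, sigma, T = 0.05, 0.2, 1.0
lam, muJ, sigJ = 0.5, -0.1, 0.15     # jump intensity and log-jump moments

n_paths, n_steps = 100_000, 252
dt = T / n_steps
kappa = np.exp(muJ + 0.5 * sigJ**2) - 1           # E[e^J - 1]
drift = (r - 0.5 * sigma**2 - lam * kappa) * dt   # risk-neutral log-drift

logS = np.full(n_paths, np.log(S0))
alive = np.ones(n_paths, dtype=bool)              # not yet knocked out
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    nJ = rng.poisson(lam * dt, n_paths)           # jumps in this step
    J = rng.normal(muJ * nJ, np.sqrt(sigJ**2 * nJ))
    logS += drift + sigma * dW + J
    alive &= np.exp(logS) > B                     # grid-monitored barrier

payoff = np.where(alive, np.maximum(np.exp(logS) - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"down-and-out call: {price:.4f} +/- {1.96 * stderr:.4f}")

Setting lam = 0 recovers geometric Brownian motion, so the same routine reproduces the Black-Scholes case for comparison.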
NASA Astrophysics Data System (ADS)
Susanti, Yuliana; Zukhronah, Etik; Pratiwi, Hasih; Respatiwulan; Sri Sulistijowati, H.
2017-11-01
To achieve food resilience in Indonesia, food diversification that explores the potential of local foods is required. Corn is one of the alternative staple foods of Javanese society; for that reason, corn production needs to be improved by considering the influencing factors. CHAID and CRT are data mining methods that can be used to classify the influencing variables. The present study seeks to extract information on the potential local availability of corn in the regencies and cities of Java Island. CHAID analysis yields four classes with an accuracy of 78.8%, while CRT analysis yields seven classes with an accuracy of 79.6%.
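Since CRT is essentially the CART algorithm, the classification step can be sketched with scikit-learn's DecisionTreeClassifier as below; the features and target are synthetic stand-ins for the regency-level data, which are not reproduced here.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: a few availability-related predictors per regency/city
# (e.g. harvested area, rainfall, fertilizer use) and a binary target class
rng = np.random.default_rng(3)
X = rng.normal(size=(119, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 119) > 0).astype(int)

# CRT corresponds to CART, which DecisionTreeClassifier implements
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10, random_state=0)
print(f"cross-validated accuracy: {cross_val_score(tree, X, y, cv=5).mean():.3f}")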
A First Look at the Navigation Design and Analysis for the Orion Exploration Mission 2
NASA Technical Reports Server (NTRS)
D'Souza, Chris D.; Zenetti, Renato
2017-01-01
This paper will detail the navigation and dispersion design and analysis of the first crewed Orion mission. The optical navigation measurement model will be described. The vehicle noise includes the residual accelerations from attitude deadbanding, attitude maneuvers, CO2 venting, wastewater venting, ammonia sublimator venting, and solar radiation pressure. The maneuver execution errors account for the contribution of accelerometer scale factor to the accuracy of maneuver execution. Linear covariance techniques are used to obtain the navigation errors and trajectory dispersions, as well as the ΔV performance. Particular attention will be paid to the accuracy of the delivery at Earth Entry Interface and at the lunar flyby.
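A toy sketch of the linear covariance mechanics follows: generic Kalman-style propagation and measurement update, not Orion's actual dynamics, measurement models, or error budgets.

import numpy as np

def propagate(P, F, Q):
    # Propagate the error covariance through linear(ized) dynamics:
    # P' = F P F^T + Q, with Q modeling process noise (venting, SRP, ...)
    return F @ P @ F.T + Q

def measurement_update(P, H, R):
    # Kalman covariance update for a measurement with Jacobian H and
    # measurement noise covariance R (e.g. an optical bearing angle)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return (np.eye(P.shape[0]) - K @ H) @ P

# Toy 2-state example (position, velocity) over one 60 s step
dt = 60.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([1e-4, 1e-8])            # unmodeled accelerations
H = np.array([[1.0, 0.0]])           # position-like measurement
R = np.array([[25.0]])

P = np.diag([1e4, 1.0])
P = measurement_update(propagate(P, F, Q), H, R)
print("post-update position sigma:", np.sqrt(P[0, 0]))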
A method for cone fitting based on certain sampling strategy in CMM metrology
NASA Astrophysics Data System (ADS)
Zhang, Li; Guo, Chaopeng
2018-04-01
A method of cone fitting in engineering is explored and implemented to overcome the shortcomings of the current fitting method, in which the calculation of the initial geometric parameters is imprecise and causes poor accuracy in surface fitting. First, a geometric distance function of the cone is constructed; then a certain sampling strategy is defined to calculate the initial geometric parameters; finally, a nonlinear least-squares method is used to fit the surface. An experiment was designed to verify the accuracy of the method. The experimental data show that the proposed method obtains the initial geometric parameters simply and efficiently, fits the surface precisely, and provides a new, accurate approach to cone fitting in coordinate measurement.
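A minimal sketch of the fitting step follows, assuming the common apex/axis/half-angle cone parameterization and the orthogonal point-to-slant distance as the residual; the paper's sampling strategy for computing the initial parameters is replaced here by a rough hand-set guess.

import numpy as np
from scipy.optimize import least_squares

def cone_residuals(params, pts):
    # Orthogonal distances from points to a cone with apex a, unit axis d
    # (spherical angles phi, psi), and half-angle theta
    ax, ay, az, phi, psi, theta = params
    a = np.array([ax, ay, az])
    d = np.array([np.sin(phi) * np.cos(psi),
                  np.sin(phi) * np.sin(psi),
                  np.cos(phi)])
    v = pts - a
    h = v @ d                                         # height along the axis
    rho = np.linalg.norm(v - np.outer(h, d), axis=1)  # radial distance
    return rho * np.cos(theta) - h * np.sin(theta)    # distance to the slant

# Synthetic CMM-like sample of a cone (apex at origin, axis +z, 20 degrees)
rng = np.random.default_rng(5)
h = rng.uniform(5, 30, 400)
ang = rng.uniform(0, 2 * np.pi, 400)
r = h * np.tan(np.radians(20))
pts = np.column_stack([r * np.cos(ang), r * np.sin(ang), h])
pts += rng.normal(0, 0.01, pts.shape)                 # measurement noise

x0 = [0.1, -0.1, 0.2, 0.05, 0.0, np.radians(15)]      # rough initial guess
fit = least_squares(cone_residuals, x0, args=(pts,))
print("fitted half-angle (deg):", np.degrees(fit.x[5]))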
SIM Interferometer Testbed (SCDU) Status and Recent Results
NASA Technical Reports Server (NTRS)
Nemati, Bijan; An, Xin; Goullioud, Renaud; Shao, Michael; Shen, Tsae-Pyng; Wehmeier, Udo J.; Weilert, Mark A.; Wang, Xu; Werne, Thomas A.; Wu, Janet P.;
2010-01-01
SIM Lite is a space-borne stellar interferometer capable of searching for Earth-size planets in the habitable zones of nearby stars. This search will require the measurement of astrometric angles with sub-microarcsecond accuracy and of optical path-length differences to 1 picometer by the end of the five-year mission. One of the most significant technical risks in achieving this level of accuracy comes from systematic errors that arise from spectral differences between candidate stars and nearby reference stars. The Spectral Calibration Development Unit (SCDU), in operation since 2007, has been used to explore this effect and to demonstrate performance meeting SIM goals. In this paper we present the status of this testbed and recent results.