Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan
2013-01-01
Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate- and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and players' achievement motivation characteristics. Thirteen expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test, with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine whether this personality characteristic provides insight into how players perform under moderate- and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared with the non-expert players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared with performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue-by-expertise or fatigue-by-gender interactions were found. Fatigue effects were also equivalent regardless of players' achievement goal indicators.
Future research is required to explore the effects of fatigue on performance in tennis using ecologically valid designs that more closely mimic the demands of match play. Key Points: Groundstroke accuracy under moderate-intensity fatigue is equivalent to performance at rest. Groundstroke accuracy declines significantly in both expert (40.3% decline) and non-expert (49.6% decline) tennis players following high-intensity fatigue. Expert players are more consistent, hitting more accurate shots and fewer out shots across all fatigue intensities. The effects of fatigue on groundstroke accuracy are the same regardless of gender and players' achievement goal indicators. PMID:24149809
ERIC Educational Resources Information Center
Farrokhi, Farahman; Sattarpour, Simin
2012-01-01
The present article reports the findings of a study that explored (1) whether direct written corrective feedback (CF) can help high-proficiency L2 learners, who have already achieved a rather high level of accuracy in English, improve in the accurate use of two functions of English articles (the use of "a" for first mention and…
NASA Technical Reports Server (NTRS)
Gramling, C. J.; Long, A. C.; Lee, T.; Ottenstein, N. A.; Samii, M. V.
1991-01-01
A Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) is currently being developed by NASA to provide a high-accuracy autonomous navigation capability for users of TDRSS and its successor, the Advanced TDRSS (ATDRSS). The fully autonomous user onboard navigation system will support orbit determination, time determination, and frequency determination, based on observation of a continuously available, unscheduled navigation beacon signal. A TONS experiment will be performed in conjunction with the Explorer Platform (EP) Extreme Ultraviolet Explorer (EUVE) mission to flight-qualify TONS Block 1. An overview is presented of TONS and a preliminary analysis of the navigation accuracy anticipated for the TONS experiment. Descriptions of the TONS experiment and the associated navigation objectives, as well as a description of the onboard navigation algorithms, are provided. The accuracy of the selected algorithms is evaluated based on the processing of realistic simulated TDRSS one-way forward-link Doppler measurements. The analysis process is discussed and the associated navigation accuracy results are presented.
Uskul, Ayse K; Paulmann, Silke; Weick, Mario
2016-02-01
Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. Although a growing body of research has explored how emotions are processed from speech in general, little is known about how psychosocial factors such as social power can shape the perception of vocal emotional attributes. Thus, the present studies explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results, for the first time, suggest that individuals experiencing high or low power perceive emotional tone of voice differently.
ERIC Educational Resources Information Center
Li, Zhi; Feng, Hui-Hsien; Saricaoglu, Aysel
2017-01-01
This classroom-based study employs a mixed-methods approach to exploring both short-term and long-term effects of Criterion feedback on ESL students' development of grammatical accuracy. The results of multilevel growth modeling indicate that Criterion feedback helps students in both intermediate-high and advanced-low levels reduce errors in eight…
High Accuracy Temperature Measurements Using RTDs with Current Loop Conditioning
NASA Technical Reports Server (NTRS)
Hill, Gerald M.
1997-01-01
To measure temperatures with a greater degree of accuracy than is possible with thermocouples, RTDs (Resistive Temperature Detectors) are typically used. Calibration standards use specialized high-precision RTD probes with accuracies approaching 0.001 F. These are extremely delicate devices, and far too costly to be used in test facility instrumentation. Less costly sensors which are designed for aeronautical wind tunnel testing are available and can be readily adapted to probes, rakes, and test rigs. With proper signal conditioning of the sensor, temperature accuracies of 0.1 F are obtainable. For reasons that will be explored in this paper, the Anderson current loop is the preferred method used for signal conditioning. This scheme has been used in NASA Lewis Research Center's 9 x 15 Low Speed Wind Tunnel, and is detailed here.
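The resistance-to-temperature conversion behind any RTD readout can be sketched as follows. This is an illustrative example, not the paper's own implementation: it assumes a standard 100-ohm platinum RTD with IEC 60751 coefficients, whereas the abstract describes custom aeronautical sensors.

```python
# Illustrative sketch (assumed Pt100 sensor, IEC 60751 coefficients):
# inverting the Callendar-Van Dusen equation R(T) = R0*(1 + A*T + B*T^2)
# for temperatures at or above 0 C.
A = 3.9083e-3
B = -5.775e-7
R0 = 100.0  # ohms at 0 C

def rtd_temperature_c(r):
    """Return temperature in C for a measured resistance r (T >= 0 C)."""
    # Quadratic formula; the physical root uses the '+' branch.
    return (-A + (A**2 - 4 * B * (1 - r / R0)) ** 0.5) / (2 * B)

# In an Anderson current loop, r itself is recovered as the ratio of the
# sensor's voltage drop to a reference resistor's drop in the same loop,
# which cancels lead-wire resistance and excitation-current drift.
print(round(rtd_temperature_c(100.0), 3))     # 0.0
print(round(rtd_temperature_c(138.5055), 1))  # 100.0
```

The current-loop step is described here only in the comment; the code shows the standard conversion that any such signal chain ends with.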
Social Understanding of High-Ability Children in Middle and Late Childhood
ERIC Educational Resources Information Center
Boor-Klip, Henrike J.; Cillessen, Antonius H. N.; van Hell, Janet G.
2014-01-01
Despite its importance in social development, social understanding has hardly been studied in high-ability children. This study explores differences in social understanding between children in high-ability and regular classrooms, specifically theory of mind (ToM) and perception accuracy, as well as associations between individual characteristics…
Using the MMPI 168 with Medical Inpatients
ERIC Educational Resources Information Center
Erickson, Richard C.; Freeman, Charles
1976-01-01
Explores the potential utility of the MMPI 168 with two inpatient medical populations. Correlations and clinically relevant comparisons suggest that the MMPI 168 predicted the standard MMPI with a high degree of accuracy. (Editor/RK)
Performance of the NASA Digitizing Core-Loss Instrumentation
NASA Technical Reports Server (NTRS)
Schwarze, Gene E. (Technical Monitor); Niedra, Janis M.
2003-01-01
The standard method of magnetic core loss measurement was implemented on a high-frequency digitizing oscilloscope in order to explore the limits to accuracy when characterizing high-Q cores at frequencies up to 1 MHz. This method computes core loss from the cycle mean of the product of the exciting current in a primary winding and the induced voltage in a separate flux-sensing winding. It is pointed out that even 20 percent accuracy for a core material with a Q of 100 requires a phase angle accuracy of 0.1 degree between the voltage and current measurements. Experiment shows that at 1 MHz, even high-quality, high-frequency current sensing transformers can introduce phase errors of a degree or more. Because the Q of some quasilinear core materials can exceed 300 at frequencies below 100 kHz, phase angle errors can be a problem even at 50 kHz. Hence great care is necessary with current sensing and ground loops when measuring high-Q cores. Best high-frequency current sensing accuracy was obtained from a fabricated 0.1-ohm coaxial resistor, differentially sensed. Sample high-frequency core loss data taken with the setup for a permeability-14 MPP core is presented.
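The phase sensitivity quoted above can be checked numerically. The sketch below (values chosen for illustration, not taken from the paper's data) computes core loss as the cycle mean of v(t)*i(t) and shows how a 0.1-degree channel mismatch distorts the result for a Q of 100.

```python
# Illustrative check of the digitizing core-loss method: loss is the cycle
# mean of v(t)*i(t), and a small phase mismatch between the voltage and
# current channels badly distorts the result for a high-Q core.
import math

N = 10_000            # samples per cycle (arbitrary, for illustration)
Q = 100.0             # core quality factor
phase_err_deg = 0.1   # assumed channel phase mismatch

# For quality factor Q the current lags the voltage by 90 deg minus the
# loss angle delta, where tan(delta) = 1/Q.
delta = math.atan(1.0 / Q)
t = [2 * math.pi * k / N for k in range(N)]
v = [math.cos(x) for x in t]
i_true = [math.cos(x - (math.pi / 2 - delta)) for x in t]
# Mismatch makes the current appear to lag slightly less than it does.
i_meas = [math.cos(x - (math.pi / 2 - delta) + math.radians(phase_err_deg))
          for x in t]

def cycle_mean_power(volts, amps):
    return sum(p * q for p, q in zip(volts, amps)) / N

p_true = cycle_mean_power(v, i_true)
p_meas = cycle_mean_power(v, i_meas)
err_pct = 100 * (p_meas - p_true) / p_true
print(f"apparent loss error: {err_pct:+.1f}%")  # roughly +17%
```

The ~17% apparent error from a 0.1-degree mismatch is consistent with the abstract's statement that 20 percent loss accuracy at Q = 100 demands 0.1-degree phase accuracy.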
Reference-based phasing using the Haplotype Reference Consortium panel.
Loh, Po-Ru; Danecek, Petr; Palamara, Pier Francesco; Fuchsberger, Christian; A Reshef, Yakir; K Finucane, Hilary; Schoenherr, Sebastian; Forer, Lukas; McCarthy, Shane; Abecasis, Goncalo R; Durbin, Richard; L Price, Alkes
2016-11-01
Haplotype phasing is a fundamental problem in medical and population genetics. Phasing is generally performed via statistical phasing in a genotyped cohort, an approach that can yield high accuracy in very large cohorts but attains lower accuracy in smaller cohorts. Here we instead explore the paradigm of reference-based phasing. We introduce a new phasing algorithm, Eagle2, that attains high accuracy across a broad range of cohort sizes by efficiently leveraging information from large external reference panels (such as the Haplotype Reference Consortium; HRC) using a new data structure based on the positional Burrows-Wheeler transform. We demonstrate that Eagle2 attains a ∼20× speedup and ∼10% increase in accuracy compared to reference-based phasing using SHAPEIT2. On European-ancestry samples, Eagle2 with the HRC panel achieves >2× the accuracy of 1000 Genomes-based phasing. Eagle2 is open source and freely available for HRC-based phasing via the Sanger Imputation Service and the Michigan Imputation Server.
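The accuracy comparisons above are conventionally reported as switch error rates. The following is a generic sketch of that metric, not code from Eagle2 (whose internals are not given in the abstract): at each consecutive pair of heterozygous sites, it counts whether the inferred haplotype switches parental origin relative to the truth.

```python
# Generic sketch of the switch error rate used to score phasing accuracy.
# Haplotypes are given as strings of 0/1 alleles for one chromosome copy,
# restricted (by assumption) to heterozygous sites.
def switch_error_rate(true_hap, inferred_hap):
    switches = 0
    prev_match = None  # did the inferred allele match truth at the last site?
    for t, g in zip(true_hap, inferred_hap):
        match = (t == g)
        # A flip in match/mismatch status between adjacent sites is a switch.
        if prev_match is not None and match != prev_match:
            switches += 1
        prev_match = match
    return switches / (len(true_hap) - 1)

print(switch_error_rate("00110", "00101"))  # one switch in four pairs -> 0.25
print(switch_error_rate("0101", "0101"))    # perfect phasing -> 0.0
```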
Sentence Processing in High Proficient Kannada--English Bilinguals: A Reaction Time Study
ERIC Educational Resources Information Center
Ravi, Sunil Kumar; Chengappa, Shyamala K.
2015-01-01
The present study aimed at exploring the semantic and syntactic processing differences between native and second languages in 20 early high proficient Kannada--English bilingual adults through accuracy and reaction time (RT) measurements. Subjects participated in a semantic judgement task (using 50 semantically correct and 50 semantically…
NASA Astrophysics Data System (ADS)
Guo, Pengbin; Sun, Jian; Hu, Shuling; Xue, Ju
2018-02-01
Pulsar navigation is a promising navigation method for high-altitude orbit space missions and deep space exploration. At present, an important factor restricting the development of pulsar navigation is that navigation accuracy is limited by the slow update rate of the measurements. To improve the accuracy of pulsar navigation, an asynchronous observation model that increases the measurement update rate is proposed on the basis of a satellite constellation, which has broad prospects for development because of its visibility and reliability. The simulation results show that the asynchronous observation model improves positioning accuracy by 31.48% and velocity accuracy by 24.75% relative to the synchronous observation model. With the new Doppler effect compensation method proposed in this paper for the asynchronous observation model, positioning accuracy is improved by 32.27% and velocity accuracy by 34.07% relative to the traditional method. The simulations also show that neglecting the clock error results in filter divergence.
Stroeymeyt, Nathalie; Giurfa, Martin; Franks, Nigel R
2010-09-29
Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed, and vice versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of individuals' previous experience and/or knowledge on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice, and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. Colonies allowed to explore a high-quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than naïve colonies did. This resulted in increased speed in single-choice emigrations and higher colony cohesion in binary-choice emigrations. Additionally, colonies allowed to explore both high- and low-quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions, and emigrated faster than naïve colonies. These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to improve simultaneously all aspects of the decision-making process--i.e. speed, accuracy and cohesion--and partly circumvent the speed-accuracy trade-off classically observed during emigrations.
These findings should be taken into account in future studies on speed-accuracy trade-offs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallstrom, Jason O.; Ni, Zheng Richard
This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand-control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. A bridge (or "gateway") to direct digital control services was also explored in early form. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor's accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5% and acquisition accuracy within 1.5% across three orders of magnitude of variation in generated current. Considering production, installation, and maintenance costs, the technology presents a foundation for achieving whole-building CO2 sensing at a price point below $0.066/sq-ft, meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand-control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States.
Development and experiment of a broadband seismograph for deep exploration
NASA Astrophysics Data System (ADS)
Zhang, H.; Lin, J.; Yang, H.; Zheng, F.; Zhang, L.; Chen, Z.
2012-12-01
Seismic surveying is the most important technique in deep exploration and oil and gas exploration. To obtain high-quality information on deep strata, deep exploration requires large source charges, large group intervals, and low-frequency detectors, with survey lines typically tens or even hundreds of kilometers long. Conventional seismic exploration instruments generally have no on-site storage, or only limited storage capacity, and the shackles of transmission cables make such systems bulky and difficult to handle, construction inefficient and labor costs high, and restrict acquisition capability and accuracy. This article describes a high-performance broadband seismograph for deep exploration. To ensure data acquisition quality, 24-bit ADCs are used and a low-noise analog front-end circuit was carefully designed, giving an instrument noise level below 1.5 μV and a dynamic range over 120 dB. A dual-frequency GPS OEM board is integrated with the acquisition station, so the station can perform static self-positioning with centimeter-level horizontal accuracy and provide high-accuracy position data for subsequent seismic data processing. The precise GPS timing system is combined with a digital clock based on a high-precision oven-controlled crystal oscillator (OCXO), achieving a clock synchronization accuracy of 0.01 ms and an OCXO frequency stability of 3e-8, which solves the problems of synchronously triggering the data acquisition units of multiple recording units and of real-time calibration of system clock inaccuracy. The instrument uses a high-capacity (larger than 16 GB per station), high-reliability seismic data storage solution that enables continuous recording for more than 138 hours at a sampling rate of 2000 sps.
Using low-power design techniques for power management in both hardware and software, the average power consumption is 2 watts; with a high-capacity lithium battery inside, the seismograph can work for 80 hours continuously. With an internal 24-bit DAC and FPGA control logic, a series of self-test items is available, including noise level, crosstalk between channels, common-mode rejection ratio, harmonic distortion, detector impedance, impulse response, and gain calibration. Because the instrument integrates a WiFi module, instrument status and data acquisition quality can be monitored in real time via a hand-held terminal. To verify the reliability and validity of the instrument, a deep seismic exploration experiment using the instruments described in this article was carried out in a test area: 32 broadband seismographs were placed along a 120 km-long survey line (one at intervals of about 4 km) to record source signals from a few hundred kilometers away. Experimental results show that the performance of the instrument's analog acquisition channels reaches the international advanced level. Moreover, the cable-free design frees the instrument from bulky cables and fulfills the goal of lightening seismic instruments, which improves working efficiency, saves surveying cost, and helps work in complex geographical and geological environments.
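The quoted noise floor and dynamic range are mutually consistent, as a quick back-of-envelope check shows. The full-scale voltage below is an assumed value (typical for 24-bit seismic acquisition front ends), not a figure from the abstract.

```python
# Back-of-envelope check: dynamic range in dB implied by a full-scale
# input and the quoted 1.5 uV noise floor. The 2.5 V full scale is an
# assumption, not stated in the text.
import math

v_fullscale = 2.5   # volts, assumed
v_noise = 1.5e-6    # volts, quoted instrument noise level

dr_db = 20 * math.log10(v_fullscale / v_noise)
print(round(dr_db))  # 124, consistent with the quoted ">120 dB"
```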
Key Skills for Science Learning: The Importance of Text Cohesion and Reading Ability
ERIC Educational Resources Information Center
Hall, Sophie Susannah; Maltby, John; Filik, Ruth; Paterson, Kevin B.
2016-01-01
To explore the importance of text cohesion, we conducted two experiments. We measured online (reading times) and offline (comprehension accuracy) processes for high- and low-cohesion texts. In Study 1 (n = 60), we manipulated referential cohesion using noun repetition (high cohesion) and synonymy (low cohesion). Students showed enhanced…
Exploring a Three-Level Model of Calibration Accuracy
ERIC Educational Resources Information Center
Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.
2014-01-01
We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…
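The five statistics named above can all be computed from one 2x2 table of confidence judgments against actual performance. The sketch below uses invented counts purely for illustration; the formulas are the standard ones, not necessarily the exact variants used in the study.

```python
# Illustrative sketch (counts are invented): the five calibration-accuracy
# statistics from a 2x2 table of judgments (predicted correct/incorrect)
# versus outcomes (actually correct/incorrect).
from statistics import NormalDist

a, b, c, d = 40, 10, 15, 35   # hits, false alarms, misses, correct rejections
n = a + b + c + d

g_index = ((a + d) - (b + c)) / n            # agreement minus disagreement
gamma = (a * d - b * c) / (a * d + b * c)    # Goodman-Kruskal gamma (Yule's Q)
sensitivity = a / (a + c)
specificity = d / (b + d)

# d' from signal detection theory: z(hit rate) minus z(false-alarm rate).
z = NormalDist().inv_cdf
d_prime = z(a / (a + c)) - z(b / (b + d))

print(g_index, round(gamma, 3), round(sensitivity, 3),
      round(specificity, 3), round(d_prime, 2))
```

Comparing the statistics on the same table makes their differences concrete: the G index weights all four cells equally, gamma depends only on the cross-products, and d' transforms the marginal rates through the normal quantile function.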
Development of a three-dimensional high-order strand-grids approach
NASA Astrophysics Data System (ADS)
Tong, Oisin
Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds-number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and with the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite-volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third-order accuracy for low- and high-Reynolds-number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more wall time than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level-set domain through min/max flow. This approach is combined with a curvature-based strand-shortening strategy in order to qualitatively improve strand-grid mesh quality.
FakhreYaseri, Hashem; FakhreYaseri, Ali Mohammad; Baradaran Moghaddam, Ali; Soltani Arabshhi, Seyed Kamran
2015-01-01
Manometry is the gold-standard diagnostic test for motility disorders of the esophagus. The development of high-resolution manometry catheters and software that displays manometry recordings as color-coded pressure plots has changed the diagnostic assessment of esophageal disease. The diagnostic value of particular esophageal clinical symptoms among patients suspected of esophageal motor disorders (EMDs) is still unknown. The aim of this study was to explore the sensitivity, specificity, and predictive accuracy of presenting esophageal symptoms in discriminating between abnormal and normal esophageal manometry findings. We conducted a cross-sectional study of 623 patients aged 11-80 years. Data were collected from clinical examinations as well as patient questionnaires. Sensitivity, specificity, and accuracy were calculated after high-resolution manometry plots were reviewed according to the most recent Chicago Criteria. The clinical symptoms were not sensitive enough to discriminate between EMDs. Nevertheless, dysphagia, noncardiac chest pain, hoarseness, vomiting, and weight loss had high specificity and high accuracy in distinguishing EMDs from normal findings. Regurgitation and heartburn did not have good accuracy for the diagnosis of EMDs. Clinical symptoms are not reliable enough to discriminate between EMDs. Clinical symptoms can, however, discriminate between normal findings and EMDs, especially achalasia.
JASMINE design and method of data reduction
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Niwa, Yoshito
2008-07-01
Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with 10 μarcsec accuracy. We use a z-band CCD to avoid dust absorption, and observe an area of about 10 × 20 degrees around the Galactic bulge region. Because the stellar density is very high, the individual fields of view can be combined with high accuracy. With 5 years of observation, we will construct a 10 μarcsec accurate map. In this poster, I show the observation strategy, the design of the JASMINE hardware, the reduction scheme, and the error budget. We have also constructed simulation software named the JASMINE Simulator, and we show the simulation results and the design of the software.
Analysis of Movement, Orientation and Rotation-Based Sensing for Phone Placement Recognition
Durmaz Incel, Ozlem
2015-01-01
Phone placement, i.e., where the phone is carried/stored, is an important source of information for context-aware applications. Extracting information from the integrated smart phone sensors, such as motion, light and proximity, is a common technique for phone placement detection. In this paper, the efficiency of an accelerometer-only solution is explored, and it is investigated whether the phone position can be detected with high accuracy by analyzing the movement, orientation and rotation changes. The impact of these changes on the performance is analyzed individually and both in combination to explore which features are more efficient, whether they should be fused and, if yes, how they should be fused. Using three different datasets, collected from 35 people from eight different positions, the performance of different classification algorithms is explored. It is shown that while utilizing only motion information can achieve accuracies around 70%, this ratio increases up to 85% by utilizing information also from orientation and rotation changes. The performance of an accelerometer-only solution is compared to solutions where linear acceleration, gyroscope and magnetic field sensors are used, and it is shown that the accelerometer-only solution performs as well as utilizing other sensing information. Hence, it is not necessary to use extra sensing information where battery power consumption may increase. Additionally, I explore the impact of the performed activities on position recognition and show that the accelerometer-only solution can achieve 80% recognition accuracy with stationary activities where movement data are very limited. Finally, other phone placement problems, such as in-pocket and on-body detections, are also investigated, and higher accuracies, ranging from 88% to 93%, are reported, with an accelerometer-only solution. PMID:26445046
A deep learning and novelty detection framework for rapid phenotyping in high-content screening
Sommer, Christoph; Hoefler, Rudolf; Samwer, Matthias; Gerlich, Daniel W.
2017-01-01
Supervised machine learning is a powerful and widely used method for analyzing high-content screening data. Despite its accuracy, efficiency, and versatility, supervised machine learning has drawbacks, most notably its dependence on a priori knowledge of expected phenotypes and time-consuming classifier training. We provide a solution to these limitations with CellCognition Explorer, a generic novelty detection and deep learning framework. Application to several large-scale screening data sets on nuclear and mitotic cell morphologies demonstrates that CellCognition Explorer enables discovery of rare phenotypes without user training, which has broad implications for improved assay development in high-content screening. PMID:28954863
Designing Delta-DOR acquisition strategies to determine highly elliptical earth orbits
NASA Technical Reports Server (NTRS)
Frauenholz, R. B.
1986-01-01
Delta-DOR acquisition strategies are designed for use in determining highly elliptical earth orbits. The requirements for a possible flight demonstration are evaluated for the Charged Composition Explorer spacecraft of the Active Magnetospheric Particle Tracer Explorers. The best-performing strategy uses data spanning the view periods of two orthogonal baselines near the same orbit periapse. The rapidly changing viewing geometry yields both angular position and velocity information, but each observation may require a different reference quasar. The Delta-DOR data noise is highly dependent on acquisition geometry, varying several orders of magnitude across the baseline view periods. Strategies are selected to minimize the measurement noise predicted by a theoretical model. Although the CCE transponder is limited by S-band and a small bandwidth, the addition of Delta-DOR to coherent Doppler and range improves the one-sigma apogee position accuracy by more than an order of magnitude. Additional Delta-DOR accuracy improvements possible using dual-frequency (S/X) calibration, increased spanned bandwidth, and water-vapor radiometry are presented for comparison. With these benefits, the residual Delta-DOR data noise is primarily due to quasar position uncertainties.
Global Lunar Topography from the Deep Space Gateway for Science and Exploration
NASA Astrophysics Data System (ADS)
Archinal, B.; Gaddis, L.; Kirk, R.; Edmundson, K.; Stone, T.; Portree, D.; Keszthelyi, L.
2018-02-01
The Deep Space Gateway, in low lunar orbit, could be used to achieve a long-standing goal of lunar science: collecting stereo images over two months to build a complete, uniform, high-resolution global topographic model of the Moon with known accuracy.
Erby, Lori A H; Roter, Debra L; Biesecker, Barbara B
2011-11-01
To explore the accuracy and consistency of standardized patient (SP) performance in the context of routine genetic counseling, focusing on elements beyond scripted case items, including general communication style and affective demeanor. One hundred seventy-seven genetic counselors were randomly assigned to counsel one of six SPs. Videotapes and transcripts of the sessions were analyzed to assess consistency of performance across four dimensions. Accuracy of script item presentation was high: 91% and 89% in the prenatal and cancer cases, respectively. However, there were statistically significant differences among SPs in the accuracy of presentation, general communication style, and some aspects of affective presentation. All SPs were rated as presenting with similarly high levels of realism. SP performance over time was generally consistent, with some small but statistically significant differences. These findings demonstrate that well-trained SPs can not only perform the factual elements of a case with high degrees of accuracy and realism, but can also maintain sufficient levels of uniformity in general communication style and affective demeanor over time to support their use in even the demanding context of genetic counseling. Results indicate a need for an additional focus in training on consistency between different SPs. Copyright © 2010. Published by Elsevier Ireland Ltd.
Perrin, Maxine; Robillard, Manon; Roy-Charland, Annie
2017-12-01
This study examined eye movements during a visual search task as well as cognitive abilities within three age groups. The aim was to explore scanning patterns across symbol grids and to better understand the impact of symbol location in AAC displays on speed and accuracy of symbol selection. For the study, 60 students were asked to locate a series of symbols on 16 cell grids. The EyeLink 1000 was used to measure eye movements, accuracy, and response time. Accuracy was high across all cells. Participants had faster response times, longer fixations, and more frequent fixations on symbols located in the middle of the grid. Group comparisons revealed significant differences for accuracy and reaction times. The Leiter-R was used to evaluate cognitive abilities. Sustained attention and cognitive flexibility scores predicted the participants' reaction time and accuracy in symbol selection. Findings suggest that symbol location within AAC devices and individuals' cognitive abilities influence the speed and accuracy of retrieving symbols.
Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry
NASA Astrophysics Data System (ADS)
Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki
2015-08-01
In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and the VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully recover the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ~1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or the inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that the BIC can be used to discriminate between different dynamical models of the Galaxy.
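The core of the approach described above, estimating dynamical parameters from noisy astrometric samples with Markov chain Monte Carlo, can be illustrated with a minimal sketch. Everything here is hypothetical (a flat rotation curve with a single parameter Θ0, invented noise levels, and a plain Metropolis sampler), not the authors' actual Galactic model:

```python
import math, random

random.seed(42)

# Hypothetical toy setup: a flat rotation curve, so every simulated maser
# rotates at the same circular speed regardless of galactocentric radius.
THETA0_TRUE = 240.0   # km/s, assumed circular rotation speed (illustrative)
SIGMA_V = 7.0         # km/s, assumed observational noise

radii = [4.0 + 0.5 * i for i in range(20)]                       # kpc, decorative here
obs = [THETA0_TRUE + random.gauss(0.0, SIGMA_V) for _ in radii]  # noisy rotation speeds

def log_like(theta0):
    # Gaussian likelihood of the observed speeds given the model parameter
    return sum(-0.5 * ((v - theta0) / SIGMA_V) ** 2 for v in obs)

# Plain Metropolis sampler over the single parameter Theta0
theta, samples = 200.0, []
ll = log_like(theta)
for step in range(20000):
    prop = theta + random.gauss(0.0, 2.0)     # random-walk proposal
    ll_prop = log_like(prop)
    if math.log(random.random()) < ll_prop - ll:
        theta, ll = prop, ll_prop             # accept
    if step > 2000:                           # discard burn-in
        samples.append(theta)

mean = sum(samples) / len(samples)
sd = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"Theta0 = {mean:.1f} +/- {sd:.1f} km/s")
```

With more simulated sources the posterior tightens roughly as 1/sqrt(N), which is the mechanism behind the accuracy scaling quoted in the abstract.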
Accuracy of Binary Black Hole waveforms for Advanced LIGO searches
NASA Astrophysics Data System (ADS)
Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela
2015-04-01
Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with the LIGO-Virgo observatories. Matched-filtering-based detection searches aimed at binaries of black holes will use aligned-spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in the literature, e.g., the effective-one-body, phenomenological, and traditional post-Newtonian ones. While numerical relativity (NR) simulations provide the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large-scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space which have only been explored cursorily in the past: the high mass-ratio regime and the comparable mass-ratio, high-spin regime. Using the SpEC code, six q = 7 simulations with aligned spins lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins, were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families, and assess their viability for Advanced LIGO searches.
Determining the refractive index of particles using glare-point imaging technique
NASA Astrophysics Data System (ADS)
Meng, Rui; Ge, Baozhen; Lu, Qieni; Yu, Xiaoxue
2018-04-01
A method of measuring the refractive index of a particle is presented from a glare-point image. The space of a doublet image of a particle can be determined with high accuracy by using auto-correlation and Gaussian interpolation, and then the refractive index is obtained from glare-point separation, and a factor that may influence the accuracy of glare-point separation is explored. Experiments are carried out for three different kinds of particles, including polystyrene latex particles, glass beads, and water droplets, whose measuring accuracy is improved by the data fitting method. The research results show that the method presented in this paper is feasible and beneficial to applications such as spray and atmospheric composition measurements.
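The sub-pixel peak-location step described above (auto-correlation followed by Gaussian interpolation) can be sketched on a synthetic 1-D intensity profile; the glare-point widths, separation, and noise level below are invented for illustration:

```python
import numpy as np

# Hypothetical 1-D intensity profile with two glare points ~23.4 px apart
x = np.arange(256)
sep_true = 23.4
profile = (np.exp(-0.5 * ((x - 100.0) / 2.0) ** 2)
           + np.exp(-0.5 * ((x - 100.0 - sep_true) / 2.0) ** 2))
profile += 0.01 * np.random.default_rng(0).normal(size=x.size)  # sensor noise

# Auto-correlation: a side peak appears at the glare-point separation
ac = np.correlate(profile, profile, mode="full")[profile.size - 1:]
k = 5 + int(np.argmax(ac[5:]))     # skip the dominant zero-lag peak

# Three-point Gaussian (log-parabolic) interpolation for a sub-pixel lag
la, lb, lc = np.log(ac[k - 1]), np.log(ac[k]), np.log(ac[k + 1])
delta = 0.5 * (la - lc) / (la - 2.0 * lb + lc)
sep_est = k + delta
print(f"estimated separation: {sep_est:.2f} px (true {sep_true} px)")
```

Because the side peak of the auto-correlation is itself near-Gaussian, fitting a parabola to the log of three samples around the discrete maximum recovers the separation to a small fraction of a pixel, which is what makes the refractive-index retrieval from glare-point spacing feasible.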
Exploration of the Components of Children's Reading Comprehension Using Rauding Theory.
ERIC Educational Resources Information Center
Rupley, William H.; And Others
A study explored an application of rauding theory to the developmental components that contribute to elementary-age children's reading comprehension. The relationships among cognitive power, auditory accuracy level, pronunciation (word recognition) level, rauding (comprehension) accuracy level, rauding rate (reading rate) level, and rauding…
ERIC Educational Resources Information Center
Grainger, Catherine; Williams, David M.; Lind, Sophie E.
2016-01-01
This study explored whether adults and adolescents with autism spectrum disorder (ASD) demonstrate difficulties making metacognitive judgments, specifically judgments of learning. Across two experiments, the study examined whether individuals with ASD could accurately judge whether they had learnt a piece of information (in this case word pairs).…
Exploring Proficiency-Based vs. Performance-Based Items with Elicited Imitation Assessment
ERIC Educational Resources Information Center
Cox, Troy L.; Bown, Jennifer; Burdis, Jacob
2015-01-01
This study investigates the effect of proficiency- vs. performance-based elicited imitation (EI) assessment. EI requires test-takers to repeat sentences in the target language. The accuracy at which test-takers are able to repeat sentences highly correlates with test-takers' language proficiency. However, in EI, the factors that render an item…
Do recommender systems benefit users? a modeling approach
NASA Astrophysics Data System (ADS)
Yeung, Chi Ho
2016-04-01
Recommender systems are present in many web applications to guide purchase choices. They increase sales and benefit sellers, but whether they benefit customers by providing relevant products remains less explored. While in many cases the recommended products are relevant to users, in other cases customers may be tempted to purchase the products only because they are recommended. Here we introduce a model to examine the benefit of recommender systems for users, and find that recommendations from the system can be equivalent to random draws if one always follows the recommendations and seldom purchases according to his or her own preference. Nevertheless, with sufficient information about user preferences, recommendations become accurate and an abrupt transition to this accurate regime is observed for some of the studied algorithms. On the other hand, we find that high estimated accuracy indicated by common accuracy metrics is not necessarily equivalent to high real accuracy in matching users with products. This disagreement between estimated and real accuracy serves as an alarm for operators and researchers who evaluate recommender systems merely with accuracy metrics. We tested our model with a real dataset and observed similar behaviors. Finally, a recommendation approach with improved accuracy is suggested. These results imply that recommender systems can benefit users, but the more frequently a user purchases the recommended products, the less relevant the recommended products are in matching user taste.
Overconfidence across the psychosis continuum: a calibration approach.
Balzan, Ryan P; Woodward, Todd S; Delfabbro, Paul; Moritz, Steffen
2016-11-01
An 'overconfidence in errors' bias has been consistently observed in people with schizophrenia relative to healthy controls; however, the bias is seldom found to be associated with delusional ideation. Using a more precise confidence-accuracy calibration measure of overconfidence, the present study aimed to explore whether the overconfidence bias is greater in people with higher delusional ideation. A sample of 25 participants with schizophrenia and 50 non-clinical controls (25 high- and 25 low-delusion-prone) completed 30 difficult trivia questions (accuracy <75%); 15 'half-scale' items required participants to indicate their level of confidence in their accuracy, and the remaining 'confidence-range' items asked participants to provide lower/upper bounds within which they were 80% confident the true answer lay. There was a trend towards higher overconfidence for half-scale items in the schizophrenia and high-delusion-prone groups, which reached statistical significance for confidence-range items. However, accuracy was particularly low in the two delusional groups, and a significant negative correlation between clinical delusional scores and overconfidence was observed for half-scale items within the schizophrenia group. Evidence in support of an association between overconfidence and delusional ideation was therefore mixed. Inflated confidence-accuracy miscalibration in the two delusional groups may be better explained by greater unawareness of their underperformance than by genuinely inflated overconfidence in errors.
Fuzzy membership functions for analysis of high-resolution CT images of diffuse pulmonary diseases.
Almeida, Eliana; Rangayyan, Rangaraj M; Azevedo-Marques, Paulo M
2015-08-01
We propose the use of fuzzy membership functions to analyze images of diffuse pulmonary diseases (DPDs) based on fractal and texture features. The features were extracted from preprocessed regions of interest (ROIs) selected from high-resolution computed tomography images. The ROIs represent five different patterns of DPDs and normal lung tissue. A Gaussian mixture model (GMM) was constructed for each feature, with six Gaussians modeling the six patterns. Feature selection was performed and the GMMs of the five significant features were used. From the GMMs, fuzzy membership functions were obtained by a probability-possibility transformation and further statistical analysis was performed. An average classification accuracy of 63.5% was obtained for the six classes. For four of the six classes, the classification accuracy exceeded 65%, and the best classification accuracy was 75.5% for one class. The use of fuzzy membership functions to assist in pattern classification is an alternative to deterministic approaches for exploring strategies for medical diagnosis.
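One common form of the probability-possibility transformation mentioned above rescales each class-conditional density by its modal value, so the resulting membership function peaks at 1. A minimal sketch, with invented single-Gaussian class models standing in for the paper's fitted GMMs:

```python
import numpy as np

# Hypothetical 1-D texture feature: one Gaussian per tissue class
# (means/sigmas are illustrative, not the paper's fitted values)
classes = {"normal": (0.20, 0.05), "emphysema": (0.45, 0.08), "fibrosis": (0.70, 0.06)}

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def memberships(x):
    """Fuzzy membership of feature value x in each class.

    Simple probability-possibility transformation: scale each class
    density by its modal value so the membership peaks at 1.0."""
    return {c: float(gauss_pdf(x, mu, s) / gauss_pdf(mu, mu, s))
            for c, (mu, s) in classes.items()}

m = memberships(0.42)
best = max(m, key=m.get)   # class with the highest membership
print(best, {c: round(v, 3) for c, v in m.items()})
```

The soft membership values, rather than a hard class label alone, are what make this style of output useful for downstream statistical analysis and diagnostic support.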
NASA Technical Reports Server (NTRS)
Ohtakay, H.; Hardman, J. M.
1975-01-01
The X-band radio frequency communication system was used for the first time in deep space planetary exploration by the Mariner 10 Venus and Mercury flyby mission. This paper presents the technique utilized for and the results of inflight calibration of high-gain antenna (HGA) pointing. Also discussed is pointing accuracy to maintain a high data transmission rate throughout the mission, including the performance of HGA pointing during the critical period of Mercury encounter.
Current position of high-resolution MS for drug quantification in clinical & forensic toxicology.
Meyer, Markus R; Helfer, Andreas G; Maurer, Hans H
2014-08-01
This paper reviews high-resolution MS approaches published from January 2011 until March 2014 for the quantification of drugs (of abuse) and/or their metabolites in biosamples using LC-MS with time-of-flight or Orbitrap™ mass analyzers. Corresponding approaches are discussed including sample preparation and mass spectral settings. The advantages and limitations of high-resolution MS for drug quantification, as well as the demand for a certain resolution or a specific mass accuracy are also explored.
Response Latency as a Predictor of the Accuracy of Children's Reports
ERIC Educational Resources Information Center
Ackerman, Rakefet; Koriat, Asher
2011-01-01
Researchers have explored various diagnostic cues to the accuracy of information provided by child eyewitnesses. Previous studies indicated that children's confidence in their reports predicts the relative accuracy of these reports, and that the confidence-accuracy relationship generally improves as children grow older. In this study, we examined…
ERIC Educational Resources Information Center
Kafkas, Alexandros; Montaldi, Daniela
2012-01-01
Two experiments explored eye measures (fixations and pupil response patterns) and brain responses (BOLD) accompanying the recognition of visual object stimuli based on familiarity and recollection. In both experiments, the use of a modified remember/know procedure led to high confidence and matched accuracy levels characterising strong familiarity…
Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa
2018-01-01
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2*-weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2-weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2*-weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2*-weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth-dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post-processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity.
However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2-weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus
2017-11-01
At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) was designed for international missions to Mars. For more than three years an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of applications. It combines 3D capabilities and high resolution with multispectral data acquisition. Variable resolutions can be generated depending on the camera control settings. A high-end GPS/INS system, in combination with the multi-angle image information, yields precise, high-frequency orientation data for the acquired image lines. To handle these data, a completely automated photogrammetric processing system has been developed, which allows the generation of multispectral 3D image products for large areas with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.
Conceptual study of Earth observation missions with a space-borne laser scanner
NASA Astrophysics Data System (ADS)
Kobayashi, Takashi; Sato, Yohei; Yamakawa, Shiro
2017-11-01
The Japan Aerospace Exploration Agency (JAXA) has started a conceptual study of earth observation missions with a space-borne laser scanner (GLS, Global Laser Scanner). Laser scanners transmit intense pulsed laser light to the ground from an airplane or a satellite, receive the scattered light, and measure the distance to the surface from the round-trip delay time of the pulse. With scanning mechanisms, the GLS can obtain high-accuracy three-dimensional (3D) information from all over the world. High-accuracy 3D information is useful in various areas; currently, the following applications are considered: 1. observation of tree heights to estimate biomass; 2. production of a high-resolution global elevation map; 3. observation of ice sheets. This paper reports the present state of the conceptual study of the GLS and its prospective performance for the earth observation missions mentioned above.
Lifting degeneracy in holographic characterization of colloidal particles using multi-color imaging.
Ruffner, David B; Cheong, Fook Chiong; Blusewicz, Jaroslaw M; Philips, Laura A
2018-05-14
Micrometer sized particles can be accurately characterized using holographic video microscopy and Lorenz-Mie fitting. In this work, we explore some of the limitations in holographic microscopy and introduce methods for increasing the accuracy of this technique with the use of multiple wavelengths of laser illumination. Large high index particle holograms have near degenerate solutions that can confuse standard fitting algorithms. Using a model based on diffraction from a phase disk, we explain the source of these degeneracies. We introduce multiple color holography as an effective approach to distinguish between degenerate solutions and provide improved accuracy for the holographic analysis of sub-visible colloidal particles.
Space Launch Systems Block 1B Preliminary Navigation System Design
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Park, Thomas; Anzalone, Evan; Smith, Austin; Strickland, Dennis; Patrick, Sean
2018-01-01
NASA is currently building the Space Launch System (SLS) Block 1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. In parallel, NASA is also designing the Block 1B launch vehicle. The Block 1B vehicle is an evolution of the Block 1 vehicle and extends the capability of the NASA launch vehicle. This evolution replaces the Interim Cryogenic Propulsive Stage (ICPS) with the Exploration Upper Stage (EUS). As the vehicle evolves to provide greater lift capability, increased robustness for manned missions, and the capability to execute more demanding missions, so must the SLS Integrated Navigation System evolve to support those missions. This paper describes the preliminary navigation system design for the SLS Block 1B vehicle. The evolution of the navigation hardware and algorithms from an inertial-only navigation system for Block 1 ascent flight to a tightly coupled GPS-aided inertial navigation system for Block 1B is described. The Block 1 GN&C system has been designed to meet a LEO insertion target with a specified accuracy. The Block 1B vehicle navigation system is designed to support the Block 1 LEO target accuracy as well as trans-lunar or trans-planetary injection accuracy. Additionally, the Block 1B vehicle is designed to support human exploration and thus to minimize the probability of Loss of Crew (LOC) through high-quality inertial instruments and robust algorithm design, including Fault Detection, Isolation, and Recovery (FDIR) logic.
Ke, Tracy; Fan, Jianqing; Wu, Yichao
2014-01-01
This paper explores the homogeneity of coefficients in high-dimensional regression, which extends the sparsity concept and is more general and suitable for many applications. Homogeneity arises when regression coefficients corresponding to neighboring geographical regions or a similar cluster of covariates are expected to be approximately the same. Sparsity corresponds to a special case of homogeneity with a large cluster of known atom zero. In this article, we propose a new method called clustering algorithm in regression via data-driven segmentation (CARDS) to explore homogeneity. New mathematical results are provided on the gain that can be achieved by exploring homogeneity. Statistical properties of two versions of CARDS are analyzed. In particular, the asymptotic normality of our proposed CARDS estimator is established, which reveals better estimation accuracy for homogeneous parameters than that obtained without homogeneity exploration. When our methods are combined with sparsity exploration, further efficiency can be achieved beyond the exploration of sparsity alone. This provides additional insights into the power of exploring low-dimensional structures in high-dimensional regression: homogeneity and sparsity. Our results also shed light on the properties of the fused Lasso. The newly developed method is further illustrated by simulation studies and applications to real data. Supplementary materials for this article are available online. PMID:26085701
Double Resummation for Higgs Production
NASA Astrophysics Data System (ADS)
Bonvini, Marco; Marzani, Simone
2018-05-01
We present the first double-resummed prediction of the inclusive cross section for the main Higgs production channel in proton-proton collisions, namely, gluon fusion. Our calculation incorporates to all orders in perturbation theory two distinct towers of logarithmic corrections which are enhanced, respectively, at threshold, i.e., large x, and in the high-energy limit, i.e., small x. Large-x logarithms are resummed to next-to-next-to-next-to-leading logarithmic accuracy, while small-x ones to leading logarithmic accuracy. The double-resummed cross section is furthermore matched to the state-of-the-art fixed-order prediction at next-to-next-to-next-to-leading accuracy. We find that double resummation corrects the Higgs production rate by 2% at the currently explored center-of-mass energy of 13 TeV and its impact reaches 10% at future circular colliders at 100 TeV.
Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study.
Eggenberger, Noëmi; Preisig, Basil C; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M
2016-01-01
Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase the accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.
A Comparison of Methods to Screen Middle School Students for Reading and Math Difficulties
ERIC Educational Resources Information Center
Nelson, Peter M.; Van Norman, Ethan R.; Lackner, Stacey K.
2016-01-01
The current study explored multiple ways in which middle schools can use and integrate data sources to predict proficiency on future high-stakes state achievement tests. The diagnostic accuracy of (a) prior achievement data, (b) teacher rating scale scores, (c) a composite score combining state test scores and rating scale responses, and (d) two…
Wang, Z Q; Zhang, F G; Guo, J; Zhang, H K; Qin, J J; Zhao, Y; Ding, Z D; Zhang, Z X; Zhang, J B; Yuan, J H; Li, H L; Qu, J R
2017-03-21
Objective: To explore the value of 3.0 T MRI using multiple sequences (StarVIBE + BLADE) in evaluating preoperative T staging for potentially resectable esophageal cancer (EC). Methods: Between April 2015 and March 2016, a total of 66 consecutive patients with endoscopically proven resectable EC underwent 3.0 T MRI in the Affiliated Cancer Hospital of Zhengzhou University. Two independent readers assigned a T stage on MRI according to the 7th edition of the UICC-AJCC TNM Classification, and the preoperative T staging results were compared with postoperative pathologic confirmation. Results: The MRI T staging of the two readers was highly consistent with histopathological findings, and the sensitivity, specificity, and accuracy of preoperative T staging on MR imaging were high. Conclusion: 3.0 T MRI using multiple sequences provides high accuracy in T staging for patients with potentially resectable EC. The staging accuracy for T1, T2, and T3 is better than that for T4a. 3.0 T MRI using multiple sequences could be used as a noninvasive imaging method for preoperative T staging of EC.
McGinley, Jennifer L; Goldie, Patricia A; Greenwood, Kenneth M; Olney, Sandra J
2003-02-01
Physical therapists routinely observe gait in clinical practice. The purpose of this study was to determine the accuracy and reliability of observational assessments of push-off in gait after stroke. Eighteen physical therapists and 11 subjects with hemiplegia following a stroke participated in the study. Measurements of ankle power generation were obtained from subjects following stroke using a gait analysis system. Concurrent videotaped gait performances were observed by the physical therapists on 2 occasions. Ankle power generation at push-off was scored as either normal or abnormal using two 11-point rating scales. These observational ratings were correlated with the measurements of peak ankle power generation. A high correlation was obtained between the observational ratings and the measurements of ankle power generation (mean Pearson r=.84). Interobserver reliability was moderately high (mean intraclass correlation coefficient [ICC (2,1)]=.76). Intraobserver reliability also was high, with a mean ICC (2,1) of .89 obtained. Physical therapists were able to make accurate and reliable judgments of push-off in videotaped gait of subjects following stroke using observational assessment. Further research is indicated to explore the accuracy and reliability of data obtained with observational gait analysis as it occurs in clinical practice.
Ma, Zhenling; Wu, Xiaoliang; Yan, Li; Xu, Zhenliang
2017-01-26
With the development of space technology and the performance of remote sensors, high-resolution satellites are continuously launched by countries around the world. Due to its high efficiency, large coverage, and freedom from spatial restrictions, satellite imagery has become one of the important means of acquiring geospatial information. This paper explores geometric processing of satellite imagery without ground control points (GCPs). The outcome of spatial triangulation is introduced for geo-positioning as a repeated observation. Results from combining block adjustment with non-oriented new images indicate the feasibility of geometric positioning with repeated observation. GCPs are a must when high accuracy is demanded in conventional block adjustment; the accuracy of direct georeferencing with repeated observation without GCPs is superior to conventional forward intersection and even approaches that of conventional block adjustment with GCPs. The conclusion is drawn that using existing oriented imagery as repeated observations makes better use of previous spatial triangulation results, and that repeated observation improves accuracy by increasing the base-height ratio and the number of redundant observations. Georeferencing tests using data from multiple sensors and platforms with repeated observation will be carried out in follow-up research.
Controlling an avatar by thought using real-time fMRI
NASA Astrophysics Data System (ADS)
Cohen, Ori; Koppel, Moshe; Malach, Rafael; Friedman, Doron
2014-06-01
Objective. We have developed a brain-computer interface (BCI) system based on real-time functional magnetic resonance imaging (fMRI) with virtual reality feedback. The advantage of fMRI is the relatively high spatial resolution and the coverage of the whole brain; thus we expect that it may be used to explore novel BCI strategies, based on new types of mental activities. However, fMRI suffers from a low temporal resolution and an inherent delay, since it is based on a hemodynamic response rather than electrical signals. Thus, our objective in this paper was to explore whether subjects could perform a BCI task in a virtual environment using our system, and how their performance was affected by the delay. Approach. The subjects controlled an avatar by left-hand, right-hand and leg motion or imagery. The BCI classification is based on locating the regions of interest (ROIs) related with each of the motor classes, and selecting the ROI with maximum average values online. The subjects performed a cue-based task and a free-choice task, and the analysis includes evaluation of the performance as well as subjective reports. Main results. Six subjects performed the task with high accuracy when allowed to move their fingers and toes, and three subjects achieved high accuracy using imagery alone. In the cue-based task the accuracy was highest 8-12 s after the trigger, whereas in the free-choice task the subjects performed best when the feedback was provided 6 s after the trigger. Significance. We show that subjects are able to perform a navigation task in a virtual environment using an fMRI-based BCI, despite the hemodynamic delay. The same approach can be extended to other mental tasks and other brain areas.
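The online classification rule described above (select the ROI with the maximum average signal) is simple enough to sketch directly; the ROI names, noise level, and simulated activation below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: % signal change in three motor ROIs (left hand,
# right hand, legs), one array of voxel values per ROI per fMRI volume.
ROI_NAMES = ["left_hand", "right_hand", "legs"]

def classify_volume(roi_voxel_signals):
    """Pick the intended command: the ROI with the highest mean activation."""
    means = [float(np.mean(v)) for v in roi_voxel_signals]
    return ROI_NAMES[int(np.argmax(means))], means

# Simulated trial: the subject imagines right-hand movement, so that ROI
# carries an extra BOLD response on top of baseline noise.
rois = [rng.normal(0.0, 0.1, size=200) for _ in ROI_NAMES]
rois[1] += 0.5                      # hemodynamic response in the right-hand ROI

label, means = classify_volume(rois)
print(label, [round(m, 2) for m in means])
```

In the real system this decision would be made on each incoming volume, several seconds after the cue, to accommodate the hemodynamic delay discussed in the abstract.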
Strategies for high-precision Global Positioning System orbit determination
NASA Technical Reports Server (NTRS)
Lichten, Stephen M.; Border, James S.
1987-01-01
Various strategies for the high-precision orbit determination of the GPS satellites are explored using data from the 1985 GPS field test. Several refinements to the orbit determination strategies were found to be crucial for achieving high levels of repeatability and accuracy. These include the fine tuning of the GPS solar radiation coefficients and the ground station zenith tropospheric delays. Multiday arcs of 3-6 days provided better orbits and baselines than the 8-hr arcs from single-day passes. Highest-quality orbits and baselines were obtained with combined carrier phase and pseudorange solutions.
Bayesian network modelling of upper gastrointestinal bleeding
NASA Astrophysics Data System (ADS)
Aisha, Nazziwa; Shohaimi, Shamarina; Adam, Mohd Bakri
2013-09-01
Bayesian networks are graphical probabilistic models that represent causal and other relationships between domain variables. In the context of medical decision making, these models have been explored to help in medical diagnosis and prognosis. In this paper, we discuss the Bayesian network formalism in building medical support systems, and we learn a tree augmented naive Bayes network (TAN) from gastrointestinal bleeding data. The accuracy of the TAN in classifying the source of gastrointestinal bleeding as upper or lower is then assessed. The TAN achieves a high classification accuracy of 86% and an area under the ROC curve of 92%. A sensitivity analysis of the model shows relatively high levels of entropy reduction for color of the stool, history of gastrointestinal bleeding, consistency, and the ratio of blood urea nitrogen to creatinine. The TAN facilitates the identification of the source of GIB and requires further validation.
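The two figures of merit reported for the TAN, classification accuracy and area under the ROC curve, can be computed directly from labels and model scores. A minimal sketch with invented labels and scores (not the study's data), using the rank-sum formulation of AUC:

```python
# Evaluate a binary classifier by accuracy and AUC (rank-sum formulation).
# Labels/scores below are hypothetical, for illustration only.

def accuracy(labels, predictions):
    """Fraction of predictions matching the true labels."""
    return sum(l == p for l, p in zip(labels, predictions)) / len(labels)

def auc(labels, scores):
    """Probability that a random positive outranks a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical upper (1) vs. lower (0) bleeding-source labels and scores
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
preds  = [1 if s >= 0.5 else 0 for s in scores]
print(accuracy(labels, preds))  # 4/6 correct
print(auc(labels, scores))      # 8 of 9 positive/negative pairs ranked correctly
```

Note that AUC rewards correct ranking even where the 0.5 threshold misclassifies, which is why the two numbers can differ, as they do in the study (86% vs. 92%).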
Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger
2013-01-01
A common approach for high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible with any of them. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and the fixed-point number representation are evaluated in detail on a variety of processing platforms, enabling on-board processing on wearable sensor platforms.
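The bit-width exploration described above can be prototyped before committing to hardware by running the same filter in floating point and in Qm.n fixed point and comparing outputs. A minimal sketch with a scalar (not the authors' two-step minimum-order) Kalman filter; the 12-bit fractional format and all parameters are assumptions for illustration:

```python
# Scalar Kalman filter in float vs. Q-format fixed point, to prototype
# bit-width exploration. FRAC_BITS and the noise parameters are assumed.

FRAC_BITS = 12
SCALE = 1 << FRAC_BITS

def to_fix(x):      return int(round(x * SCALE))
def fix_mul(a, b):  return (a * b) >> FRAC_BITS      # Q-format multiply
def fix_div(a, b):  return (a << FRAC_BITS) // b     # Q-format divide

def kalman_float(zs, q=1e-3, r=0.1):
    x, p = 0.0, 1.0
    for z in zs:
        p += q                    # predict: inflate variance
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update state with innovation
        p *= (1 - k)              # update variance
    return x

def kalman_fixed(zs, q=1e-3, r=0.1):
    x, p = 0, to_fix(1.0)
    qf, rf, one = to_fix(q), to_fix(r), to_fix(1.0)
    for z in zs:
        zf = to_fix(z)
        p += qf
        k = fix_div(p, p + rf)
        x += fix_mul(k, zf - x)
        p = fix_mul(p, one - k)
    return x / SCALE

zs = [1.02, 0.98, 1.05, 0.97, 1.01, 0.99]   # noisy measurements of ~1.0
print(kalman_float(zs), kalman_fixed(zs))
```

Sweeping FRAC_BITS and measuring the divergence from the floating-point reference is one simple way to find the smallest word length that keeps orientation error acceptable.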
Exploration of Force Myography and surface Electromyography in hand gesture classification.
Jiang, Xianta; Merhi, Lukas-Karim; Xiao, Zhen Gang; Menon, Carlo
2017-03-01
Whereas pressure sensors have increasingly received attention as a non-invasive interface for hand gesture recognition, their performance has not been comprehensively evaluated. This work examined the performance of hand gesture classification using Force Myography (FMG) and surface Electromyography (sEMG) technologies by performing 3 sets of 48 hand gestures using a prototyped FMG band and an array of commercial sEMG sensors worn on the wrist and forearm simultaneously. The results show that the FMG band achieved classification accuracies as good as the high-quality, commercially available sEMG system at both wrist and forearm positions; specifically, using only 8 Force Sensitive Resistors (FSRs), the FMG band achieved accuracies of 91.2% and 83.5% in classifying the 48 hand gestures in cross-validation and cross-trial evaluations, which were higher than those of sEMG (84.6% and 79.1%). Using all 16 FSRs on the band, our device achieved high accuracies of 96.7% and 89.4% in cross-validation and cross-trial evaluations. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
David M. Bell; Matthew J. Gregory; Heather M. Roberts; Raymond J. Davis; Janet L. Ohmann
2015-01-01
Accuracy assessments of remote sensing products are necessary for identifying map strengths and weaknesses in scientific and management applications. However, not all accuracy assessments are created equal. Motivated by a recent study published in Forest Ecology and Management (Volume 342, pages 8-20), we explored the potential limitations of accuracy assessments...
Effectiveness of link prediction for face-to-face behavioral networks.
Tsugawa, Sho; Ohsaki, Hiroyuki
2013-01-01
Research on link prediction for social networks has been actively pursued. In link prediction for a given social network obtained from time-windowed observation, new link formation in the network is predicted from the topology of the obtained network. In contrast, recent advances in sensing technology have made it possible to obtain face-to-face behavioral networks, which are social networks representing face-to-face interactions among people. However, the effectiveness of link prediction techniques for face-to-face behavioral networks has not yet been explored in depth. To clarify this point, here we investigate the accuracy of conventional link prediction techniques for networks obtained from the history of face-to-face interactions among participants at an academic conference. Our findings were (1) that conventional link prediction techniques predict new link formation with a precision of 0.30-0.45 and a recall of 0.10-0.20, (2) that prolonged observation of social networks often degrades the prediction accuracy, (3) that the proposed decaying weight method leads to higher prediction accuracy than can be achieved by observing all records of communication and simply using them unmodified, and (4) that the prediction accuracy for face-to-face behavioral networks is relatively high compared to that for non-social networks, but not as high as for other types of social networks.
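Finding (3), that exponentially down-weighting older interactions beats using the raw contact history, is easy to sketch. Below is a hedged illustration of weighted common-neighbor link prediction on an invented contact log; the decay constant and timestamps are assumptions, not values from the paper:

```python
# Weighted common-neighbors link prediction with time-decayed edge weights
# (the "decaying weight" idea). Contact log and LAMBDA are made up.
import math
from itertools import combinations

# (u, v, t): person u met person v at time t
contacts = [("a","b",0), ("a","b",5), ("b","c",1), ("a","c",4), ("c","d",5)]
NOW, LAMBDA = 6, 0.3          # observation end and assumed decay rate

def edge_weights(contacts):
    """Sum exp-decayed weights of all interactions per unordered pair."""
    w = {}
    for u, v, t in contacts:
        key = tuple(sorted((u, v)))
        w[key] = w.get(key, 0.0) + math.exp(-LAMBDA * (NOW - t))
    return w

def common_neighbor_score(w, x, y):
    nbrs = lambda n: {b if a == n else a for a, b in w if n in (a, b)}
    return sum(w[tuple(sorted((x, z)))] + w[tuple(sorted((y, z)))]
               for z in nbrs(x) & nbrs(y))

w = edge_weights(contacts)
nodes = {n for e in w for n in e}
unlinked = [p for p in combinations(sorted(nodes), 2) if p not in w]
ranking = sorted(unlinked, key=lambda p: -common_neighbor_score(w, *p))
print(ranking)   # candidate links, most likely first
```

Precision and recall, as reported in the study, would then be computed by comparing the top of this ranking against the links that actually form in a later observation window.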
Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices
NASA Astrophysics Data System (ADS)
Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun
2014-05-01
With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on an accelerometer. In order to achieve competitive accuracy, users are required to hold the device in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, an approximately 45-fold speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy over 10 gestures reaches 93.98% in the user-independent case and 96.14% in the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percentage points below the best conventional result, this still provides competitive accuracy acceptable for practical use. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.
Mind the gap: Increased inter-letter spacing as a means of improving reading performance.
Dotan, Shahar; Katzir, Tami
2018-06-05
The effects of text display, specifically within-word spacing, on children's reading at different developmental levels have barely been investigated. This study explored the influence of manipulating inter-letter spacing on the reading performance (accuracy and rate) of beginner Hebrew readers compared with older readers, and of low-achieving readers compared with age-matched high-achieving readers. A computer-based isolated word reading task was performed by 132 first and third graders. Words were displayed under two spacing conditions: standard spacing (100%) and increased spacing (150%). Words were balanced for length and frequency across conditions. Results indicated that increased spacing contributed to reading accuracy without affecting reading rate. Interestingly, all first graders benefitted from the spaced condition. This effect was found only in long words but not in short words. Among third graders, only low-achieving readers gained in accuracy from the spaced condition. The theoretical and clinical implications of the findings are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.
Demand behavior and empathic accuracy in observed conflict interactions in couples.
Hinnekens, Céline; Ickes, William; Schryver, Maarten De; Verhofstadt, Lesley L
2016-01-01
The study reported in this research note sought to extend the research on motivated empathic accuracy by exploring whether intimate partners who are highly motivated to induce change in their partner during conflicts will be more empathically accurate than partners who are less motivated. In a laboratory experiment, the partners within 26 cohabiting couples were randomly assigned the role of conflict initiator. The partners provided questionnaire data, participated in a videotaped conflict interaction, and completed a video-review task. More blaming behavior was associated with higher levels of empathic accuracy, irrespective of whether one was the conflict initiator or not. The results also showed a two-way interaction indicating that initiators who applied more pressure on their partners to change were less empathically accurate than initiators who applied less pressure, whereas their partners could counter this pressure when they could accurately "read" the initiator's thoughts and feelings.
High School Physics Students' Personal Epistemologies and School Science Practice
NASA Astrophysics Data System (ADS)
Alpaslan, Muhammet Mustafa; Yalvac, Bugrahan; Loving, Cathleen
2017-11-01
This case study explores students' physics-related personal epistemologies in school science practices. The school science practices of nine eleventh-grade students in a physics class were audio-taped over 6 weeks. The students were also interviewed after each activity to find out their ideas on the nature of scientific knowledge. Analysis of transcripts yielded several epistemological resources that students activated in their school science practice. The findings show that there is inconsistency between students' definitions of scientific theories and their epistemological judgments. Analysis revealed that students used several epistemological resources to decide on the accuracy of their data, including accuracy via following the right procedure and accuracy via what the others find. Traditional, formulation-based physics instruction might have led students to activate naive epistemological resources that prevent them from participating in the practice of science in more meaningful ways. Implications for future studies are presented.
Liquid electrolyte informatics using an exhaustive search with linear regression.
Sodeyama, Keitaro; Igarashi, Yasuhiko; Nakayama, Tomofumi; Tateyama, Yoshitaka; Okada, Masato
2018-06-14
Exploring new liquid electrolyte materials is a fundamental target for developing new high-performance lithium-ion batteries. In contrast to solid materials, the properties of disordered liquid solutions have been less studied by data-driven informatics techniques. Here, we examined the estimation accuracy and efficiency of three techniques, multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), and exhaustive search with linear regression (ES-LiR), using coordination energy and melting point as test liquid properties. We then confirmed that ES-LiR gives the most accurate estimation among the techniques. We also found that ES-LiR can reveal the relationship between the "prediction accuracy" and "calculation cost" of the properties via a weight diagram of descriptors. This technique makes it possible to balance accuracy against cost when searching a huge space of candidate materials.
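The core of ES-LiR is enumerating every descriptor subset, fitting ordinary least squares on each, and scoring them by cross-validated error. A schematic, self-contained sketch on an invented three-descriptor toy problem (the data, target, and small size penalty are assumptions; the real method handles far larger descriptor pools):

```python
# Schematic ES-LiR: exhaustive subset enumeration + OLS + leave-one-out CV.
# Toy data: y depends only on descriptors 0 and 2 (plus an intercept).
from itertools import combinations

def solve(A, b):                              # Gauss-Jordan elimination
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b2 for a, b2 in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols(X, y):                                # solve the normal equations
    n = len(X[0])
    A = [[sum(x[i] * x[j] for x in X) for j in range(n)] for i in range(n)]
    b = [sum(x[i] * yy for x, yy in zip(X, y)) for i in range(n)]
    return solve(A, b)

def loo_error(X, y, cols):                    # leave-one-out CV mean squared error
    err = 0.0
    for k in range(len(X)):
        Xt = [[X[r][c] for c in cols] + [1.0] for r in range(len(X)) if r != k]
        yt = [y[r] for r in range(len(X)) if r != k]
        w = ols(Xt, yt)
        xk = [X[k][c] for c in cols] + [1.0]
        err += (sum(a * b for a, b in zip(w, xk)) - y[k]) ** 2
    return err / len(X)

X = [[1,2,0],[2,1,3],[3,0,1],[4,2,5],[5,1,2],[6,0,4],[7,2,0],[8,1,3]]
y = [2*a + 0*b + 3*c + 1 for a, b, c in X]    # noise-free toy target
subsets = [s for r in (1, 2, 3) for s in combinations(range(3), r)]
# tiny penalty per descriptor breaks ties in favor of the smaller subset
best = min(subsets, key=lambda s: loo_error(X, y, s) + 1e-6 * len(s))
print(best)   # → (0, 2)
```

Plotting each subset's error against its size (or the cost of computing its descriptors) yields exactly the accuracy-versus-cost trade-off the abstract describes.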
Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study
Eggenberger, Noëmi; Preisig, Basil C.; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M.
2016-01-01
Background Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Method Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. Results In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase the accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Conclusion Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients’ comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes. PMID:26735917
TDRSS Onboard Navigation System (TONS) experiment for the Explorer Platform (EP)
NASA Astrophysics Data System (ADS)
Gramling, C. J.; Hornstein, R. S.; Long, A. C.; Samii, M. V.; Elrod, B. D.
A TDRSS Onboard Navigation System (TONS) is currently being developed by NASA to provide a high-accuracy autonomous spacecraft navigation capability for users of TDRSS and its successor, the Advanced TDRSS. A TONS experiment will be performed in conjunction with the Explorer Platform (EP)/EUV Explorer mission to flight-qualify TONS Block I. This paper presents an overview of TDRSS on-board navigation goals and plans and the technical objectives of the TONS experiment. The operations concept of the experiment is described, including the characteristics of the ultrastable oscillator, the Doppler extractor, the signal-acquisition process, the TONS ground-support system, and the navigation flight software. A description of the on-board navigation algorithms and the rationale for their selection is also presented.
Fast retinal layer segmentation of spectral domain optical coherence tomography images
NASA Astrophysics Data System (ADS)
Zhang, Tianqiao; Song, Zhangjun; Wang, Xiaogang; Zheng, Huimin; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Hu, Qingmao
2015-09-01
An approach to segment macular layer thicknesses from spectral domain optical coherence tomography has been proposed. The main contribution is to decrease computational costs while maintaining high accuracy via exploring Kalman filtering, customized active contour, and curve smoothing. Validation on 21 normal volumes shows that 8 layer boundaries could be segmented within 5.8 s with an average layer boundary error <2.35 μm. It has been compared with state-of-the-art methods for both normal and age-related macular degeneration cases to yield similar or significantly better accuracy and is 37 times faster. The proposed method could be a potential tool to clinically quantify the retinal layer boundaries.
The High Energy Transient Explorer (HETE): Mission and Science Overview
NASA Astrophysics Data System (ADS)
Ricker, G. R.; Atteia, J.-L.; Crew, G. B.; Doty, J. P.; Fenimore, E. E.; Galassi, M.; Graziani, C.; Hurley, K.; Jernigan, J. G.; Kawai, N.; Lamb, D. Q.; Matsuoka, M.; Pizzichini, G.; Shirasaki, Y.; Tamagawa, T.; Vanderspek, R.; Vedrenne, G.; Villasenor, J.; Woosley, S. E.; Yoshida, A.
2003-04-01
The High Energy Transient Explorer (HETE) mission is devoted to the study of gamma-ray bursts (GRBs) using soft X-ray, medium X-ray, and gamma-ray instruments mounted on a compact spacecraft. The HETE satellite was launched into equatorial orbit on 9 October 2000. A science team from France, Japan, Brazil, India, Italy, and the US is responsible for the HETE mission, which was completed for ~1/3 the cost of a NASA Small Explorer (SMEX). The HETE mission is unique in that it is entirely ``self-contained,'' insofar as it relies upon dedicated tracking, data acquisition, mission operations, and data analysis facilities run by members of its international Science Team. A powerful feature of HETE is its potential for localizing GRBs within seconds of the trigger with good precision (~10') using medium energy X-rays and, for a subset of bright GRBs, improving the localization to ~30'' accuracy using low energy X-rays. Real-time GRB localizations are transmitted to ground observers within seconds via a dedicated network of 14 automated ``Burst Alert Stations,'' thereby allowing prompt optical, IR, and radio follow-up, leading to the identification of counterparts for a large fraction of HETE-localized GRBs. HETE is the only satellite that can provide near-real time localizations of GRBs, and that can localize GRBs that do not have X-ray, optical, and radio afterglows, during the next two years. These capabilities are the key to allowing HETE to probe further the unique physics that produces the brightest known photon sources in the universe. To date (December 2002), HETE has produced 31 GRB localizations. Localization accuracies are routinely in the 4'-20' range; for the five GRBs with SXC localization, accuracies are ~1-2'. In addition, HETE has detected ~25 bursts from soft gamma repeaters (SGRs), and >600 X-ray bursts (XRBs).
Task-Based Variability in Children's Singing Accuracy
ERIC Educational Resources Information Center
Nichols, Bryan E.
2013-01-01
The purpose of this study was to explore task-based variability in children's singing accuracy performance. The research questions were: Does children's singing accuracy vary based on the nature of the singing assessment employed? Is there a hierarchy of difficulty and discrimination ability among singing assessment tasks? What is the…
Concept Mapping Improves Metacomprehension Accuracy among 7th Graders
ERIC Educational Resources Information Center
Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.
2012-01-01
Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…
Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polly, B.
2011-09-01
This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.
Stereotype Accuracy: Toward Appreciating Group Differences.
ERIC Educational Resources Information Center
Lee, Yueh-Ting, Ed.; And Others
The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
Zimmermann, N.E.; Edwards, T.C.; Moisen, Gretchen G.; Frescino, T.S.; Blackard, J.A.
2007-01-01
1. Compared to bioclimatic variables, remote sensing predictors are rarely used for predictive species modelling. When used, the predictors represent typically habitat classifications or filters rather than gradual spectral, surface or biophysical properties. Consequently, the full potential of remotely sensed predictors for modelling the spatial distribution of species remains unexplored. Here we analysed the partial contributions of remotely sensed and climatic predictor sets to explain and predict the distribution of 19 tree species in Utah. We also tested how these partial contributions were related to characteristics such as successional types or species traits. 2. We developed two spatial predictor sets of remotely sensed and topo-climatic variables to explain the distribution of tree species. We used variation partitioning techniques applied to generalized linear models to explore the combined and partial predictive powers of the two predictor sets. Non-parametric tests were used to explore the relationships between the partial model contributions of both predictor sets and species characteristics. 3. More than 60% of the variation explained by the models represented contributions by one of the two partial predictor sets alone, with topo-climatic variables outperforming the remotely sensed predictors. However, the partial models derived from only remotely sensed predictors still provided high model accuracies, indicating a significant correlation between climate and remote sensing variables. The overall accuracy of the models was high, but small sample sizes had a strong effect on cross-validated accuracies for rare species. 4. Models of early successional and broadleaf species benefited significantly more from adding remotely sensed predictors than did late seral and needleleaf species. The core-satellite species types differed significantly with respect to overall model accuracies. 
Models of satellite and urban species, both with low prevalence, benefited more from use of remotely sensed predictors than did the more frequent core species. 5. Synthesis and applications. If carefully prepared, remotely sensed variables are useful additional predictors for the spatial distribution of trees. Major improvements resulted for deciduous, early successional, satellite and rare species. The ability to improve model accuracy for species having markedly different life history strategies is a crucial step for assessing effects of global change. © 2007 The Authors.
Sex discrimination potential of buccolingual and mesiodistal tooth dimensions.
Acharya, Ashith B; Mainali, Sneedha
2008-07-01
Tooth crown dimensions are reasonably accurate predictors of sex and are useful adjuncts in sex assessment. This study explores the utility of buccolingual (BL) and mesiodistal (MD) measurements in sex differentiation when used independently. BL and MD measurements of 28 teeth (third molars excluded) were obtained from a group of 53 Nepalese subjects (22 women and 31 men) aged 19-28 years. Stepwise discriminant analyses were undertaken separately for both types of tooth crown variables and their accuracy in sex classification compared with one another. MD dimensions had recognizably greater accuracy (77.4-83%) in sex identification than BL measurements (62.3-64.2%)--results that are consistent with previous reports. However, the accuracy of MD variables is not high enough to warrant their exclusive use in odontometric sex assessment--higher accuracy levels have been obtained when both types of dimensions were used concurrently, implying that BL variables contribute to sex assessment to some extent. Hence, it is inferred that optimal results in dental sex assessment are obtained when both MD and BL variables are used together.
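For a single tooth dimension, the discriminant analysis described above reduces to thresholding at the midpoint of the two group means (under an equal-variance assumption). A toy illustration with fabricated measurements, not the Nepalese data:

```python
# Univariate sex discriminant on one mesiodistal (MD) width: classify at the
# midpoint between group means. All measurements below are invented.

male   = [10.9, 11.2, 11.0, 11.4, 10.8, 11.1]   # hypothetical MD widths (mm)
female = [10.2, 10.5, 10.1, 10.6, 10.4, 10.3]

def threshold(a, b):
    """Midpoint of the two group means, the equal-variance decision boundary."""
    return (sum(a) / len(a) + sum(b) / len(b)) / 2

def classify(x, thr):
    return "M" if x >= thr else "F"

thr = threshold(male, female)
correct = sum(classify(x, thr) == "M" for x in male) \
        + sum(classify(x, thr) == "F" for x in female)
print(f"threshold = {thr:.2f} mm, accuracy = {correct}/{len(male) + len(female)}")
```

The stepwise multivariate analysis in the study generalizes this idea: it weights several BL and MD dimensions at once, which is why combining both variable types outperforms either alone.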
Evaluating Rater Accuracy in Rater-Mediated Assessments Using an Unfolding Model
ERIC Educational Resources Information Center
Wang, Jue; Engelhard, George, Jr.; Wolfe, Edward W.
2016-01-01
The number of performance assessments continues to increase around the world, and it is important to explore new methods for evaluating the quality of ratings obtained from raters. This study describes an unfolding model for examining rater accuracy. Accuracy is defined as the difference between observed and expert ratings. Dichotomous accuracy…
The Accuracy of Gender Stereotypes Regarding Occupations.
ERIC Educational Resources Information Center
Beyer, Sylvia; Finnegan, Andrea
Given the salience of biological sex, it is not surprising that gender stereotypes are pervasive. To explore the prevalence of such stereotypes, the accuracy of gender stereotyping regarding occupations is presented in this paper. The paper opens with an overview of gender stereotype measures that use self-perceptions as benchmarks of accuracy,…
Development of high-accuracy convection schemes for sequential solvers
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei
1993-01-01
An exploration is conducted of the applicability of such high resolution schemes as TVD to the resolving of sharp flow gradients using a sequential solution approach borrowed from pressure-based algorithms. It is shown that by extending these high-resolution shock-capturing schemes to a sequential solver that treats the equations as a collection of scalar conservation equations, the speed of signal propagation in the solution has to be coordinated by assigning the local convection speed as the characteristic speed for the entire system. A higher amount of dissipation is therefore needed to eliminate oscillations near discontinuities.
Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping
NASA Astrophysics Data System (ADS)
Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta
2012-10-01
A mobile mapping system (MMS) is the answer of the geoinformation community to the exponentially growing demand for varied geospatial data captured by multiple sensors at increasingly higher accuracies. As mobile mapping technology is pushed to explore its use in various applications on water, rail, or road, the need emerges for an external sensor calibration procedure that is portable, fast, and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented that provides a high-quality external calibration of cameras and is automatic, robust, and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual-information-based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests performed under various lighting conditions prove the methodology's robustness, showing high absolute stereo measurement accuracies of a few centimeters.
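The mutual-information criterion behind such registration peaks when the joint intensity histogram of the two images is most predictable. A minimal sketch on an invented 1-D "image" with a circular shift standing in for a pose change (everything here is a simplification of the 2-D problem):

```python
# Mutual information from a joint intensity histogram: MI is maximal at the
# correct alignment. The signal and shifts below are invented for illustration.
import math
from collections import Counter

def mutual_info(a, b):
    """MI (in nats) of the empirical joint distribution of paired intensities."""
    n = len(a)
    pab = Counter(zip(a, b))
    pa, pb = Counter(a), Counter(b)
    return sum(c / n * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

ref = [0, 0, 1, 2, 3, 3, 2, 1, 0, 0, 1, 3]     # toy intensity profile

def shifted(img, s):                            # circular shift = misalignment
    return img[s:] + img[:s]

scores = {s: mutual_info(ref, shifted(ref, s)) for s in range(4)}
best = max(scores, key=scores.get)
print(best, scores[best])   # shift 0 recovers the alignment
```

In the calibration setting, the same score is maximized over candidate camera poses rather than shifts; MI is attractive there because it needs no assumption that corresponding pixels have equal intensity, only a consistent statistical relationship.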
Wade, Ryckie G; Itte, Vinay; Rankine, James J; Ridgway, John P; Bourke, Grainne
2018-03-01
Identification of root avulsions is of critical importance in traumatic brachial plexus injuries because it alters the reconstruction and prognosis. Pre-operative magnetic resonance imaging is gaining popularity, but there are limited and conflicting data on its diagnostic accuracy for root avulsion. This cohort study describes consecutive patients requiring brachial plexus exploration following trauma between 2008 and 2016. The index test was magnetic resonance imaging at 1.5 Tesla and the reference test was operative exploration of the supraclavicular plexus. Complete data from 29 males were available. The diagnostic accuracy of magnetic resonance imaging for root avulsion(s) of C5-T1 was 79%. The diagnostic accuracy of a pseudomeningocoele as a surrogate marker of root avulsion(s) of C5-T1 was 68%. We conclude that pseudomeningocoeles are not a reliable sign of root avulsion and that magnetic resonance imaging has modest diagnostic accuracy for root avulsions in the context of adult traumatic brachial plexus injuries. Level of evidence: III.
Mars Atmospheric Characterization Using Advanced 2-Micron Orbiting Lidar
NASA Technical Reports Server (NTRS)
Singh, U.; Engelund, W.; Refaat, T.; Kavaya, M.; Yu, J.; Petros, M.
2015-01-01
Mars atmospheric characterization is critical for exploring the planet. Future Mars missions require landing massive payloads to the surface with high accuracy. The accuracy of entry, descent and landing (EDL) of a payload is a major technical challenge for future Mars missions. Mars EDL depends on atmospheric conditions such as density, wind and dust as well as surface topography. A Mars orbiting 2-micron lidar system is presented in this paper. This advanced lidar is capable of measuring atmospheric pressure and temperature profiles using the most abundant atmospheric carbon dioxide (CO2) on Mars. In addition Martian winds and surface altimetry can be mapped, independent of background radiation or geographical location. This orbiting lidar is a valuable tool for developing EDL models for future Mars missions.
Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City.
Yang, Wan; Olson, Donald R; Shaman, Jeffrey
2016-11-01
The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, Arno; Li, Z.; Ng, C.
The Compact Linear Collider (CLIC) provides a path to a multi-TeV accelerator to explore the energy frontier of High Energy Physics. Its novel two-beam accelerator concept envisions rf power transfer to the accelerating structures from a separate high-current decelerator beam line consisting of power extraction and transfer structures (PETS). It is critical to numerically verify the fundamental and higher-order mode properties in and between the two beam lines with high accuracy and confidence. To solve these large-scale problems, SLAC's parallel finite element electromagnetic code suite ACE3P is employed. Using curvilinear conformal meshes and higher-order finite element vector basis functions, unprecedented accuracy and computational efficiency are achieved, enabling high-fidelity modeling of complex detuned structures such as the CLIC TD24 accelerating structure. In this paper, time-domain simulations of wakefield coupling effects in the combined system of PETS and the TD24 structures are presented. The results will help to identify potential issues and provide new insights on the design, leading to further improvements on the novel CLIC two-beam accelerator scheme.
Effectiveness of Link Prediction for Face-to-Face Behavioral Networks
Tsugawa, Sho; Ohsaki, Hiroyuki
2013-01-01
Research on link prediction for social networks has been actively pursued. In link prediction for a given social network obtained from time-windowed observation, new link formation in the network is predicted from the topology of the obtained network. In contrast, recent advances in sensing technology have made it possible to obtain face-to-face behavioral networks, which are social networks representing face-to-face interactions among people. However, the effectiveness of link prediction techniques for face-to-face behavioral networks has not yet been explored in depth. To clarify this point, here we investigate the accuracy of conventional link prediction techniques for networks obtained from the history of face-to-face interactions among participants at an academic conference. Our findings were (1) that conventional link prediction techniques predict new link formation with a precision of 0.30–0.45 and a recall of 0.10–0.20, (2) that prolonged observation of social networks often degrades the prediction accuracy, (3) that the proposed decaying weight method leads to higher prediction accuracy than can be achieved by observing all records of communication and simply using them unmodified, and (4) that the prediction accuracy for face-to-face behavioral networks is relatively high compared to that for non-social networks, but not as high as for other types of social networks. PMID:24339956
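The decaying-weight idea from finding (3) can be sketched as follows; the exponential half-life form, the function names, and the weighted common-neighbours score are assumptions for illustration, not the authors' exact formulation:

```python
from collections import defaultdict

def decayed_weights(events, t_now, half_life):
    """Aggregate timestamped contact events (u, v, t) into edge weights
    that decay exponentially with age, so recent contacts count more."""
    w = defaultdict(float)
    for u, v, t in events:
        w[frozenset((u, v))] += 0.5 ** ((t_now - t) / half_life)
    return w

def weighted_common_neighbors(w, x, y):
    """Weighted common-neighbours link-prediction score for pair (x, y)."""
    def neighbors(n):
        return {next(iter(e - {n})) for e in w if n in e and len(e) == 2}
    return sum(w[frozenset((x, z))] + w[frozenset((y, z))]
               for z in neighbors(x) & neighbors(y))
```

The half-life controls the trade-off the abstract identifies: long observation windows add stale contacts, while decay keeps recent interactions dominant.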
DeWitt, Jessica D.; Warner, Timothy A.; Chirico, Peter G.; Bergstresser, Sarah E.
2017-01-01
For areas of the world that do not have access to lidar, fine-scale digital elevation models (DEMs) can be photogrammetrically created using globally available high-spatial resolution stereo satellite imagery. The resultant DEM is best termed a digital surface model (DSM) because it includes heights of surface features. In densely vegetated conditions, this inclusion can limit its usefulness in applications requiring a bare-earth DEM. This study explores the use of techniques designed for filtering lidar point clouds to mitigate the elevation artifacts caused by above-ground features, within the context of a case study of Prince William Forest Park, Virginia, USA. The influences of land cover and leaf-on vs. leaf-off conditions are investigated, and the accuracy of the raw photogrammetric DSM extracted from leaf-on imagery was between that of a lidar bare-earth DEM and the Shuttle Radar Topography Mission DEM. Although the filtered leaf-on photogrammetric DEM retains some artifacts of the vegetation canopy and may not be useful for some applications, filtering procedures significantly improved the accuracy of the modeled terrain. The accuracy of the DSM extracted in leaf-off conditions was comparable in most areas to that of the lidar bare-earth DEM, and filtering procedures resulted in accuracy comparable to that of the lidar DEM.
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
Targeting an efficient target-to-target interval for P300 speller brain–computer interfaces
Sellers, Eric W.; Wang, Xingyu
2013-01-01
Longer target-to-target intervals (TTIs) produce greater P300 event-related potential amplitude, which can increase brain–computer interface (BCI) classification accuracy and decrease the number of flashes needed for accurate character classification. However, longer TTIs require more time for each trial, which decreases the information transfer rate of the BCI. In this paper, a P300 BCI using a 7 × 12 matrix explored new flash patterns (16-, 18- and 21-flash patterns) with different TTIs to assess the effects of TTI on P300 BCI performance. The new flash patterns were designed to minimize TTI, decrease repetition blindness, and examine the temporal relationship between each flash of a given stimulus by placing a minimum of one (16-flash pattern), two (18-flash pattern), or three (21-flash pattern) non-target flashes between target flashes. Online results showed that the 16-flash pattern yielded the lowest classification accuracy among the three patterns. The results also showed that the 18-flash pattern provides a significantly higher information transfer rate (ITR) than the 21-flash pattern; both patterns provide high ITR and high accuracy for all subjects. PMID:22350331
NASA Technical Reports Server (NTRS)
1985-01-01
A photogeologic and remote sensing model of porphyry-type mineral systems is considered, along with a Landsat application to the development of a tectonic model for hydrocarbon exploration of Devonian shales in west-central Virginia, remote sensing and the funnel philosophy, a Landsat-based tectonic and metallogenic synthesis of the southwest United States, and an evolving paradigm for computer vision. Attention is given to the neotectonics of the Tibetan plateau deduced from Landsat MSS image interpretation, remote sensing in northern Arizona, the use of an airborne laser system for vegetation inventories and geobotanical prospecting, an evaluation of Thematic Mapper data for hydrocarbon exploration in low-relief basins, and an evaluation of the information content of high spectral resolution imagery. Other topics explored are related to a major source of new radar data for exploration research, the accuracy of geologic maps produced from Landsat data, and an approach for the geometric rectification of radar imagery.
Uncooled radiometric camera performance
NASA Astrophysics Data System (ADS)
Meyer, Bill; Hoelter, T.
1998-07-01
Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions to applications currently using traditional, photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have yielded highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR Thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high resolution video imaging creates some unique challenges when using uncooled detectors. A temperature controlled, field-of-view limiting aperture (cold shield) is not typically included in the small volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the Germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.
Bricher, Phillippa K.; Lucieer, Arko; Shaw, Justine; Terauds, Aleks; Bergstrom, Dana M.
2013-01-01
Monitoring changes in the distribution and density of plant species often requires accurate and high-resolution baseline maps of those species. Detecting such change at the landscape scale is often problematic, particularly in remote areas. We examine a new technique to improve accuracy and objectivity in mapping vegetation, combining species distribution modelling and satellite image classification on a remote sub-Antarctic island. In this study, we combine spectral data from very high resolution WorldView-2 satellite imagery and terrain variables from a high resolution digital elevation model to improve mapping accuracy, in both pixel- and object-based classifications. Random forest classification was used to explore the effectiveness of these approaches on mapping the distribution of the critically endangered cushion plant Azorella macquariensis Orchard (Apiaceae) on sub-Antarctic Macquarie Island. Both pixel- and object-based classifications of the distribution of Azorella achieved very high overall validation accuracies (91.6–96.3%, κ = 0.849–0.924). Both two-class and three-class classifications were able to accurately and consistently identify the areas where Azorella was absent, indicating that these maps provide a suitable baseline for monitoring expected change in the distribution of the cushion plants. Detecting such change is critical given the threats this species is currently facing under altering environmental conditions. The method presented here has applications to monitoring a range of species, particularly in remote and isolated environments. PMID:23940805
Nendaz, Mathieu R; Gut, Anne M; Perrier, Arnaud; Louis-Simonet, Martine; Blondon-Choa, Katherine; Herrmann, François R; Junod, Alain F; Vu, Nu V
2006-01-01
BACKGROUND: Clinical experience, features of the data collection process, or both, affect diagnostic accuracy, but their respective roles are unclear. OBJECTIVE, DESIGN: Prospective, observational study to determine the respective contribution of clinical experience and data collection features to diagnostic accuracy. METHODS: Six internists, 6 second-year internal medicine residents, and 6 senior medical students worked up the same 7 cases with a standardized patient. Each encounter was audiotaped and immediately assessed by the subjects, who indicated the reasons underlying their data collection. We analyzed the encounters according to diagnostic accuracy, information collected, organ systems explored, diagnoses evaluated, and final decisions made, and we determined predictors of diagnostic accuracy by logistic regression models. RESULTS: Several features significantly predicted diagnostic accuracy after correction for clinical experience: early exploration of the correct diagnosis (odds ratio [OR] 24.35) or of relevant diagnostic hypotheses (OR 2.22) to frame clinical data collection, a larger number of diagnostic hypotheses evaluated (OR 1.08), and collection of relevant clinical data (OR 1.19). CONCLUSION: Some features of data collection and interpretation are related to diagnostic accuracy beyond clinical experience and should be explicitly included in clinical training and modeled by clinical teachers. Thoroughness in data collection should not be considered a privileged way to diagnostic success. PMID:17105525
Estis, Julie M; Dean-Claytor, Ashli; Moore, Robert E; Rowell, Thomas L
2011-03-01
The effects of musical interference and noise on pitch-matching accuracy were examined. Vocal training was explored as a factor influencing pitch-matching accuracy, and the relationship between pitch matching and pitch discrimination was examined. Twenty trained singers (TS) and 20 untrained individuals (UT) vocally matched tones in six conditions (immediate, four types of chords, noise). Fundamental frequencies were calculated, compared with the frequency of the target tone, and converted to semitone difference scores. A pitch discrimination task was also completed. TS showed significantly better pitch matching than UT across all conditions. Individual performances for UT were highly variable. Therefore, untrained participants were divided into two groups: 10 untrained accurate and 10 untrained inaccurate. Comparison of TS with untrained accurate individuals revealed significant differences between groups and across conditions. Compared with immediate vocal matching of target tones, pitch-matching accuracy was significantly reduced, given musical chord and noise interference unless the target tone was presented in the musical chord. A direct relationship between pitch matching and pitch discrimination was revealed. Across pitch-matching conditions, TS were consistently more accurate than UT. Pitch-matching accuracy diminished when auditory interference consisted of chords that did not contain the target tone and noise. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
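The semitone difference scores described above follow the standard conversion from a frequency ratio to semitones (12 semitones per octave, i.e. per doubling of frequency):

```python
import math

def semitone_difference(f_produced, f_target):
    """Signed pitch-matching error in semitones.

    Converts the ratio of the produced fundamental frequency to the
    target frequency: 12 * log2(f_produced / f_target).
    Positive = sharp, negative = flat, 0 = exact match.
    """
    return 12.0 * math.log2(f_produced / f_target)
```

For example, a produced tone one octave above the target scores +12 semitones, and a perfect match scores 0.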
ERIC Educational Resources Information Center
Perfect, Timothy J.; Weber, Nathan
2012-01-01
Explorations of memory accuracy control normally contrast forced-report with free-report performance across a set of items and show a trade-off between memory quantity and accuracy. However, this memory control framework has not been tested with lineup identifications that may involve rejection of all alternatives. A large-scale (N = 439) lineup…
Mining HIV protease cleavage data using genetic programming with a sum-product function.
Yang, Zheng Rong; Dalby, Andrew R; Qiu, Jing
2004-12-12
In order to design effective HIV inhibitors, studying and understanding the mechanism of HIV protease cleavage specificity is critical. Various methods have been developed to explore the specificity of HIV protease cleavage activity. However, success in both extracting discriminant rules and maintaining high prediction accuracy is still challenging. An earlier study employed genetic programming with a min-max scoring function to extract discriminant rules with success. However, the decision ultimately degenerates to a single residue, making further improvement of the prediction accuracy difficult. The challenge of revising the min-max scoring function to improve prediction accuracy motivated this study. This paper designs a new scoring function, called a sum-product function, for extracting HIV protease cleavage discriminant rules using genetic programming methods. The experiments show that the new scoring function is superior to the min-max scoring function. The software package can be obtained by request to Dr Zheng Rong Yang.
Performance of the Micropower Voltage Reference ADR3430 Under Extreme Temperatures
NASA Technical Reports Server (NTRS)
Patterson, Richard L.; Hammoud, Ahmad
2011-01-01
Electronic systems designed for use in space exploration systems are expected to be exposed to harsh temperatures. For example, operation at cryogenic temperatures is anticipated in space missions such as polar craters of the moon (-223 C), the James Webb Space Telescope (-236 C), Mars (-140 C), Europa (-223 C), Titan (-178 C), and other deep space probes away from the sun. Similarly, rovers and landers on the lunar surface, and deep space probes intended for the exploration of Venus, are expected to encounter high temperature extremes. Electronics capable of operation under extreme temperatures would not only meet the requirements of future space-based systems, but would also enhance efficiency and improve reliability by eliminating the thermal control elements that present electronics need for proper operation in the harsh environment of space. In this work, the performance of a micropower, high accuracy voltage reference was evaluated over a wide temperature range. The Analog Devices ADR3430 chip uses a patented voltage reference architecture to achieve high accuracy, low temperature coefficient, and low noise in a CMOS process [1]. The device combines two voltages of opposite temperature coefficients to create an output voltage that is almost independent of ambient temperature. It is rated for the industrial temperature range of -40 C to +125 C, and is ideal for use in low power precision data acquisition systems and in battery-powered devices. Table 1 shows some of the manufacturer's device specifications.
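The opposite-tempco combination described above can be illustrated with a toy linear model; the voltage and coefficient values below are invented for illustration and are not ADR3430 specifications:

```python
def reference_output(t_c, v1_25=1.2, tc1=2e-4, v2_25=0.6, tc2=-4e-4):
    """Toy model of combining two voltages with opposite temperature
    coefficients so the weighted sum is temperature-independent.

    v1 drifts up with temperature, v2 drifts down; weighting v2 by
    k = -tc1/tc2 cancels the first-order drift exactly.
    All parameter values are illustrative assumptions.
    """
    v1 = v1_25 + tc1 * (t_c - 25.0)   # proportional-to-temperature term
    v2 = v2_25 + tc2 * (t_c - 25.0)   # complementary-to-temperature term
    k = -tc1 / tc2                    # weight chosen so the drifts cancel
    return v1 + k * v2
```

With these example values the output sits at 1.5 V regardless of temperature, mirroring the architecture's goal of an output almost independent of ambient temperature.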
A detector interferometric calibration experiment for high precision astrometry
NASA Astrophysics Data System (ADS)
Crouzier, A.; Malbet, F.; Henault, F.; Léger, A.; Cara, C.; LeDuigou, J. M.; Preis, O.; Kern, P.; Delboulbe, A.; Martin, G.; Feautrier, P.; Stadler, E.; Lafrasse, S.; Rochat, S.; Ketchazo, C.; Donati, M.; Doumayrou, E.; Lagage, P. O.; Shao, M.; Goullioud, R.; Nemati, B.; Zhai, C.; Behar, E.; Potin, S.; Saint-Pe, M.; Dupont, J.
2016-11-01
Context. Exoplanet science has made staggering progress in the last two decades, due to the relentless exploration of new detection methods and refinement of existing ones. Yet astrometry offers a unique and untapped potential of discovery of habitable-zone low-mass planets around all the solar-like stars of the solar neighborhood. To fulfill this goal, astrometry must be paired with high precision calibration of the detector. Aims: We present a way to calibrate a detector for high accuracy astrometry. An experimental testbed combining an astrometric simulator and an interferometric calibration system is used to validate both the hardware needed for the calibration and the signal processing methods. The objective is an accuracy of 5 × 10^-6 pixel on the location of a Nyquist sampled polychromatic point spread function. Methods: The interferometric calibration system produced modulated Young fringes on the detector. The Young fringes were parametrized as products of time and space dependent functions, based on various pixel parameters. The minimization of function parameters was done iteratively, until convergence was obtained, revealing the pixel information needed for the calibration of astrometric measurements. Results: The calibration system yielded the pixel positions to an accuracy estimated at 4 × 10^-4 pixel. After including the pixel position information, an astrometric accuracy of 6 × 10^-5 pixel was obtained, for a PSF motion over more than five pixels. In the static mode (small jitter motion of less than 1 × 10^-3 pixel), a photon noise limited precision of 3 × 10^-5 pixel was reached.
Jones, J.W.; Jarnagin, T.
2009-01-01
Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products, high quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite-predicted area minus reference area) and relative error [(satellite-predicted area minus reference area) / reference area] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
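The absolute and relative error definitions used in this study translate directly to code (the function name is illustrative):

```python
def isa_errors(predicted_area, reference_area):
    """Absolute and relative error of a satellite ISA estimate against
    high-resolution reference data, as defined in the text.

    absolute = predicted - reference
    relative = (predicted - reference) / reference
    """
    absolute = predicted_area - reference_area
    relative = absolute / reference_area
    return absolute, relative
```

For instance, a satellite estimate of 95 ha against a reference of 100 ha gives an absolute error of -5 ha and a relative error of -5%, consistent with the reported ~5% underestimation.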
Spatiotemporal Bayesian networks for malaria prediction.
Haddawy, Peter; Hasan, A H M Imrul; Kasantikul, Rangwan; Lawpoolsri, Saranath; Sa-Angchai, Patiwat; Kaewkungwal, Jaranit; Singhasivanon, Pratap
2018-01-01
Targeted intervention and resource allocation are essential for effective malaria control, particularly in remote areas, with predictive models providing important information for decision making. While a diversity of modeling techniques has been used to create predictive models of malaria, no work has made use of Bayesian networks. Bayes nets are attractive due to their ability to represent uncertainty, model time-lagged and nonlinear relations, and provide explanations. This paper explores the use of Bayesian networks to model malaria, demonstrating the approach by creating village-level models with weekly temporal resolution for Tha Song Yang district in northern Thailand. The networks are learned using data on cases and environmental covariates. Three types of networks are explored: networks for numeric prediction, networks for outbreak prediction, and networks that incorporate spatial autocorrelation. Evaluation of the numeric prediction network shows that the Bayes net has prediction accuracy in terms of mean absolute error of about 1.4 cases for 1 week prediction and 1.7 cases for 6 week prediction. The network for outbreak prediction has an ROC AUC above 0.9 for all prediction horizons. Comparison of prediction accuracy of both Bayes nets against several traditional modeling approaches shows the Bayes nets to outperform the other models for longer time horizon prediction of high incidence transmission. To model the spread of malaria over space, we elaborate the models with links between the village networks. This results in some very large models which would be far too laborious to build by hand, so we represent the models as collections of probability logic rules and automatically generate the networks. Evaluation of the models shows that the autocorrelation links significantly improve prediction accuracy for some villages in regions of high incidence. We conclude that spatiotemporal Bayesian networks are a highly promising modeling alternative for prediction of malaria and other vector-borne diseases. Copyright © 2017 Elsevier B.V. All rights reserved.
Accuracy of tracking forest machines with GPS
M.W. Veal; S.E. Taylor; T.P. McDonald; D.K. McLemore; M.R. Dunn
2001-01-01
This paper describes the results of a study that measured the accuracy of using GPS to track the movement of forest machines. Two different commercially available GPS receivers (Trimble ProXR and GeoExplorer II) were used to track…
Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.
Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C
2014-02-01
It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). Image gradients are sensitive to image distortions, while different local structures in a distorted image suffer different degrees of degradation. This motivates us to explore the use of the global variation of a gradient-based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy (the standard deviation of the GMS map), can accurately predict perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
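A simplified sketch of the GMS map with deviation pooling described above; note that np.gradient and the constant c here are simplifying assumptions standing in for the paper's 3x3 Prewitt filters and its published stabilization constant:

```python
import numpy as np

def gmsd(ref, dist, c=0.0026):
    """Gradient Magnitude Similarity Deviation (simplified sketch).

    GMS(i) = (2*g_r*g_d + c) / (g_r^2 + g_d^2 + c) per pixel, then the
    standard deviation of the GMS map is the quality score.  The value
    of c assumed here is for images scaled to [0, 1].
    """
    def grad_mag(img):
        gy, gx = np.gradient(np.asarray(img, dtype=float))
        return np.hypot(gx, gy)

    g_r, g_d = grad_mag(ref), grad_mag(dist)
    gms = (2.0 * g_r * g_d + c) / (g_r ** 2 + g_d ** 2 + c)
    return float(gms.std())  # deviation pooling: larger = worse quality
```

An undistorted image yields a GMS map that is uniformly 1, so its deviation (and hence GMSD) is 0; distortion makes the map vary and the score rise.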
Color preference and familiarity in performance on brand logo recall.
Huang, Kuo-Chen; Lin, Chin-Chiuan; Chiang, Shu-Ying
2008-10-01
Two experiments assessed effects of color preference and brand-logo familiarity on recall performance. Exp. 1 used a forced-choice technique to explore the color preferences of 189 women and 63 men, Taiwanese college students ages 18 to 20 years (M = 19.4, SD = 1.5). The three most preferred colors, in order, were white, light blue, and black; the three least preferred were light orange, dark violet, and dark brown. Exp. 2 investigated the effects on recall of color preference (based on the results of Exp. 1) and brand-logo familiarity. A total of 27 women and 21 men, Taiwanese college students ages 18 to 20 years (M = 19.2, SD = 1.2), participated. They memorized a list of 24 logos (four logos shown in six colors) and then performed sequential recall. Analyses showed that color preference significantly affected recall accuracy: accuracy for highly preferred colors was significantly greater than for less preferred ones. Results showed no significant effects of brand-logo familiarity or sex on accuracy. In addition, the interactive effect of color preference and brand-logo familiarity on accuracy was significant. These results have implications for the design of brand logos to create and sustain memory of brand images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Yufeng; Tolic, Nikola; Purvine, Samuel O.
2011-11-07
The peptidome (i.e. processed and degraded forms of proteins) of e.g. blood can potentially provide insights into disease processes, as well as a source of candidate biomarkers that are unobtainable using conventional bottom-up proteomics approaches. MS dissociation methods, including CID, HCD, and ETD, can each contribute distinct identifications using conventional peptide identification methods (Shen et al. J. Proteome Res. 2011), but such samples still pose significant analysis and informatics challenges. In this work, we explored a simple approach for better utilization of the high-accuracy fragment ion mass measurements provided, e.g., by FT MS/MS, and demonstrate significant improvements relative to conventional descriptive and probabilistic scoring methods. For example, at the same FDR level we identified 20-40% more peptides than SEQUEST and Mascot scoring methods by using high-accuracy fragment ion information (e.g., <10 mass errors) from CID, HCD, and ETD spectra. Species identified covered >90% of all those identified by the SEQUEST, Mascot, and MS-GF scoring methods. Additionally, we found that merging the different fragment spectra provided >60% more species using the UStags method than achieved previously, and enabled >1000 peptidome components to be identified from a single human blood plasma sample at a 0.6% peptide-level FDR, providing an improved basis for investigation of potentially disease-related peptidome components.
A new topology and control method for electromagnetic transmitter power supplies
NASA Astrophysics Data System (ADS)
Zhang, Yiming; Zhang, Jialin; Yuan, Dakang
2017-04-01
As essential equipment for electromagnetic exploration, electromagnetic transmitters invert a steady power supply to a desired frequency and transmit the power through grounding electrodes. To obtain effective geophysical data during deep exploration, the transmitter must deliver high-voltage, high-current, high-accuracy output while remaining compact and light. Research on power supply technologies for high-voltage, high-power electromagnetic transmitters is therefore of significant importance to deep geophysical exploration. The performance of an electromagnetic transmitter is mainly determined by two aspects: the quality of the emitted current and voltage, and the power density. These requirements bring technical difficulties to the development of power supplies. Conventionally, high-frequency switching power supplies are applied in the design of a high-power transmitter power supply. However, their topology is complicated, which may reduce the controllability of the output voltage and the reliability of the system. Without power factor control, the power factor of this structure is relatively low; moreover, high switching frequency causes high losses. With the development of the PWM (pulse width modulation) technique, its merits of simple structure, low loss, convenient control, and unity power factor have made it popular in electrical energy feedback, active filtering, and power factor compensation. Studies have shown that PWM converters and space vector modulation have become the trend in transmitter power supply design. However, the earth load exhibits different impedances at different frequencies, so ensuring high-accuracy, stable output from a transmitter power supply in a harsh environment has become a key topic in the design of geophysical exploration instruments. Based on SVPWM technology, an electromagnetic transmitter power supply has been designed and its control strategy studied.
The transmitting system is composed of a power supply, an SVPWM converter, and a power inverter unit. Their functions are as follows: (1) power supply: a generator providing three-phase power; (2) SVPWM converter: converts AC to DC; (3) power inverter unit: converts DC to AC output whose frequency, amplitude, and waveform are variable. In the SVPWM technique, the active current and the reactive current are controlled separately and each variable is analyzed individually, so the power factor of the system is improved. By controlling the PWM converter at the generation side, any power factor can be obtained; usually the generation-side power factor is set to 1. Finally, simulation and experimental results validate both the correctness of the established model and the effectiveness of the control method: unity power factor is achieved at the input and steady current at the output. They also demonstrate that the electromagnetic transmitter power supply designed in this study can meet the practical needs of field geological exploration and improve the utilization of the transmitter system.
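As an illustration of the space-vector modulation underlying the converter control (this is the textbook SVPWM dwell-time computation, not the paper's specific controller; the DC-link voltage, reference amplitude, and switching period below are made-up values):

```python
import math

def svpwm_dwell(vref, theta, vdc, ts):
    """Dwell times for one SVPWM switching period.

    vref  : reference voltage vector magnitude
    theta : reference vector angle in [0, 2*pi)
    vdc   : DC-link voltage
    ts    : switching period
    Returns (t1, t2, t0): times on the two active vectors bounding the
    reference, and on the zero vector, with t1 + t2 + t0 == ts.
    """
    sector = int(theta // (math.pi / 3))      # sector index 0..5
    alpha = theta - sector * math.pi / 3      # angle inside the sector
    m = math.sqrt(3) * vref / vdc             # modulation index
    t1 = ts * m * math.sin(math.pi / 3 - alpha)
    t2 = ts * m * math.sin(alpha)
    return t1, t2, ts - t1 - t2
```

At the middle of a sector (alpha = 30 degrees) the two active-vector times are equal, which is a quick sanity check on the geometry.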
Pous-Serrano, S; Frasson, M; Palasí Giménez, R; Sanchez-Jordá, G; Pamies-Guilabert, J; Llavador Ros, M; Nos Mateu, P; Garcia-Granero, E
2017-05-01
To assess the accuracy of magnetic resonance enterography in predicting the extension, location and characteristics of the small bowel segments affected by Crohn's disease. This is a prospective study including a consecutive series of 38 patients with Crohn's disease of the small bowel who underwent surgery at a specialized colorectal unit of a tertiary hospital. Preoperative magnetic resonance enterography was performed in all patients, following a homogeneous protocol, within the 3 months prior to surgery. A thorough exploration of the small bowel was performed during the surgical procedure; calibration spheres were used according to the discretion of the surgeon. The accuracy of magnetic resonance enterography in detecting areas affected by Crohn's disease in the small bowel was assessed. The findings of magnetic resonance enterography were compared with surgical and pathological findings. Thirty-eight patients with 81 lesions were included in the study. During surgery, 12 lesions (14.8%) that were not described on magnetic resonance enterography were found. Seven of these were detected exclusively by the use of calibration spheres, passing unnoticed at surgical exploration. Magnetic resonance enterography had 90% accuracy in detecting the location of the stenosis (75.0% sensitivity, 95.7% specificity). Magnetic resonance enterography did not precisely diagnose the presence of an inflammatory phlegmon (accuracy 46.2%), but it was more accurate in detecting abscesses or fistulas (accuracy 89.9% and 98.6%, respectively). Magnetic resonance enterography is a useful tool in the preoperative assessment of patients with Crohn's disease. However, a thorough intra-operative exploration of the entire small bowel is still necessary. Colorectal Disease © 2017 The Association of Coloproctology of Great Britain and Ireland.
Li, Yongkai; Yi, Ming; Zou, Xiufen
2014-01-01
To gain insights into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noises on cell fate are explored at the single cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indication of cell fates. The strong dependence between the impulse of Cln1/2 and cell fates is exhibited. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes the parallel shift of the separation ratio of Whi5P but that increasing extrinsic fluctuations leads to the mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noises around an approximate linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decreases. PMID:25042292
NASA Astrophysics Data System (ADS)
Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.
2016-12-01
Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions and refinement in interesting areas. We define an efficient and effective evaluation metric based on Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee the achievement of the desired accuracy. The numerical results of several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Nature Science Foundation of China grants No. 41030746 and 41172206.
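The adaptive loop described above (initial design, candidate scoring, accuracy-at-new-points stopping rule) might be sketched in 1D as follows. This is a heavily simplified stand-in: a distance-only exploration score and a piecewise-linear surrogate replace the paper's Taylor-expansion metric and its exploration/refinement balance.

```python
import numpy as np

def adaptive_sample(f, lo, hi, tol=1e-2, n_init=4, max_pts=50, seed=0):
    """Grow a 1D sample set for a surrogate of f on [lo, hi].

    Candidates are scored by distance to existing samples (pure
    exploration); the loop stops when the current surrogate already
    predicts a freshly added point to within `tol`.
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(lo, hi, n_init)
    y = f(x)
    while len(x) < max_pts:
        cand = rng.uniform(lo, hi, 200)
        dist = np.abs(cand[:, None] - x[None, :]).min(axis=1)
        new = cand[np.argmax(dist)]        # most isolated candidate
        pred = np.interp(new, x, y)        # piecewise-linear surrogate
        err = abs(pred - f(new))
        x = np.sort(np.append(x, new))
        y = f(x)
        if err < tol:                      # new point was already well
            break                          # predicted: stop sampling
    return x, y
```

A real implementation would also weight candidates by a local-accuracy (refinement) term, which is the part the paper's Taylor-based metric supplies.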
NASA Astrophysics Data System (ADS)
Aumann, T.; Bertulani, C. A.; Schindler, F.; Typel, S.
2017-12-01
An experimentally constrained equation of state of neutron-rich matter is fundamental for the physics of nuclei and the astrophysics of neutron stars, mergers, core-collapse supernova explosions, and the synthesis of heavy elements. To this end, we investigate the potential of constraining the density dependence of the symmetry energy close to saturation density through measurements of neutron-removal cross sections in high-energy nuclear collisions of 0.4 to 1 GeV/nucleon. We show that the sensitivity of the total neutron-removal cross section is high enough so that the required accuracy can be reached experimentally with the recent developments of new detection techniques. We quantify two crucial points to minimize the model dependence of the approach and to reach the required accuracy: the contribution to the cross section from inelastic scattering has to be measured separately in order to allow a direct comparison of experimental cross sections to theoretical cross sections based on density functional theory and eikonal theory. The accuracy of the reaction model should be investigated and quantified by the energy and target dependence of various nucleon-removal cross sections. Our calculations explore the dependence of neutron-removal cross sections on the neutron skin of medium-heavy neutron-rich nuclei, and we demonstrate that the slope parameter L of the symmetry energy could be constrained down to ±10 MeV by such a measurement, with a 2% accuracy of the measured and calculated cross sections.
Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City
2016-01-01
The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast. PMID:27855155
Overestimation of threat from neutral faces and voices in social anxiety.
Peschard, Virginie; Philippot, Pierre
2017-12-01
Social anxiety (SA) is associated with a tendency to interpret social information in a more threatening manner. Most of the research in SA has focused on unimodal exploration (mostly based on facial expressions), thus neglecting the ubiquity of cross-modality. To fill this gap, the present study sought to explore whether SA influences the interpretation of facial and vocal expressions presented separately or jointly. Twenty-five high socially anxious (HSA) and 29 low socially anxious (LSA) participants completed a two-alternative forced-choice emotion identification task with angry and neutral expressions conveyed by faces, voices, or combined faces and voices. Participants had to identify the emotion (angry or neutral) of the presented cues as quickly and precisely as possible. Our results showed that, compared to LSA, HSA individuals show a higher propensity to misattribute anger to neutral expressions independent of cue modality and despite preserved decoding accuracy. We also found a cross-modal facilitation effect at the level of accuracy (i.e., higher accuracy in the bimodal condition compared to unimodal ones). However, this effect was not moderated by SA. Although the HSA group showed clinical cut-off scores on the Liebowitz Social Anxiety Scale, one limitation is that we did not administer diagnostic interviews. Upcoming studies may want to test whether these results can be generalized to a clinical population. These findings highlight the usefulness of a cross-modal perspective to probe the specificity of biases in SA. Copyright © 2017 Elsevier Ltd. All rights reserved.
Exploring the effect of diffuse reflection on indoor localization systems based on RSSI-VLC.
Mohammed, Nazmi A; Elkarim, Mohammed Abd
2015-08-10
This work explores and evaluates the effect of diffuse light reflection on the accuracy of indoor localization systems based on visible light communication (VLC) in a high reflectivity environment using a received signal strength indication (RSSI) technique. The effect of the essential receiver (Rx) and transmitter (Tx) parameters on the localization error with different transmitted LED power and wall reflectivity factors is investigated at the worst Rx coordinates for a directed/overall link. Since this work assumes harsh operating conditions (i.e., a multipath model, high reflectivity surfaces, worst Rx position), an error of ≥ 1.46 m is found. To achieve a localization error in the range of 30 cm under these conditions with moderate LED power (i.e., P = 0.45 W), low reflectivity walls (i.e., ρ = 0.1) should be used, which would enable a localization error of approximately 7 mm at the room's center.
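The RSSI-to-distance step such systems rely on can be sketched from the standard line-of-sight Lambertian VLC link model. This is an illustrative sketch under simplifying assumptions: a single LOS path only (the paper's diffuse-reflection terms are omitted), receiver facing straight up, and made-up values for the Lambertian order, photodiode area, and LED height.

```python
import math

M = 1          # Lambertian order (60-degree semi-angle LED); assumed
AREA = 1e-4    # photodiode active area in m^2; assumed
H = 2.0        # LED height above the receiver plane in m; assumed

def received_power(pt, d):
    """LOS received power over link distance d (m).

    With the LED pointing down and the photodiode facing up, both the
    irradiance and incidence angles have cosine H/d, giving
    Pr = Pt * (M+1) * AREA * (H/d)^(M+1) / (2*pi*d^2).
    """
    return pt * (M + 1) * AREA * (H / d) ** (M + 1) / (2 * math.pi * d**2)

def distance_from_rssi(pt, pr):
    """Invert the LOS model: d = (Pt*(M+1)*AREA*H^(M+1) / (2*pi*Pr))^(1/(M+3))."""
    return (pt * (M + 1) * AREA * H ** (M + 1) / (2 * math.pi * pr)) ** (1 / (M + 3))
```

In a real room, reflected power adds to `pr`, which biases the inverted distance low; that bias is exactly the error source the study quantifies.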
NASA Technical Reports Server (NTRS)
Almeida, Eduardo DeBrito
2012-01-01
This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.
Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm
NASA Technical Reports Server (NTRS)
Collins, Curtis L.; Robinson, Matthew L.
2013-01-01
The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm operational workspace. Error sources at each link in the arm kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.
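Propagating per-link error sources to the tool frame, as described above, can be illustrated with a Monte Carlo sketch on a planar two-link stand-in. Everything here is assumed for illustration (link lengths, error sigmas, a 2-DOF chain instead of MSL's 5-DOF arm with stiffness and thermal terms): sample joint-angle errors, push them through forward kinematics, and report the spread of the tool position.

```python
import numpy as np

L1, L2 = 1.0, 0.8            # link lengths in m; made-up values
SIG = np.radians(0.1)        # 1-sigma joint-angle error; made-up value

def tool_xy(q1, q2):
    """Forward kinematics of a planar 2-link arm (tool-frame position)."""
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return x, y

def tool_sigma(q1, q2, n=20000, seed=0):
    """Monte Carlo 1-sigma tool-position spread from joint-angle errors."""
    rng = np.random.default_rng(seed)
    xs, ys = tool_xy(q1 + rng.normal(0, SIG, n),
                     q2 + rng.normal(0, SIG, n))
    return float(xs.std()), float(ys.std())
```

With 0.1-degree joint errors and roughly 1.8 m of reach, the tool-position spread comes out at the few-millimeter level, which shows why sub-degree joint calibration matters for end-to-end placement requirements.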
A comparison of haptic material perception in blind and sighted individuals.
Baumgartner, Elisabeth; Wiebel, Christiane B; Gegenfurtner, Karl R
2015-10-01
We investigated material perception in blind participants to explore the influence of visual experience on material representations and the relationship between visual and haptic material perception. In a previous study with sighted participants, we had found participants' visual and haptic judgments of material properties to be very similar (Baumgartner, Wiebel, & Gegenfurtner, 2013). In a categorization task, however, visual exploration had led to higher categorization accuracy than haptic exploration. Here, we asked congenitally blind participants to explore different materials haptically and rate several material properties in order to assess the role of the visual sense for the emergence of haptic material perception. Principal components analyses combined with a procrustes superimposition showed that the material representations of blind and blindfolded sighted participants were highly similar. We also measured haptic categorization performance, which was equal for the two groups. We conclude that haptic material representations can emerge independently of visual experience, and that there are no advantages for either group of observers in haptic categorization. Copyright © 2015 Elsevier Ltd. All rights reserved.
Coupled Integration of CSAC, MIMU, and GNSS for Improved PNT Performance
Ma, Lin; You, Zheng; Liu, Tianyi; Shi, Shuai
2016-01-01
Positioning, navigation, and timing (PNT) is a strategic key technology widely used in military and civilian applications. Global navigation satellite systems (GNSS) are the most important PNT techniques. However, the vulnerability of GNSS threatens PNT service quality, and integrations with other information are necessary. A chip scale atomic clock (CSAC) provides high-precision frequency and high-accuracy time information in a short time. A micro inertial measurement unit (MIMU) provides a strap-down inertial navigation system (SINS) with rich navigation information, better real-time feed, anti-jamming, and error accumulation. This study explores the coupled integration of CSAC, MIMU, and GNSS to enhance PNT performance. The architecture of coupled integration is designed and degraded when any subsystem fails. A mathematical model for a precise time aiding navigation filter is derived rigorously. The CSAC aids positioning by weighted linear optimization when the visible satellite number is four or larger. By contrast, CSAC converts the GNSS observations to range measurements by “clock coasting” when the visible satellite number is less than four, thereby constraining the error divergence of micro inertial navigation and improving the availability of GNSS signals and the positioning accuracy of the integration. Field vehicle experiments, both in open-sky area and in a harsh environment, show that the integration can improve the positioning probability and accuracy. PMID:27187399
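The "clock coasting" idea above (a CSAC-held receiver clock bias turning pseudoranges into true ranges, so three satellites suffice for a 3D fix) can be sketched as a small Gauss-Newton solve. This is an illustrative sketch, not the paper's filter: satellite positions, units, and the initial guess are made up, and the real system fuses this with MIMU/SINS states.

```python
import numpy as np

def fix_position(sats, pseudoranges, clock_bias_m, x0, iters=10):
    """3D position fix with a known receiver clock bias (in meters).

    sats         : (k, 3) satellite positions
    pseudoranges : (k,) measured pseudoranges
    clock_bias_m : receiver clock bias held by the CSAC, as a range
    x0           : initial position guess
    With the bias known, k = 3 satellites are enough (vs. 4 unknowns
    otherwise), which is the availability gain of clock coasting.
    """
    x = np.asarray(x0, float)
    for _ in range(iters):
        ranges = np.linalg.norm(sats - x, axis=1)
        # Subtracting the known bias converts pseudoranges to ranges.
        resid = (pseudoranges - clock_bias_m) - ranges
        # Jacobian of range w.r.t. position: unit vectors toward the receiver.
        G = (x - sats) / ranges[:, None]
        dx, *_ = np.linalg.lstsq(G, resid, rcond=None)
        x = x + dx
    return x
```

With four or more satellites the same least-squares form extends to estimate the bias as a fourth unknown, which is the weighted linear optimization case the abstract mentions.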
van der Esch, M; Knoop, J; Hunter, D J; Klein, J-P; van der Leeden, M; Knol, D L; Reiding, D; Voorneman, R E; Gerritsen, M; Roorda, L D; Lems, W F; Dekker, J
2013-05-01
Osteoarthritis (OA) of the knee is characterized by pain and activity limitations. In knee OA, proprioceptive accuracy is reduced and might be associated with pain and activity limitations. Although causes of reduced proprioceptive accuracy are divergent, medial meniscal abnormalities, which are highly prevalent in knee OA, have been suggested to play an important role. No study has focussed on the association between proprioceptive accuracy and meniscal abnormalities in knee OA. To explore the association between reduced proprioceptive accuracy and medial meniscal abnormalities in a clinical sample of knee OA subjects. Cross-sectional study in 105 subjects with knee OA. Knee proprioceptive accuracy was assessed by determining the joint motion detection threshold in the knee extension direction. The knee was imaged with a 3.0 T magnetic resonance (MR) scanner. Number of regions with medial meniscal abnormalities and the extent of abnormality in the anterior and posterior horn and body were scored according to the Boston-Leeds Osteoarthritis Knee Score (BLOKS) method. Multiple regression analyses were used to examine whether reduced proprioceptive accuracy was associated with medial meniscal abnormalities in knee OA subjects. Mean proprioceptive accuracy was 2.9° ± 1.9°. Magnetic resonance imaging (MRI)-detected medial meniscal abnormalities were found in the anterior horn (78%), body (80%) and posterior horn (90%). Reduced proprioceptive accuracy was associated with both the number of regions with meniscal abnormalities (P < 0.01) and the extent of abnormality (P = 0.02). These associations were not confounded by muscle strength, joint laxity, pain, age, gender, body mass index (BMI) and duration of knee complaints. This is the first study showing that reduced proprioceptive accuracy is associated with medial meniscal abnormalities in knee OA.
The study highlights the importance of meniscal abnormalities in understanding reduced proprioceptive accuracy in persons with knee OA. Copyright © 2013 Osteoarthritis Research Society International. All rights reserved.
Sadikaj, Gentiana; Moskowitz, D S; Zuroff, David C
2015-08-01
High intrapersonal variability has frequently been found to be related to poor personal and interpersonal outcomes. Little research has examined processes by which intrapersonal variability influences outcomes. This study explored the relation of intrapersonal variability in negative affect (negative affect flux) to accuracy and bias in the perception of a romantic partner's quarrelsome behavior. A sample of 93 cohabiting couples participated in a study using an event-contingent recording (ECR) methodology in which they reported their negative affect, quarrelsome behavior, and perception of their partner's quarrelsome behavior in interactions with each other during a 20-day period. Negative affect flux was operationalized as the within-person standard deviation of negative affect scores across couple interactions. Findings suggested that participants were both accurate in tracking changes in their partner's quarrelsome behavior and biased in assuming their partner's quarrelsome behavior mirrored their own quarrelsome behavior. Negative affect flux moderated both accuracy and bias of assumed similarity such that participants with higher flux manifested both greater tracking accuracy and larger bias of assumed similarity. Negative affect flux may be related to enhanced vigilance to close others' negative behavior, which may explain higher tracking accuracy and propensity to rely on a person's own negative behavior as a means of judging others' negative behavior. These processes may augment these individuals' negative interpersonal behavior, enhance cycles of negative social interactions, and lead to poor intrapersonal and interpersonal outcomes.
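As the abstract notes, flux is operationalized as the within-person standard deviation of negative-affect scores across a person's recorded interactions; a minimal sketch with made-up affect ratings:

```python
import numpy as np

def flux(scores):
    """Negative affect flux: within-person SD of event-contingent
    negative-affect ratings (sample SD, ddof=1)."""
    return float(np.std(scores, ddof=1))

person_a = [2.0, 2.1, 1.9, 2.0]   # stable negative affect -> low flux
person_b = [1.0, 4.0, 0.5, 3.5]   # variable negative affect -> high flux
assert flux(person_b) > flux(person_a)
```

In the study's multilevel models, this person-level statistic is then tested as a moderator of the within-couple accuracy and assumed-similarity slopes.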
Autonomous Navigation With Ground Station One-Way Forward-Link Doppler Data
NASA Technical Reports Server (NTRS)
Horstkamp, G. M.; Niklewski, D. J.; Gramling, C. J.
1996-01-01
The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) has spent several years developing operational onboard navigation systems (ONS's) to provide real-time, autonomous, highly accurate navigation products for spacecraft using NASA's space and ground communication systems. The highly successful Tracking and Data Relay Satellite (TDRSS) ONS (TONS) experiment on the Explorer Platform/Extreme Ultraviolet (EP/EUV) spacecraft, launched on June 7, 1992, flight-demonstrated the ONS for high accuracy navigation using TDRSS forward link communication services. In late 1994, a similar ONS experiment was performed using EP/EUV flight hardware (the ultrastable oscillator and Doppler extractor card in one of the TDRSS transponders) and ground system software to demonstrate the feasibility of using an ONS with ground station forward link communication services. This paper provides a detailed evaluation of ground station-based ONS performance of data collected over a 20-day period. The ground station ONS (GONS) experiment results are used to project the expected performance of an operational system. The GONS processes Doppler data derived from scheduled ground station forward link services using a sequential estimation algorithm enhanced by a sophisticated process noise model to provide onboard orbit and frequency determination. Analysis of the GONS experiment performance indicates that real-time onboard position accuracies of better than 125 meters (1 sigma) are achievable with two or more 5-minute contacts per day for the EP/EUV 525 kilometer altitude, 28.5 degree inclination orbit. GONS accuracy is shown to be a function of the fidelity of the onboard propagation model, the frequency/geometry of the tracking contacts, and the quality of the tracking measurements. GONS provides a viable option for using autonomous navigation to reduce operational costs for upcoming spacecraft missions with moderate position accuracy requirements.
Challenges and Issues of Radiation Damage Tools for Space Missions
NASA Astrophysics Data System (ADS)
Tripathi, Ram; Wilson, John
2006-04-01
NASA has a new vision for space exploration in the 21st century encompassing a broad range of human and robotic missions, including missions to the Moon, Mars, and beyond. Exposure to the hazards of severe space radiation in deep-space, long-duration missions is "the show stopper." Thus, protection from the hazards of severe space radiation is of paramount importance for the new vision. Accurate risk assessments critically depend on the accuracy of the input information about the interaction of ions with materials, electronics, and tissues. A huge amount of essential experimental information, for all the ions in space across the periodic table and for a range of energies spanning many orders of magnitude (up to a trillion), is needed for radiation protection engineering for space missions; this information is simply not available (due to the high costs) and probably never will be. In addition, the accuracy of the input information and database is very critical and of paramount importance for space exposure assessments, particularly in view of the agency's vision for deep space exploration. The vital role and importance of nuclear physics for space missions, and the related challenges and issues, will be discussed, and a few examples will be presented.
Experience Gained From Launch and Early Orbit Support of the Rossi X-Ray Timing Explorer (RXTE)
NASA Technical Reports Server (NTRS)
Fink, D. R.; Chapman, K. B.; Davis, W. S.; Hashmall, J. A.; Shulman, S. E.; Underwood, S. C.; Zsoldos, J. M.; Harman, R. R.
1996-01-01
This paper reports the results to date of early mission support provided by the personnel of the Goddard Space Flight Center Flight Dynamics Division (FDD) for the Rossi X-Ray Timing Explorer (RXTE) spacecraft. For this mission, the FDD supports onboard attitude determination and ephemeris propagation by supplying ground-based orbit and attitude solutions and calibration results. The first phase of that support was to provide launch window analyses. As the launch window was determined, acquisition attitudes were calculated and calibration slews were planned. Postlaunch, these slews provided the basis for ground-determined calibration. Ground-determined calibration results are used to improve the accuracy of onboard solutions. The FDD is applying new calibration tools designed to facilitate use of the simultaneous, high-accuracy star observations from the two RXTE star trackers for ground attitude determination and calibration. An evaluation of the performance of these tools is presented. The FDD provides updates to the onboard star catalog based on preflight analysis and analysis of flight data. The in-flight results of the mission support in each area are summarized and compared with pre-mission expectations.
Results and Implications of Mineralogical Models for Chemical Sediments at Meridiani Planum
NASA Technical Reports Server (NTRS)
Clark, B. C.; McLennan, S. M.; Morris, R. V.; Gellert, R.; Jolliff, B.; Knoll, A.; Lowenstein, T. K.; Ming, D. W.; Tosca, N. J.; Christensen, P. R.
2005-01-01
The Mars Exploration Rover (MER) "Opportunity" has explored chemically enriched sedimentary outcrops at Meridiani Planum, Mars. In its first year, three different crater sites - Eagle, Fram and Endurance - were explored. Nineteen high-interest outcrop rocks were investigated by first grinding a hole to reach the interior (using the Rock Abrasion Tool, RAT), and then conducting APXS (alpha particle x-ray spectrometry) analysis, MB (Mössbauer) analysis, and close-up imaging (MI, microscopic imager). Sixteen elements and four Fe-bearing minerals were assayed to good accuracy in each sample, producing 380 compositional data points. The Miniature Thermal Emission Spectrometer (Mini-TES) obtained spectra on outcrop materials that provide direct indication of several mineral classes. Preliminary reports on Eagle crater and three RAT samples have been published. Chemical trends and a derived mineralogical model for all RAT'd outcrop samples to date have been developed.
1961-01-01
This is a comparison illustration of the Redstone, Jupiter-C, and Mercury Redstone launch vehicles. The Redstone ballistic missile was a high-accuracy, liquid-propelled, surface-to-surface missile. Originally developed as a nose cone re-entry test vehicle for the Jupiter intermediate range ballistic missile, the Jupiter-C was a modification of the Redstone missile and successfully launched the first American satellite, Explorer-1, into orbit on January 31, 1958. The Mercury Redstone lifted off carrying the first American, astronaut Alan Shepard, in his Mercury spacecraft Freedom 7, on May 5, 1961.
NASA Astrophysics Data System (ADS)
Zeng, Jing; Huang, Handong; Li, Huijie; Miao, Yuxin; Wen, Junxiang; Zhou, Fei
2017-12-01
The main emphasis of exploration and development is shifting from simple structural reservoirs to complex reservoirs, which are characterized by complex structure, thin reservoir thickness and large burial depth. Faced with these complex geological features, hydrocarbon detection technology gives a direct indication of changes in hydrocarbon reservoirs and is a good approach for delimiting the distribution of underground reservoirs. It is common to utilize the time-frequency (TF) features of seismic data in detecting hydrocarbon reservoirs. We therefore investigate the complex-domain matching pursuit (CDMP) method and propose some improvements. The first is the introduction of a scale parameter, which corrects the defect that atomic waveforms change only with the frequency parameter; this not only decomposes the seismic signal with high accuracy and efficiency but also reduces the number of iterations. We also integrate a jumping search with an ergodic search to improve computational efficiency while maintaining reasonable accuracy. We then combine the improved CDMP with the Wigner-Ville distribution to obtain a high-resolution TF spectrum. A one-dimensional modeling experiment demonstrates the validity of our method. Based on the low-frequency-domain reflection coefficient in fluid-saturated porous media, we finally obtain an approximation formula for the mobility attributes of reservoir fluid. This approximation formula is used as a hydrocarbon identification factor to predict deep-water gas-bearing sand of the M oil field in the South China Sea. The results are consistent with the actual well test results, and our method can help inform future exploration of deep-water gas reservoirs.
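The CDMP method belongs to the matching pursuit family: greedily correlate the residual with a dictionary of atoms, subtract the best-matching atom, and repeat. A minimal generic sketch of this greedy loop (the paper's complex-domain atoms and scale parameter are not reproduced, and the identity dictionary below is purely illustrative):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedy matching pursuit over a dictionary of unit-norm atoms
    (columns). A generic sketch of the MP family CDMP belongs to."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        proj = dictionary.T @ residual          # correlation with each atom
        k = np.argmax(np.abs(proj))             # best-matching atom
        coeffs[k] += proj[k]
        residual -= proj[k] * dictionary[:, k]  # subtract its contribution
    return coeffs, residual

# With an orthonormal (identity) dictionary, MP recovers a sparse
# signal exactly in as many iterations as it has nonzeros.
D = np.eye(5)
sig = np.array([0.0, 3.0, 0.0, -2.0, 1.0])
c, r = matching_pursuit(sig, D, n_atoms=3)
print(np.allclose(c, sig), round(float(np.linalg.norm(r)), 6))  # → True 0.0
```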
Soni, Jalpa; Purwar, Harsh; Lakhotia, Harshit; Chandel, Shubham; Banerjee, Chitram; Kumar, Uday; Ghosh, Nirmalya
2013-07-01
A novel spectroscopic Mueller matrix system has been developed and explored for both fluorescence and elastic scattering polarimetric measurements from biological tissues. The 4 × 4 Mueller matrix measurement strategy is based on sixteen spectrally resolved (λ = 400 - 800 nm) measurements performed by sequentially generating and analyzing four elliptical polarization states. Eigenvalue calibration of the system ensured high accuracy of Mueller matrix measurement over a broad wavelength range, either for forward or backscattering geometry. The system was explored for quantitative fluorescence and elastic scattering spectroscopic polarimetric studies on normal and precancerous tissue sections from human uterine cervix. The fluorescence spectroscopic Mueller matrices yielded an interesting diattenuation parameter, exhibiting differences between normal and precancerous tissues.
Laboratory Astrophysics: Enabling Scientific Discovery and Understanding
NASA Technical Reports Server (NTRS)
Kirby, K.
2006-01-01
NASA's Science Strategic Roadmap for Universe Exploration lays out a series of science objectives on a grand scale and discusses the various missions, over a wide range of wavelengths, that will enable discovery. Astronomical spectroscopy is arguably the most powerful tool we have for exploring the Universe. Experimental and theoretical studies in laboratory astrophysics convert "hard-won data into scientific understanding". However, the development of instruments with increasingly high spectroscopic resolution demands atomic and molecular data of unprecedented accuracy and completeness. How to meet these needs in a time of severe budgetary constraints poses a significant challenge to NASA, to astronomical observers and model-builders, and to the laboratory astrophysics community. I will discuss these issues, together with some recent examples of productive astronomy/lab-astro collaborations.
Miller, R.; Black, W.; Miele, M.; Morgan, T.; Ivanov, J.; Xia, J.; Peterie, S.
2011-01-01
A high-resolution seismic reflection investigation mapped reflectors and identified characteristics potentially influencing the interpretation of the hydrogeology underlying a portion of the Oxnard Plain in Ventura County, California. The design and implementation of this study were heavily influenced by high levels of cultural noise from vehicles, power lines, roads, manufacturing facilities, and underground utilities/vaults. Acquisition and processing flows were tailored to this noisy environment and the relatively shallow target interval. Layering within both the upper and lower aquifer systems was delineated at a vertical resolution potential of around 2.5 m at 350 m depth. © 2011 Society of Exploration Geophysicists.
SU-F-J-95: Impact of Shape Complexity On the Accuracy of Gradient-Based PET Volume Delineation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dance, M; Wu, G; Gao, Y
2016-06-15
Purpose: To explore the correlation of tumor shape complexity with PET target volume accuracy when delineated with a gradient-based segmentation tool. Methods: A total of 24 clinically realistic digital PET Monte Carlo (MC) phantoms of NSCLC were used in the study. The phantoms simulated 29 thoracic lesions (lung primary and mediastinal lymph nodes) of varying size, shape, location, and ¹⁸F-FDG activity. A program was developed to calculate a curvature vector along the outline, and the standard deviation of this vector was used as a metric to quantify a shape's "complexity score". This complexity score was calculated for standard geometric shapes and for the MC-generated target volumes in the PET phantom images. All lesions were contoured using a commercially available gradient-based segmentation tool, and the differences in volume from the MC-generated volumes were calculated as the measure of segmentation accuracy. Results: The average absolute percent difference between the MC volumes and the gradient-based volumes was 11% (0.4%-48.4%). The complexity score correlated strongly with standard geometric shapes. However, no relationship was found between the complexity score and the accuracy of gradient-based segmentation on the MC-simulated tumors (R² = 0.156). When the lesions were grouped into primary lung lesions and mediastinal/mediastinal-adjacent lesions, the average absolute percent differences in volume were 6% and 29%, respectively; the former group is more isolated, while the latter is more surrounded by tissues with relatively high SUV background. Conclusion: The shape complexity of NSCLC lesions has little effect on the accuracy of the gradient-based segmentation method and thus is not a good predictor of uncertainty in target volume delineation. Location of a lesion within a relatively high-SUV background may play a more significant role in the accuracy of gradient-based segmentation.
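The complexity metric described above, the standard deviation of a curvature vector computed along the outline, can be sketched for a 2-D contour as follows. The finite-difference curvature estimator and the test shapes are assumptions, not the study's exact implementation:

```python
import numpy as np

def complexity_score(contour):
    """Standard deviation of signed curvature along a sampled outline,
    a sketch of the paper's shape 'complexity score'."""
    pts = np.asarray(contour, dtype=float)
    # First and second finite-difference derivatives along the outline
    dx, dy = np.gradient(pts[:, 0]), np.gradient(pts[:, 1])
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # Signed curvature of a parametric curve (parametrization-invariant)
    denom = (dx**2 + dy**2) ** 1.5
    kappa = (dx * ddy - dy * ddx) / np.where(denom == 0, 1e-12, denom)
    return float(np.std(kappa))

# A circle has constant curvature (score near zero); a star-like
# outline has strongly varying curvature (higher score).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
star = np.c_[(1 + 0.3 * np.sin(5 * t)) * np.cos(t),
             (1 + 0.3 * np.sin(5 * t)) * np.sin(t)]
print(complexity_score(circle) < complexity_score(star))  # → True
```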
Impaired Eye Region Search Accuracy in Children with Autistic Spectrum Disorders
Pruett, John R.; Hoertel, Sarah; Constantino, John N.; LaMacchia Moll, Angela; McVey, Kelly; Squire, Emma; Feczko, Eric; Povinelli, Daniel J.; Petersen, Steven E.
2013-01-01
To explore mechanisms underlying reduced fixation of eyes in autism, children with Autistic Spectrum Disorders (ASD) and typically developing children were tested in five visual search experiments: simple color feature; color-shape conjunction; face in non-face objects; mouth region; and eye region. No group differences were found for reaction time profile shapes in any of the five experiments, suggesting intact basic search mechanics in children with ASD. Contrary to early reports in the literature, but consistent with other more recent findings, we observed no superiority for conjunction search in children with ASD. Importantly, children with ASD did show reduced accuracy for eye region search (p = .005), suggesting that eyes contribute less to high-level face representations in ASD or that there is an eye region-specific disruption to attentional processes engaged by search in ASD. PMID:23516446
GPR random noise reduction using BPD and EMD
NASA Astrophysics Data System (ADS)
Ostoori, Roya; Goudarzi, Alireza; Oskooi, Behrooz
2018-04-01
Ground-penetrating radar (GPR) is a high-frequency technology that accurately images near-surface objects and structures. The high-frequency antenna of the GPR system makes it a high-resolution method compared to other geophysical methods. The frequency range of recorded GPR data is so wide that recording random noise during acquisition is inevitable. This kind of noise comes from unknown sources, and its correlation to adjacent traces is nearly zero. This characteristic of random noise, together with the high accuracy of the GPR system, makes denoising very important for interpretable results. The main objective of this paper is to reduce GPR random noise using basis pursuit denoising (BPD) combined with empirical mode decomposition (EMD). Our results on both synthetic and real examples show that EMD combined with BPD provides satisfactory outputs, owing to the sifting process, compared to a time-domain implementation of the BPD method alone. Because of the high computational costs, however, the BPD-EMD technique should only be used for heavily noisy signals.
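Basis pursuit denoising solves a sparsity-regularized least-squares problem; one standard solver is iterative soft thresholding (ISTA). A generic numpy sketch under that assumption (the paper's EMD-based pipeline and its dictionary are not reproduced; the random dictionary is illustrative):

```python
import numpy as np

def ista_bpd(D, y, lam, n_iter=500):
    """Solve min_x 0.5||y - Dx||^2 + lam*||x||_1 by iterative soft
    thresholding (ISTA), a generic BPD solver sketch."""
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)                    # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Recover a sparse coefficient vector from noisy observations.
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
y = D @ x_true + 0.05 * rng.standard_normal(100)
x_hat = ista_bpd(D, y, lam=0.5)
print(np.argsort(-np.abs(x_hat))[:3])  # indices of the largest coefficients
```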
The Accuracy of Computer-Assisted Feedback and Students' Responses to It
ERIC Educational Resources Information Center
Lavolette, Elizabeth; Polio, Charlene; Kahng, Jimin
2015-01-01
Various researchers in second language acquisition have argued for the effectiveness of immediate rather than delayed feedback. In writing, truly immediate feedback is impractical, but computer-assisted feedback provides a quick way of providing feedback that also reduces the teacher's workload. We explored the accuracy of feedback from…
Utilization of the Deep Space Atomic Clock for Europa Gravitational Tide Recovery
NASA Technical Reports Server (NTRS)
Seubert, Jill; Ely, Todd
2015-01-01
Estimation of Europa's gravitational tide can provide strong evidence of the existence of a subsurface liquid ocean. Due to limited close-approach tracking data, a Europa flyby mission suffers strong coupling between the gravity solution quality and tracking data quantity and quality. This work explores utilizing Low Gain Antennas (LGAs) with the Deep Space Atomic Clock (DSAC) to provide abundant high-accuracy uplink-only radiometric tracking data. DSAC's performance, expected to exhibit an Allan deviation of less than 3e-15 at one day, provides long-term stability and accuracy on par with the Deep Space Network ground clocks, enabling one-way radiometric tracking data with accuracy equivalent to that of its two-way counterpart. The feasibility of uplink-only Doppler tracking via the coupling of LGAs and DSAC and the expected Doppler data quality are presented. Violations of the Kalman filter's linearization assumptions when state perturbations are included in the flyby analysis result in poor determination of the Europa gravitational tide parameters. B-plane targeting constraints are statistically determined, and a solution to the linearization issues via pre-flyby approach orbit determination is proposed and demonstrated.
Novel X-ray Communication Based XNAV Augmentation Method Using X-ray Detectors
Song, Shibin; Xu, Luping; Zhang, Hua; Bai, Yuanjie
2015-01-01
The further development of X-ray pulsar-based NAVigation (XNAV) is hindered by its lack of accuracy, so accuracy improvement has become a critical issue for XNAV. In this paper, an XNAV augmentation method which utilizes both pulsar observation and X-ray ranging observation for navigation filtering is proposed to deal with this issue. As a newly emerged concept, X-ray communication (XCOM) shows great potential in space exploration. X-ray ranging, derived from XCOM, could achieve high accuracy in range measurement, which could provide accurate information for XNAV. For the proposed method, the measurement models of pulsar observation and range measurement observation are established, and a Kalman filtering algorithm based on the observations and orbit dynamics is proposed to estimate the position and velocity of a spacecraft. A performance comparison of the proposed method with the traditional pulsar observation method is conducted by numerical experiments. Besides, the parameters that influence the performance of the proposed method, such as the pulsar observation time, the SNR of the ranging signal, etc., are analyzed and evaluated by numerical experiments. PMID:26404295
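The proposed filter fuses two measurement types of very different accuracy (pulsar phase versus X-ray range). The core idea can be sketched with a scalar Kalman measurement update applied twice; the state, variances, and measurement values below are illustrative, not mission values:

```python
# Scalar Kalman measurement update: a minimal sketch of fusing a coarse
# pulsar-derived position with a precise X-ray range measurement.
def kalman_update(x, P, z, R, H=1.0):
    """x, P: prior state estimate and variance; z, R: measurement and
    its variance; H: observation coefficient."""
    K = P * H / (H * P * H + R)        # Kalman gain
    x_new = x + K * (z - H * x)        # state update toward the measurement
    P_new = (1 - K * H) * P            # variance shrinks with each update
    return x_new, P_new

x, P = 0.0, 100.0                              # vague prior (km, km^2)
x, P = kalman_update(x, P, z=10.0, R=25.0)     # coarse pulsar measurement
x, P = kalman_update(x, P, z=9.0, R=1.0)       # precise X-ray range measurement
print(round(x, 2), round(P, 2))  # → 8.95 0.95
```

The precise ranging measurement dominates the fused estimate, which is the mechanism by which XCOM-derived ranging is argued to augment XNAV accuracy.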
ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.
Morota, Gota
2017-12-20
Deterministic formulas for the accuracy of genomic predictions highlight the relationships between prediction accuracy and the factors influencing it, prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. The Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus the genetic factors impacting it, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/ . ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open-source software and is hosted online as a freely available web-based resource with an intuitive graphical user interface.
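One widely cited deterministic formula of the kind ShinyGPAS visualizes is the Daetwyler formula, in which expected accuracy rises with training size n and heritability h² and falls with the effective number of independent chromosome segments Me. Using this particular formula here is an assumption for illustration; the app covers several:

```python
import numpy as np

def prediction_accuracy(n, h2, me):
    """Expected genomic prediction accuracy (Daetwyler-type formula):
    r = sqrt(n*h2 / (n*h2 + Me)).
    n: training records, h2: heritability, me: effective segments."""
    return np.sqrt(n * h2 / (n * h2 + me))

# Accuracy rises with more records and higher heritability.
print(round(prediction_accuracy(1000, 0.5, 500), 3))  # → 0.707
```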
Impulsivity modulates performance under response uncertainty in a reaching task.
Tzagarakis, C; Pellizzer, G; Rogers, R D
2013-03-01
We sought to explore the interaction of the impulsivity trait with response uncertainty. To this end, we used a reaching task (Pellizzer and Hedges in Exp Brain Res 150:276-289, 2003) where a motor response direction was cued at different levels of uncertainty (1 cue, i.e., no uncertainty, 2 cues or 3 cues). Data from 95 healthy adults (54 F, 41 M) were analysed. Impulsivity was measured using the Barratt Impulsiveness Scale version 11 (BIS-11). Behavioral variables recorded were reaction time (RT), errors of commission (referred to as 'early errors') and errors of precision. Data analysis employed generalised linear mixed models and generalised additive mixed models. For the early errors, there was an interaction of impulsivity with uncertainty and gender, with increased errors for high impulsivity in the one-cue condition for women and the three-cue condition for men. There was no effect of impulsivity on precision errors or RT. However, the analysis of the effect of RT and impulsivity on precision errors showed a different pattern for high versus low impulsives in the high uncertainty (3 cue) condition. In addition, there was a significant early error speed-accuracy trade-off for women, primarily in low uncertainty and a 'reverse' speed-accuracy trade-off for men in high uncertainty. These results extend those of past studies of impulsivity which help define it as a behavioural trait that modulates speed versus accuracy response styles depending on environmental constraints and highlight once more the importance of gender in the interplay of personality and behaviour.
Regression Analysis of Optical Coherence Tomography Disc Variables for Glaucoma Diagnosis.
Richter, Grace M; Zhang, Xinbo; Tan, Ou; Francis, Brian A; Chopra, Vikas; Greenfield, David S; Varma, Rohit; Schuman, Joel S; Huang, David
2016-08-01
To report the diagnostic accuracy of optical coherence tomography (OCT) disc variables using both time-domain (TD) and Fourier-domain (FD) OCT, and to improve the use of OCT disc variable measurements for glaucoma diagnosis through regression analyses that adjust for optic disc size and axial length-based magnification error. Observational, cross-sectional. In total, 180 normal eyes of 112 participants and 180 eyes of 138 participants with perimetric glaucoma from the Advanced Imaging for Glaucoma Study. Diagnostic variables evaluated from TD-OCT and FD-OCT were: disc area, rim area, rim volume, optic nerve head volume, vertical cup-to-disc ratio (CDR), and horizontal CDR. These were compared with overall retinal nerve fiber layer thickness and ganglion cell complex. Regression analyses were performed that corrected for optic disc size and axial length. Areas under receiver operating characteristic curves (AUROC) were used to assess diagnostic accuracy before and after the adjustments. An index based on multiple logistic regression that combined optic disc variables with axial length was also explored with the aim of improving the diagnostic accuracy of disc variables. Diagnostic accuracy of disc variables was compared as measured by AUROC. The unadjusted disc variables with the highest diagnostic accuracies were rim volume for TD-OCT (AUROC = 0.864) and vertical CDR for FD-OCT (AUROC = 0.874). Magnification correction significantly worsened diagnostic accuracy for rim variables, and while optic disc size adjustments partially restored diagnostic accuracy, the adjusted AUROCs were still lower. Axial length adjustments to disc variables in the form of multiple logistic regression indices led to a slight but insignificant improvement in diagnostic accuracy. Our various regression approaches were not able to significantly improve disc-based OCT glaucoma diagnosis. However, disc rim area and vertical CDR had very high diagnostic accuracy, and these disc variables can serve to complement additional OCT measurements for the diagnosis of glaucoma.
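The AUROC statistic reported above equals the probability that a randomly chosen glaucomatous eye receives a higher score than a randomly chosen normal eye (the Mann-Whitney identity). A minimal sketch of computing it from labels and scores (the toy data are illustrative):

```python
import numpy as np

def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum identity:
    P(score of a random positive > score of a random negative),
    counting ties as half a win."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]            # 1 = glaucoma, 0 = normal (toy labels)
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]  # toy diagnostic scores
print(round(auroc(y, s), 3))  # → 0.889
```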
NASA Astrophysics Data System (ADS)
Millard, R. C.; Seaver, G.
1990-12-01
A 27-term index of refraction algorithm for pure and sea waters has been developed using four experimental data sets of differing accuracies. They cover the range 500-700 nm in wavelength, 0-30°C in temperature, 0-40 psu in salinity, and 0-11,000 db in pressure. The index of refraction algorithm has an accuracy that varies from 0.4 ppm for pure water at atmospheric pressure to 80 ppm at high pressures, but preserves the accuracy of each original data set. This algorithm is a significant improvement over existing descriptions as it is in analytical form with a better and more carefully defined accuracy. A salinometer algorithm with the same uncertainty has been created by numerically inverting the index algorithm using the Newton-Raphson method. The 27-term index algorithm was used to generate a pseudo-data set at the sodium D wavelength (589.26 nm) from which a 6-term densitometer algorithm was constructed. The densitometer algorithm also produces salinity as an intermediate step in the salinity inversion. The densitometer residuals have a standard deviation of 0.049 kg m -3 which is not accurate enough for most oceanographic applications. However, the densitometer algorithm was used to explore the sensitivity of density from this technique to temperature and pressure uncertainties. To achieve a deep ocean densitometer of 0.001 kg m -3 accuracy would require the index of refraction to have an accuracy of 0.3 ppm, the temperature an accuracy of 0.01°C and the pressure 1 db. Our assessment of the currently available index of refraction measurements finds that only the data for fresh water at atmospheric pressure produce an algorithm satisfactory for oceanographic use (density to 0.4 ppm). The data base for the algorithm at higher pressures and various salinities requires an order of magnitude or better improvement in index measurement accuracy before the resultant density accuracy will be comparable to the currently available oceanographic algorithm.
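The salinometer algorithm described above is obtained by numerically inverting the forward index-of-refraction model for salinity with the Newton-Raphson method. A minimal sketch of that inversion, using an illustrative linearized n(S, T) stand-in rather than the paper's 27-term algorithm (the coefficients are placeholders, not fitted values):

```python
# Hypothetical linearized forward model: index of refraction as a
# function of salinity S (psu) and temperature T (deg C). The
# coefficients are illustrative only.
def n_forward(S, T=15.0):
    return 1.3330 + 1.9e-4 * S - 1.0e-4 * (T - 15.0)

def salinity_from_index(n_obs, T=15.0, S0=20.0, tol=1e-10):
    """Invert n_forward for S by Newton-Raphson iteration."""
    S = S0
    for _ in range(50):
        f = n_forward(S, T) - n_obs
        dfdS = 1.9e-4                  # analytic derivative of the model
        step = f / dfdS
        S -= step
        if abs(step) < tol:
            break
    return S

# Round-trip check: forward model then inversion recovers salinity.
n_obs = n_forward(35.0)
print(round(salinity_from_index(n_obs), 6))  # → 35.0
```

Because this stand-in model is linear in S, Newton-Raphson converges in a single step; the paper's 27-term polynomial requires a few iterations but follows the same scheme.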
Chen, Ru-huang; Jin, Gang
2015-08-01
This paper presents an application of mid-infrared (MIR), near-infrared (NIR) and Raman spectroscopy to the spectra of 31 low-density polyethylene/polypropylene (LDPE/PP) samples with different proportions. Different pre-processing methods (multiplicative scatter correction, mean centering and Savitzky-Golay first derivative) and spectral regions were explored to develop a partial least-squares (PLS) model for LDPE, and their influence on the accuracy of the PLS model is also discussed. The three spectroscopies were compared in terms of quantitative measurement accuracy. The pre-processing method and spectral region have a great impact on the accuracy of the PLS model, especially for spectra with subtle differences, random noise and baseline variation. After pre-processing and spectral region selection, the calibration models for MIR, NIR and Raman exhibited R²/RMSEC values of 0.9906/2.941, 0.9973/1.561 and 0.9972/1.598, respectively, compared with 0.8876/10.15, 0.8493/11.75 and 0.8757/10.67 before any treatment. The results suggest that MIR, NIR and Raman are all strong tools for predicting the content of LDPE in LDPE/PP blends; NIR and Raman showed higher accuracy after pre-processing and are more suitable for fast quantitative characterization due to their high measuring speed.
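One of the pre-processing steps named above, multiplicative scatter correction (MSC), fits each spectrum against the mean spectrum and removes the fitted offset and slope. A minimal sketch (the synthetic spectra are illustrative, not the study's data):

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum on the
    mean spectrum, then remove the fitted offset and slope."""
    ref = spectra.mean(axis=0)                      # reference spectrum
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)       # s ≈ slope*ref + offset
        corrected[i] = (s - offset) / slope
    return corrected

# Simulate scatter: each "spectrum" is an offset/scaled copy of one signal.
base = np.sin(np.linspace(0, 3, 100))
spectra = np.array([a * base + b
                    for a, b in [(1.2, 0.3), (0.8, -0.1), (1.0, 0.0)]])
out = msc(spectra)
# After MSC the multiplicative/additive scatter differences vanish.
print(np.allclose(out[0], out[1], atol=1e-8))  # → True
```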
Classifying visuomotor workload in a driving simulator using subject specific spatial brain patterns
Dijksterhuis, Chris; de Waard, Dick; Brookhuis, Karel A.; Mulder, Ben L. J. M.; de Jong, Ritske
2013-01-01
A passive Brain Computer Interface (BCI) is a system that responds to the spontaneously produced brain activity of its user and could be used to develop interactive task support. A human-machine system that could benefit from brain-based task support is the driver-car interaction system. To investigate the feasibility of such a system to detect changes in visuomotor workload, 34 drivers were exposed to several levels of driving demand in a driving simulator. Driving demand was manipulated by varying driving speed and by asking the drivers to comply with individually set lane-keeping performance targets. Differences in the individual driver's workload levels were classified by applying the Common Spatial Pattern (CSP) method and Fisher's linear discriminant analysis to frequency-filtered electroencephalogram (EEG) data in an offline classification study. Several frequency ranges, EEG cap configurations, and condition pairs were explored. Classifications were most accurate when based on high frequencies, larger electrode sets, and the frontal electrodes. Depending on these factors, classification accuracies across participants reached about 95% on average. The association between high accuracies and high frequencies suggests that part of the underlying information did not originate directly from neuronal activity. Nonetheless, average classification accuracies of up to 75-80% were obtained from the lower EEG ranges that are likely to reflect neuronal activity. For a system designer, this implies that a passive BCI system may use several frequency ranges for workload classification. PMID:23970851
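CSP finds spatial filters that maximize the variance ratio between two conditions; one common formulation solves the generalized eigenproblem C1 w = λ(C1 + C2) w on the per-condition covariance matrices. A minimal sketch under that formulation (the study's exact pipeline, band-pass filtering, and electrode sets are not reproduced; the synthetic EEG is illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2):
    """Common Spatial Patterns: columns of the returned matrix are spatial
    filters ordered by variance ratio between conditions.
    X1, X2: arrays of shape (trials, channels, samples)."""
    def mean_cov(X):
        # Trace-normalized trial covariances, averaged over trials
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)
    C1, C2 = mean_cov(X1), mean_cov(X2)
    vals, vecs = eigh(C1, C1 + C2)           # generalized eigenproblem
    order = np.argsort(vals)[::-1]           # largest variance ratio first
    return vecs[:, order]

rng = np.random.default_rng(1)
# Two synthetic conditions: channel 0 is much more active in condition 1.
X1 = rng.standard_normal((20, 4, 256))
X1[:, 0] *= 5.0
X2 = rng.standard_normal((20, 4, 256))
W = csp_filters(X1, X2)
# The leading filter should load mostly on the discriminative channel.
print(np.argmax(np.abs(W[:, 0])))  # → 0
```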
Texture- and deformability-based surface recognition by tactile image analysis.
Khasnobish, Anwesha; Pal, Monalisa; Tibarewala, D N; Konar, Amit; Pal, Kunal
2016-08-01
Deformability and texture are two unique object characteristics which are essential for appropriate surface recognition by tactile exploration. Tactile sensation needs to be incorporated in artificial arms for rehabilitative and other human-computer interface applications to achieve efficient and human-like manoeuvring. To accomplish this, surface recognition by tactile data analysis is one of the prerequisites. The aim of this work is to develop an effective technique for identifying various surfaces based on deformability and texture by analysing tactile images obtained during dynamic exploration of an item by artificial arms whose gripper is fitted with tactile sensors. Tactile data were acquired while human beings, as well as a robot hand fitted with tactile sensors, explored the objects. The tactile images were pre-processed, and relevant features were extracted from them. These features were provided as input to variants of the support vector machine (SVM), linear discriminant analysis and the k-nearest neighbour (kNN) classifier. Based on deformability, six household surfaces were recognized from their corresponding tactile images. Moreover, based on texture, five surfaces of daily use were classified. The method adopted in the former two cases was also applied to deformability- and texture-based recognition of four biomembranes, i.e. membranes prepared from biomaterials that can be used for applications such as drug delivery and implants. Linear SVM performed best for recognizing surface deformability, with an accuracy of 83% in 82.60 ms, whereas the kNN classifier recognized surfaces of daily use having different textures with an accuracy of 89% in 54.25 ms, and SVM with a radial basis function kernel recognized biomembranes with an accuracy of 78% in 53.35 ms. The classifiers were observed to generalize well on the unseen test datasets, with very high performance, achieving efficient material recognition based on deformability and texture.
NASA Astrophysics Data System (ADS)
Sun, D.; Zheng, J. H.; Ma, T.; Chen, J. J.; Li, X.
2018-04-01
Rodent infestation is one of the main biological disasters affecting grassland in northern Xinjiang. Rodents' eating and digging behaviors destroy ground vegetation, which seriously affects the development of animal husbandry and grassland ecological security. UAV low-altitude remote sensing, an emerging technique with high spatial resolution, can effectively recognize rodent burrows. However, how to select the appropriate spatial resolution for monitoring rodent damage is the first problem that needs attention. The purpose of this study is to explore the optimal spatial scale for identifying burrows by evaluating the impact of different spatial resolutions on burrow identification accuracy. In this study, we photographed burrows from different flight heights to obtain visible images of different spatial resolutions. An object-oriented method was then used to identify the burrows, and the accuracy of the classification was evaluated. The average classification accuracy of the burrows reached more than 80%, and the accuracy was highest at an altitude of 24 m, corresponding to a spatial resolution of 1 cm. We have created a unique and effective way to identify burrows using UAV visible images. We draw the following conclusions: the best spatial resolution for burrow recognition using the DJI PHANTOM-3 UAV is 1 cm, and an improvement in spatial resolution does not necessarily lead to an improvement in classification accuracy. This study lays the foundation for future research and can be extended to similar studies elsewhere.
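The link between flight height and image spatial resolution is the ground sample distance, GSD = flying height × pixel pitch / focal length. A sketch with illustrative small-sensor camera parameters (the pixel pitch and focal length below are assumed values, not the study's calibration):

```python
def gsd_cm(height_m, pixel_pitch_um=1.56, focal_length_mm=3.61):
    """Ground sample distance in cm for a nadir-pointing camera:
    GSD = H * pixel_pitch / focal_length. Camera parameters are
    illustrative placeholders for a small consumer UAV sensor."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100

# At roughly 24 m altitude the nominal GSD is on the order of 1 cm,
# consistent with the study's best-performing resolution.
print(round(gsd_cm(24), 2))  # → 1.04
```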
Wedi, Nils P
2014-06-28
The steady path of doubling the global horizontal resolution approximately every 8 years in numerical weather prediction (NWP) at the European Centre for Medium-Range Weather Forecasts may be substantially altered by emerging novel computing architectures. It coincides with the need to appropriately address and determine forecast uncertainty with increasing resolution, in particular when convective-scale motions start to be resolved. Blunt increases in model resolution will quickly become unaffordable and may not lead to improved NWP forecasts. Consequently, there is a need to adjust proven numerical techniques accordingly. An informed decision on the modelling strategy for harnessing exascale, massively parallel computing power thus also requires a deeper understanding of the sensitivity to uncertainty, for each part of the model, and ultimately a deeper understanding of multi-scale interactions in the atmosphere and their numerical realization in ultra-high-resolution NWP and climate simulations. This paper explores opportunities for substantial increases in forecast efficiency by judicious adjustment of the formal accuracy or relative resolution in spectral and physical space. One path is to reduce the formal accuracy with which the spectral transforms are computed. The other pathway explores the importance of the ratio of horizontal resolution in gridpoint space to wavenumbers in spectral space. This is relevant both for high-resolution simulations and for ensemble-based uncertainty estimation. © 2014 The Author(s). Published by the Royal Society. All rights reserved.
Pile, Victoria; Lau, Jennifer Y F; Topor, Marta; Hedderly, Tammy; Robinson, Sally
2018-05-18
Aberrant interoceptive accuracy could contribute to the co-occurrence of anxiety and premonitory urge in chronic tic disorders (CTD). If it can be manipulated through intervention, it would offer a transdiagnostic treatment target for tics and anxiety. Interoceptive accuracy was first assessed consistent with previous protocols and then re-assessed following an instruction attempting to experimentally enhance awareness. The CTD group demonstrated lower interoceptive accuracy than controls but, importantly, this group difference was no longer significant following instruction. In the CTD group, better interoceptive accuracy was associated with higher anxiety and lower quality of life, but not with premonitory urge. Aberrant interoceptive accuracy may represent an underlying trait in CTD that can be manipulated, and relates to anxiety and quality of life.
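Interoceptive accuracy in heartbeat-counting paradigms is typically scored as one minus the normalized counting error, averaged over trials. A sketch of that Schandry-style score (the study's exact protocol and scoring may differ; the trial data are illustrative):

```python
def interoceptive_accuracy(recorded, counted):
    """Heartbeat-counting accuracy, Schandry-style:
    mean over trials of 1 - |recorded - counted| / recorded.
    recorded: actual heartbeats per trial; counted: participant's counts."""
    return sum(1 - abs(r - c) / r
               for r, c in zip(recorded, counted)) / len(recorded)

# Perfect counting scores 1.0; larger errors pull the score down.
score = interoceptive_accuracy(recorded=[70, 75, 80], counted=[65, 60, 78])
print(round(score, 3))  # → 0.901
```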
Initial development of high-accuracy CFRP panel for DATE5 antenna
NASA Astrophysics Data System (ADS)
Qian, Yuan; Lou, Zheng; Hao, Xufeng; Zhu, Jing; Cheng, Jingquan; Wang, Hairen; Zuo, Yingxi; Yang, Ji
2016-07-01
The DATE5 antenna, a 5 m telescope for terahertz exploration, will be sited at Dome A, Antarctica. High surface accuracy of the primary reflector panels is necessary to achieve high observing efficiency. In the antenna field, carbon fiber reinforced polymer (CFRP) sandwich panels are widely used because they are light in weight, high in strength, low in thermal expansion, and cheap in mass fabrication. In the DATE5 project, CFRP panels are important panel candidates. In the design study phase, a 1-meter CFRP prototype panel was developed for verification purposes. This paper introduces the material arrangement of the sandwich panel, the measured performance of test samples of the sandwich structure, and the panel forming process. For anti-icing in the South Pole region, a special CFRP heating film is embedded in the front skin of the sandwich panel. The properties of several basic building materials were tested. Based on the results, the deformations of prototype panels with different sandwich structures and skin layers were simulated and the best structural concept was selected. The panel mold used is a high-accuracy one with a surface rms error of 1.4 μm. Prototype panels are replicated from the mold. Room-temperature curing resin is used to reduce thermal deformation in the resin transfer process, and vacuum negative pressure is applied during curing to increase the volume content of carbon fiber. After measurement with a coordinate measuring machine (CMM), an initial prototype CFRP panel with a surface rms error of 5.1 μm was produced.
Making and Measuring a Model of a Salt Marsh
ERIC Educational Resources Information Center
Fogleman, Tara; Curran, Mary Carla
2007-01-01
Students are often confused by the difference between the terms "accuracy" and "precision." In the following activities, students explore the definitions of accuracy and precision while learning about salt marsh ecology and the methods used by scientists to assess salt marsh health. The activities also address the concept that the ocean supports a…
Unconscious Reward Cues Increase Invested Effort, but Do Not Change Speed-Accuracy Tradeoffs
ERIC Educational Resources Information Center
Bijleveld, Erik; Custers, Ruud; Aarts, Henk
2010-01-01
While both conscious and unconscious reward cues enhance effort to work on a task, previous research also suggests that conscious rewards may additionally affect speed-accuracy tradeoffs. Based on this idea, two experiments explored whether reward cues that are presented above (supraliminal) or below (subliminal) the threshold of conscious…
Grammatical Accuracy and Learner Autonomy in Advanced Writing
ERIC Educational Resources Information Center
Vickers, Caroline H.; Ene, Estela
2006-01-01
This paper aims to explore advanced ESL learners' ability to make improvements in grammatical accuracy by autonomously noticing and correcting their own grammatical errors. In the recent literature in SLA, it is suggested that classroom tasks can be used to foster autonomous language learning habits (cf. Dam 2001). Therefore, it is important to…
What Determines GCSE Marking Accuracy? An Exploration of Expertise among Maths and Physics Markers
ERIC Educational Resources Information Center
Suto, W. M. Irenka; Nadas, Rita
2008-01-01
Examination marking utilises a variety of cognitive processes, and from a psychological perspective, the demands that different questions place on markers will vary considerably. To what extent does marking accuracy vary among markers with differing backgrounds and experiences? More fundamentally, what makes some questions harder to mark…
Engel, Holger; Huang, Jung Ju; Tsao, Chung Kan; Lin, Chia-Yu; Chou, Pan-Yu; Brey, Eric M; Henry, Steven L; Cheng, Ming Huei
2011-11-01
This prospective study was designed to compare the accuracy rate between remote smartphone photographic assessments and in-person examinations for free flap monitoring. One hundred and three consecutive free flaps were monitored with in-person examinations and assessed remotely by three surgeons (Team A) via photographs transmitted over smartphone. Four other surgeons used the traditional in-person examinations as Team B. The response time to re-exploration was defined as the interval between when a flap was evaluated as compromised by the nurse/house officer and when the decision was made for re-exploration. The accuracy rate was 98.7% and 94.2% for in-person and smartphone photographic assessments, respectively. The response time of 8 ± 3 min in Team A was statistically shorter than the 180 ± 104 min in Team B (P = 0.01 by the Mann-Whitney test). The remote smartphone photography assessment has a comparable accuracy rate and shorter response time compared with in-person examination for free flap monitoring. Copyright © 2011 Wiley Periodicals, Inc.
Ship detection leveraging deep neural networks in WorldView-2 images
NASA Astrophysics Data System (ADS)
Yamamoto, T.; Kazama, Y.
2017-10-01
Interpretation of high-resolution satellite images is so difficult that skilled interpreters have had to check the images manually, for two main reasons. One is the required detection accuracy rate. The other is the variety of targets: taking ships as an example, there are many kinds, such as boats, cruise ships, cargo ships, and aircraft carriers. Furthermore, similar-looking objects appear throughout an image, so it is often difficult even for skilled interpreters to distinguish what object a group of pixels really composes. In this paper, we explore the feasibility of object extraction from high-resolution satellite images using deep learning, focusing on ship detection, and we calculate the detection accuracy using WorldView-2 images. First, we collected training images labelled "ship" and "not ship". After preparing the training data, we defined a deep neural network model to judge whether ships are present and trained it with about 50,000 training images for each label. Subsequently, we scanned the evaluation image with windows of different resolutions and extracted the "ship" images. Experimental results show the effectiveness of deep-learning-based object detection.
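The scanning step described in this abstract can be sketched generically: slide a window across the image and keep the locations the classifier flags as "ship". A minimal Python illustration, with a toy brightness-threshold classifier standing in for the trained deep network (the function names and parameters are hypothetical, not from the paper):

```python
import numpy as np

def scan_for_ships(image, classify, win=8, stride=4):
    """Slide a win x win window over the image and collect the
    top-left corners of windows the classifier labels as "ship"."""
    hits = []
    h, w = image.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            if classify(image[y:y + win, x:x + win]):
                hits.append((y, x))
    return hits

# Toy stand-in classifier: flags bright patches as "ship".
bright = lambda patch: patch.mean() > 0.5

sea = np.zeros((32, 32))
sea[12:20, 16:24] = 1.0          # a bright "ship" blob
hits = scan_for_ships(sea, bright)
```

The real system would repeat the scan at several window resolutions and use the trained network in place of the threshold.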
Borrego, Adrián; Latorre, Jorge; Alcañiz, Mariano; Llorens, Roberto
2018-06-01
The latest generation of head-mounted displays (HMDs) provides built-in head tracking, which enables estimating position in a room-size setting. This feature allows users to explore, navigate, and move within real-size virtual environments, such as kitchens, supermarket aisles, or streets. Previously, these actions were commonly facilitated by external peripherals and interaction metaphors. The objective of this study was to compare the Oculus Rift and the HTC Vive in terms of the working range of the head tracking and the working area, accuracy, and jitter in a room-size environment, and to determine their feasibility for serious games, rehabilitation, and health-related applications. The position of the HMDs was registered in a 10 × 10 grid covering an area of 25 m² at sitting (1.3 m) and standing (1.7 m) heights. Accuracy and jitter were estimated from positional data. The working range was estimated by moving the HMDs away from the cameras until no data were obtained. The HTC Vive provided a working area (24.87 m²) twice as large as that of the Oculus Rift. Both devices showed excellent and comparable performance at sitting height (accuracy up to 1 cm and jitter <0.35 mm), and the HTC Vive presented worse but still excellent accuracy and jitter at standing height (accuracy up to 1.5 cm and jitter <0.5 mm). The HTC Vive presented a larger working range (7 m) than did the Oculus Rift (4.25 m). Our results support the use of these devices for real navigation, exploration, exergaming, and motor rehabilitation in virtual reality environments.
Attitude and position estimation on the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Ali, Khaled S.; Vanelli, C. Anthony; Biesiadecki, Jeffrey J.; Maimone, Mark W.; Yang Cheng, A.; San Martin, Miguel; Alexander, James W.
2005-01-01
NASA/JPL's Mars Exploration Rovers acquire their attitude upon command and autonomously propagate their attitude and position. The rovers use accelerometers and images of the sun to acquire attitude, autonomously searching the sky for the sun with a pointable camera. To propagate the attitude and position the rovers use either accelerometer and gyro readings or gyro readings and wheel odometry, depending on the nature of the movement ground operators are commanding. Where necessary, visual odometry is performed on images to fine-tune the position updates, particularly in high-slip environments. The capability also exists for visual odometry attitude updates. This paper describes the techniques used by the rovers to acquire and maintain attitude and position knowledge, the accuracy which is obtainable, and lessons learned after more than one year in operation.
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with uniform accuracy and reliability. However, such a set of testing points is difficult to obtain in areas where field measurement is difficult and high-accuracy reference data are scarce, so it is difficult to test and evaluate the horizontal accuracy of the orthophoto image there. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for expanding its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images using testing points of different accuracies and reliabilities, sourced from high-accuracy reference data and from field measurement. The new method solves horizontal accuracy detection for orthophoto images in these difficult areas and provides a basis for supplying reliable orthophoto images to users.
NASA Technical Reports Server (NTRS)
Rock, Stephen M.; LeMaster, Edward A.
2001-01-01
Pseudolites can extend the availability of GPS-type positioning systems to a wide range of applications not possible with satellite-only GPS. One such application is Mars exploration, where the centimeter-level accuracy and high repeatability of CDGPS would make it attractive for rover positioning during autonomous exploration, sample collection, and habitat construction if it were available. Pseudolites distributed on the surface would allow multiple rovers and/or astronauts to share a common navigational reference. This would help enable cooperation for complicated science tasks, reducing the need for instructions from Earth and increasing the likelihood of mission success. Conventional GPS pseudolite arrays require that the devices be pre-calibrated through a survey of their locations, typically to sub-centimeter accuracy. This is a problematic task for robots on the surface of another planet. By using the GPS signals that the pseudolites broadcast, however, it is possible to have the array self-survey its own relative locations, creating a Self-Calibrating Pseudolite Array (SCPA). This requires the use of GPS transceivers instead of standard pseudolites. Surveying can be done either at carrier- or code-phase levels. An overview of SCPA capabilities, system requirements, and self-calibration algorithms is presented in another work. The Aerospace Robotics Laboratory at Stanford has developed a fully operational prototype SCPA. The array is able to determine the range between any two transceivers with either code- or carrier-phase accuracy, and uses this inter-transceiver ranging to determine the array geometry. This paper presents results from field tests conducted at Stanford University demonstrating the accuracy of inter-transceiver ranging and its viability and utility for array localization, and shows how transceiver motion may be utilized to refine the array estimate by accurately determining carrier-phase integers and line biases.
It also summarizes the overall system requirements and architecture, and describes the hardware and software used in the prototype system.
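One textbook way to solve the self-survey problem described above, recovering the relative array geometry from pairwise inter-transceiver ranges, is classical multidimensional scaling. The sketch below is only an illustration of that general technique, not the paper's actual self-calibration algorithm:

```python
import numpy as np

def geometry_from_ranges(R):
    """Recover 2-D coordinates (up to rotation/translation) from an
    (n, n) symmetric matrix of pairwise ranges via classical MDS."""
    n = R.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (R ** 2) @ J              # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues ascending
    top = np.argsort(w)[-2:]                 # two largest eigenvalues
    return V[:, top] * np.sqrt(np.clip(w[top], 0, None))

# Four transceivers on a 3 m x 4 m rectangle (hypothetical layout).
pts = np.array([[0., 0.], [3., 0.], [0., 4.], [3., 4.]])
R = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
rec = geometry_from_ranges(R)
# Reconstructed pairwise distances should match the input ranges.
R_rec = np.linalg.norm(rec[:, None] - rec[None, :], axis=-1)
```

With noisy code- or carrier-phase ranges, such an estimate would serve only as an initial geometry to be refined.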
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merkle, K. L.; Csencsits, R.; Rynes, K. L.
In the absence of high-order aberrations, the lattice fringe technique should allow measurement of grain boundary rigid-body displacements to accuracies about an order of magnitude better than the point-to-point resolution of the transmission electron microscope. The three-fold astigmatism, however, introduces shifts of the lattice fringe pattern that depend on the orientation of the lattice relative to the direction of the three-fold astigmatism and thus produces an apparent shift between the two grains bordering the grain boundary. By image simulation of grain boundary model structures, the present paper explores the effect of these extraneous shifts on grain boundary volume expansion measurements. It is found that the shifts depend, among others, on zone axis direction and the magnitude of the lattice parameter. For many grain boundaries of interest, three-fold astigmatism correction to better than 100 nm appears necessary to achieve the desired accuracies.
Assessment of the Performance of a Dual-Frequency Surface Reference Technique
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Liao, Liang; Tanelli, Simone; Durden, Stephen
2013-01-01
The high correlation of the rain-free surface cross sections at two frequencies implies that the estimate of differential path integrated attenuation (PIA) caused by precipitation along the radar beam can be obtained to a higher degree of accuracy than the path-attenuation at either frequency. We explore this finding first analytically and then by examining data from the JPL dual-frequency airborne radar using measurements from the TC4 experiment obtained during July-August 2007. Despite this improvement in the accuracy of the differential path attenuation, solving the constrained dual-wavelength radar equations for parameters of the particle size distribution requires not only this quantity but the single-wavelength path attenuation as well. We investigate a simple method of estimating the single-frequency path attenuation from the differential attenuation and compare this with the estimate derived directly from the surface return.
Verification of low-Mach number combustion codes using the method of manufactured solutions
NASA Astrophysics Data System (ADS)
Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz
2007-11-01
Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications on the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
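The order-of-accuracy verification at the heart of MMS can be illustrated with a far simpler operator than the combustion solvers named above: pick an exact (manufactured) solution, measure the discrete error on two grids, and compute the observed convergence order. A minimal sketch using a second-order central difference on u(x) = sin(x):

```python
import math

def max_error(n):
    """Max error of the central-difference derivative of sin(x)
    on a uniform n-point grid over [0, 2*pi)."""
    h = 2 * math.pi / n
    err = 0.0
    for i in range(n):
        x = i * h
        approx = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
        err = max(err, abs(approx - math.cos(x)))
    return err

e1, e2 = max_error(32), max_error(64)
order = math.log(e1 / e2, 2)   # observed order, expected near 2
```

Halving h should cut the error by about 4x for a second-order scheme; a deviation from the expected order is exactly the kind of robustness signal the MMS studies look for.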
Multi-beam range imager for autonomous operations
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Lee, H. Sang; Ramaswami, R.
1993-01-01
For space operations from Space Station Freedom, a real-time range imager will be very valuable for refuelling and docking as well as space exploration operations. For these and many other robotics and remote ranging applications, a small, portable, power-efficient, robust range imager capable of ranging over a few tens of km with 10 cm accuracy is needed. The system developed is based on a well-known pseudo-random modulation technique applied to a laser transmitter, combined with a novel range resolution enhancement technique. In this technique, the transmitter is modulated at a relatively low frequency, of the order of a few MHz, to enhance the signal-to-noise ratio and to ease the stringent systems engineering requirements while achieving very high resolution. The desired resolution cannot easily be attained by other conventional approaches. The engineering model of the system is being designed to obtain better than 10 cm range accuracy simply by implementing a high-precision clock circuit. In this paper we present the principle of the pseudo-random noise (PN) lidar system and the results of the proof-of-concept experiment.
High-Accuracy Tidal Flat Digital Elevation Model Construction Using TanDEM-X Science Phase Data
NASA Technical Reports Server (NTRS)
Lee, Seung-Kuk; Ryu, Joo-Hyung
2017-01-01
This study explored the feasibility of using TanDEM-X (TDX) interferometric observations of tidal flats for digital elevation model (DEM) construction. Our goal was to generate high-precision DEMs in tidal flat areas, because accurate intertidal zone data are essential for monitoring coastal environments and erosion processes. To monitor dynamic coastal changes caused by waves, currents, and tides, very accurate DEMs with high spatial resolution are required. The bi- and monostatic modes of the TDX interferometer employed during the TDX science phase provided a great opportunity for highly accurate intertidal DEM construction using radar interferometry with no time lag (bistatic mode) or an approximately 10-s temporal baseline (monostatic mode) between the master and slave synthetic aperture radar image acquisitions. In this study, DEM construction in tidal flat areas was first optimized based on the TDX system parameters used in various TDX modes. We successfully generated intertidal zone DEMs with 57-m spatial resolutions and interferometric height accuracies better than 0.15 m for three representative tidal flats on the west coast of the Korean Peninsula. Finally, we validated these TDX DEMs against real-time kinematic-GPS measurements acquired in two tidal flat areas; the correlation coefficient was 0.97 with a root mean square error of 0.20 m.
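The two validation statistics quoted above, the correlation coefficient and the root mean square error against RTK-GPS heights, are simple to compute; a minimal sketch with made-up sample heights (the values are hypothetical, for illustration only):

```python
import math

def rmse_and_corr(pred, truth):
    """RMSE and Pearson correlation between two equal-length
    sequences of heights."""
    n = len(pred)
    rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / n)
    mp, mt = sum(pred) / n, sum(truth) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, truth))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in truth))
    return rmse, cov / (sp * st)

# Hypothetical DEM heights vs. GPS heights (metres).
dem = [0.2, 0.5, 1.1, 1.4]
gps = [0.1, 0.6, 1.0, 1.5]
rmse, corr = rmse_and_corr(dem, gps)
```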
The effect of input data transformations on object-based image analysis
LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.
2011-01-01
The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829
Dictionary-Based Tensor Canonical Polyadic Decomposition
NASA Astrophysics Data System (ADS)
Cohen, Jeremy Emile; Gillis, Nicolas
2018-04-01
To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
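One simple way to enforce such a dictionary constraint, shown here only as an illustrative sketch rather than the authors' algorithm, is to replace each estimated factor column by its most correlated dictionary atom:

```python
import numpy as np

def snap_to_dictionary(F, D):
    """F: (n, r) estimated factor matrix; D: (n, k) dictionary.
    Returns the factor rebuilt from the best-matching atoms and
    the chosen atom indices."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    idx = np.argmax(np.abs(Dn.T @ F), axis=0)   # best atom per column
    return D[:, idx], idx

D = np.eye(4)                       # toy dictionary: canonical basis
F = np.array([[0.9, 0.1],
              [0.1, 0.0],
              [0.0, 1.1],
              [0.0, 0.1]])
snapped, idx = snap_to_dictionary(F, D)
```

In a full CPD algorithm this projection would alternate with least-squares updates of the unconstrained factors.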
Task-Based Variability in Children's Singing Accuracy
ERIC Educational Resources Information Center
Nichols, Bryan E.
2016-01-01
The purpose of this study was to explore the effect of task demands on children's singing accuracy. A 2 × 4 factorial design was used to examine the performance of fourth-grade children (N = 120) in solo and doubled response conditions. Each child sang four task types: single pitch, interval, pattern, and the song "Jingle Bells." The…
Accuracy of Tracking Forest Machines with GPS
M.W. Veal; S.E. Taylor; T.P. McDonald; D.K. McLemore; M.R. Dunn
2001-01-01
This paper describes the results of a study that measured the accuracy of using GPS to track the movement of forest machines. Two different commercially available GPS receivers (Trimble ProXR and GeoExplorer II) were used to track wheeled skidders under three different canopy conditions at two different vehicle speeds. Dynamic GPS data were compared to position data...
ERIC Educational Resources Information Center
Doe, Sue R.; Gingerich, Karla J.; Richards, Tracy L.
2013-01-01
This study explored graduate teaching assistant (GTA) grading on 480 papers across two writing assignments as integrated into large Introductory Psychology courses. We measured GTA accuracy, consistency, and commenting (feedback) quality. Results indicate that GTA graders improved, although unevenly, in accuracy and consistency from Time 1 to 2…
ERIC Educational Resources Information Center
Tretter, Thomas R.; Jones, M. Gail; Minogue, James
2006-01-01
The use of unifying themes that span the various branches of science is recommended to enhance curricular coherence in science instruction. Conceptions of spatial scale are one such unifying theme. This research explored the accuracy of spatial scale conceptions of science phenomena across a spectrum of 215 participants: fifth grade, seventh…
ERIC Educational Resources Information Center
Saglam, Murat
2015-01-01
This study explored the relationship between accuracy of and confidence in performance of 114 prospective primary school teachers in answering diagnostic questions on potential difference in parallel electric circuits. The participants were required to indicate their confidence in their answers for each question. Bias and calibration indices were…
[Single shot fast spin echo sequence MRI cholangiopancreatography].
Lefèvre, F; Crouzet, P; Gaucher, H; Chapuis, F; Béot, S; Boccaccini, H; Bazin, C; Régent, D
1998-05-01
To assess the value of a single shot fast spin echo MR sequence (SS-FSE) in the morphological analysis of the biliary tree and pancreatic ducts and to compare its accuracy with other imaging methods. 95 consecutive patients referred for clinical and/or biological suspicion of biliary obstruction were explored with MR cholangiopancreatography (MRCP). All patients were examined on a Signa 1.5 T GE MR unit with high gradient field strength and a torso phased-array coil. Biliary ducts were explored with the SS-FSE sequence, using coronal and oblique coronal 20 mm thick slices on a 256 x 256 matrix. Total acquisition time was 1 second. Native images were reviewed by two radiologists blinded to clinical information; in case of disagreement, a third radiologist's judgement was requested. In 88 cases, MRCP results were compared with direct cholangiography. In all cases, MRCP produced high quality images without MIP or other post-processing methods. For detection of biliary tree distension, the concordance value of MRCP was over 91% (Kappa 0.82). For detection of biliary tree and/or pancreatic duct obstruction, MR sensitivity was 100% and specificity 91%. The overall diagnostic concordance value of MRCP was > or = 93%. Difficulties in MRCP were caused by functional diseases or benign stenosis. MRCP accurately diagnosed all lithiasic obstructions starting from a stone size of 3 mm. MRCP rapidly produces high-quality images. As it is totally safe, it can be proposed as a first-line method for biliopancreatic duct exploration.
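Sensitivity and specificity figures like those reported (100% and 91%) come straight from a 2 x 2 confusion table; a minimal sketch, with hypothetical counts chosen only to reproduce the quoted values:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts, for illustration only.
sens, spec = sens_spec(tp=10, fn=0, tn=91, fp=9)
```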
A Celestial Assisted INS Initialization Method for Lunar Explorers
Ning, Xiaolin; Wang, Longhua; Wu, Weiren; Fang, Jiancheng
2011-01-01
The second and third phases of the Chinese Lunar Exploration Program (CLEP) are planning to achieve Moon landing, surface exploration and automated sample return. In these missions, the inertial navigation system (INS) and celestial navigation system (CNS) are two indispensable autonomous navigation systems which can compensate for limitations in the ground based navigation system. The accurate initialization of the INS and the precise calibration of the CNS are needed in order to achieve high navigation accuracy. Neither the INS nor the CNS can solve the above problems using the ground controllers or by themselves on the lunar surface. However, since they are complementary to each other, these problems can be solved by combining them together. A new celestial assisted INS initialization method is presented, in which the initial position and attitude of the explorer as well as the inertial sensors’ biases are estimated by aiding the INS with celestial measurements. Furthermore, the systematic error of the CNS is also corrected by the help of INS measurements. Simulations show that the maximum error in position is 300 m and in attitude 40″, which demonstrates this method is a promising and attractive scheme for explorers on the lunar surface. PMID:22163998
Calibration of gyro G-sensitivity coefficients with FOG monitoring on precision centrifuge
NASA Astrophysics Data System (ADS)
Lu, Jiazhen; Yang, Yanqiang; Li, Baoguo; Liu, Ming
2017-07-01
The advantages of mechanical gyros, such as high precision, endurance and reliability, make them widely used as the core parts of inertial navigation systems (INS) in the fields of aeronautics, astronautics and underground exploration. In a high-g environment, the accuracy of gyros is degraded, so calibration and compensation of the gyro G-sensitivity coefficients is essential when the INS operates in such an environment. A precision centrifuge with a counter-rotating platform is the typical equipment for calibrating the gyro, as it can generate large centripetal acceleration while keeping the angular rate close to zero; however, its performance is seriously restricted by angular perturbations in the high-speed rotating process. To reduce the dependence on the precision of the centrifuge and counter-rotating platform, an effective calibration method for the gyro G-sensitivity coefficients under fiber-optic gyroscope (FOG) monitoring is proposed herein. The FOG can efficiently compensate spindle error and improve the anti-interference ability. Harmonic analysis is performed for data processing. Simulations show that the gyro G-sensitivity coefficients can be efficiently estimated to 99% of the true value and compensated using a lookup table or fitting method. Repeated tests indicate that the G-sensitivity coefficients can be correctly calibrated when the angular rate accuracy of the precision centrifuge is as low as 0.01%. Verification tests demonstrate that attitude errors can be decreased from 0.36° to 0.08° in 200 s. The proposed measuring technology is generally applicable in engineering, as it reduces the accuracy requirements for the centrifuge and the environment.
Bertoux, Maxime; de Souza, Leonardo Cruz; O'Callaghan, Claire; Greve, Andrea; Sarazin, Marie; Dubois, Bruno; Hornberger, Michael
2016-01-01
Relative sparing of episodic memory is a diagnostic criterion of behavioral variant frontotemporal dementia (bvFTD). However, increasing evidence suggests that bvFTD patients can show episodic memory deficits at a similar level as Alzheimer's disease (AD). Social cognition tasks have been proposed to distinguish bvFTD, but no study to date has explored the utility of such tasks for the diagnosis of amnestic bvFTD. Here, we contrasted social cognition performance of amnestic and non-amnestic bvFTD from AD, with a subgroup having confirmed in vivo pathology markers. Ninety-six participants (38 bvFTD and 28 AD patients as well as 30 controls) performed the short Social-cognition and Emotional Assessment (mini-SEA). BvFTD patients were divided into amnestic versus non-amnestic presentation using the validated Free and Cued Selective Reminding Test (FCSRT) assessing episodic memory. As expected, the accuracy of the FCSRT to distinguish the overall bvFTD group from AD was low (69.7%) with ∼50% of bvFTD patients being amnestic. By contrast, the diagnostic accuracy of the mini-SEA was high (87.9%). When bvFTD patients were split on the level of amnesia, mini-SEA diagnostic accuracy remained high (85.1%) for amnestic bvFTD versus AD and increased to very high (93.9%) for non-amnestic bvFTD versus AD. Social cognition deficits can distinguish bvFTD and AD regardless of amnesia to a high degree and provide a simple way to distinguish both diseases at presentation. These findings have clear implications for the diagnostic criteria of bvFTD. They suggest that the emphasis should be on social cognition deficits, with episodic memory deficits not being a helpful diagnostic criterion in bvFTD.
Imaging issues for interferometry with CGH null correctors
NASA Astrophysics Data System (ADS)
Burge, James H.; Zhao, Chunyu; Zhou, Ping
2010-07-01
Aspheric surfaces, such as telescope mirrors, are commonly measured using interferometry with computer generated hologram (CGH) null correctors. The interferometers can be made with high precision and low noise, and CGHs can control wavefront errors to accuracy approaching 1 nm for difficult aspheric surfaces. However, such optical systems are typically poorly suited for high performance imaging. The aspheric surface must be viewed through a CGH that was intentionally designed to introduce many hundreds of waves of aberration. The imaging aberrations create difficulties for the measurements by coupling both geometric and diffraction effects into the measurement. These issues are explored here, and we show how the use of larger holograms can mitigate these effects.
Influence of skin colour on diagnostic accuracy of the jaundice meter JM 103 in newborns.
Samiee-Zafarghandy, S; Feberova, J; Williams, K; Yasseen, A S; Perkins, S L; Lemyre, B
2014-11-01
To assess the diagnostic accuracy of the JM 103 as a screening tool for neonatal jaundice and explore differential effects based on skin colour. We prospectively compared the transcutaneous bilirubin (TcB) and serum bilirubin (TSB) measurements of newborns over a 3-month period. Skin colour was assigned via reference colour swatches. Diagnostic measures of the TcB/TSB comparison were made and clinically relevant TcB cut-off values were determined for each skin colour group. 451 infants (51 light, 326 medium and 74 dark skin colour) were recruited. The association between TcB and TSB was high for all skin colours (rs>0.9). The Bland-Altman analysis showed an absolute mean difference between the two measures of 13.3±26.4 µmol/L with broad limits of agreement (−39.4 to 66.0 µmol/L), with TcB underestimating TSB in light and medium skin colours and overestimating it in dark skin colour. Diagnostic measures were also consistently high across skin colours, with no clinically significant differences observed. The JM 103 is a useful screening tool to identify infants in need of serum bilirubin measurement, regardless of skin colour. The effect of skin colour on the accuracy of this device at high levels of serum bilirubin could not be assessed fully due to small numbers in the light and dark groups.
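The Bland-Altman statistics reported above (mean difference and limits of agreement) reduce to a few lines of arithmetic. A minimal sketch follows; the paired bilirubin values are invented for illustration, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) between paired measurements and the
    95% limits of agreement (bias +/- 1.96 * SD of the differences)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired TcB/TSB values in umol/L (illustrative only).
tcb = [150, 180, 200, 220, 170]
tsb = [140, 175, 210, 215, 160]
bias, (lo, hi) = bland_altman(tcb, tsb)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```

A positive bias means the first method (here TcB) reads high on average; the limits bracket where roughly 95% of individual differences are expected to fall.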
Fowler, J Christopher; Madan, Alok; Allen, Jon G; Patriquin, Michelle; Sharp, Carla; Oldham, John M; Frueh, B Christopher
2018-01-01
With the publication of the DSM-5 alternative model for personality disorders, it is critical to assess the components of the model against evidence-based models such as the five-factor model and the DSM-IV-TR categorical model. This study explored the relative clinical utility of these models in screening for borderline personality disorder (BPD). Receiver operating characteristic and diagnostic efficiency statistics were calculated for three personality measures to ascertain the relative diagnostic efficiency of each measure. A total of 1653 adult inpatients at a specialist psychiatric hospital completed SCID-II interviews. Sample 1 (n=653) completed the SCID-II interviews, SCID-II Questionnaire (SCID-II-PQ) and the Big Five Inventory (BFI), while Sample 2 (n=1,000) completed the SCID-II interviews, Personality Inventory for DSM-5 (PID-5) and the BFI. The BFI evidenced moderate accuracy for two composites: high Neuroticism + low Agreeableness (AUC=0.72, SE=0.01, p<0.001) and high Neuroticism + low Conscientiousness (AUC=0.73, SE=0.01, p<0.0001). The SCID-II-PQ evidenced moderate-to-excellent accuracy (AUC=0.86, SE=0.02, p<0.0001) with a good balance of specificity (SP=0.80) and sensitivity (SN=0.78). The PID-5 BPD algorithm (consisting of elevated emotional lability, anxiousness, separation insecurity, hostility, depressivity, impulsivity, and risk taking) evidenced moderate-to-excellent accuracy (AUC=0.87, SE=0.01, p<0.0001) with a good balance of specificity (SP=0.76) and sensitivity (SN=0.81). Findings generally support the use of the SCID-II-PQ and the PID-5 BPD algorithm for screening purposes. Furthermore, findings support the accuracy of the DSM-5 alternative model Criterion B trait constellation for diagnosing BPD. Limitations of the study include the single inpatient setting and the use of two discrete samples to assess the PID-5 and SCID-II-PQ.
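The AUC values above have a direct probabilistic reading: the chance that a randomly chosen case scores above a randomly chosen non-case (the Mann-Whitney statistic), while sensitivity and specificity describe one chosen cutoff. A minimal numpy sketch with invented screening scores:

```python
import numpy as np

def auc(case_scores, control_scores):
    """AUC as the Mann-Whitney probability that a random case
    outscores a random control (ties count half)."""
    pos = np.asarray(case_scores, float)[:, None]
    neg = np.asarray(control_scores, float)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule 'score >= cutoff'."""
    s = np.asarray(scores, float)
    y = np.asarray(labels, bool)
    sensitivity = (s[y] >= cutoff).mean()
    specificity = (s[~y] < cutoff).mean()
    return sensitivity, specificity

# Invented scores, purely illustrative of the computation.
cases, controls = [7, 8, 9, 6], [3, 5, 6, 2]
print(auc(cases, controls))
```

Moving the cutoff trades sensitivity against specificity along the ROC curve; the AUC summarizes that whole trade-off in one number.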
NASA Astrophysics Data System (ADS)
Cavigelli, Lukas; Bernath, Dominic; Magno, Michele; Benini, Luca
2016-10-01
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive, and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3× smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting, classes such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
Gillian, Jeffrey K.; Karl, Jason W.; Elaksher, Ahmed; Duniway, Michael C.
2017-01-01
Structure-from-motion (SfM) photogrammetry from unmanned aerial system (UAS) imagery is an emerging tool for repeat topographic surveying of dryland erosion. These methods are particularly appealing due to the ability to cover large landscapes compared to field methods, at reduced cost and finer spatial resolution compared to airborne laser scanning. Accuracy and precision of high-resolution digital terrain models (DTMs) derived from UAS imagery have been explored in many studies, typically by comparing image coordinates to surveyed check points or LiDAR datasets. In addition to traditional check points, this study compared 5 cm resolution DTMs derived from fixed-wing UAS imagery with a traditional ground-based method of measuring soil surface change called erosion bridges. We assessed accuracy by comparing the elevation values between DTMs and erosion bridges along thirty topographic transects, each 6.1 m long. Comparisons occurred at two points in time (June 2014, February 2015), which enabled us to assess vertical accuracy with 3314 data points and vertical precision (i.e., repeatability) with 1657 data points. We found strong vertical agreement (accuracy) between the methods (RMSE 2.9 and 3.2 cm in June 2014 and February 2015, respectively) and high vertical precision for the DTMs (RMSE 2.8 cm). Our results from comparing SfM-generated DTMs to check points, together with the strong agreement with erosion bridge measurements, suggest that repeat UAS imagery and SfM processing could replace erosion bridges for a more synoptic landscape assessment of shifting soil surfaces in some studies. However, while collecting the UAS imagery and generating the SfM DTMs for this study was faster than collecting erosion bridge measurements, technical challenges related to the need for ground control networks and image processing requirements must be addressed before this technique can be applied effectively to large landscapes.
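The vertical accuracy and precision figures quoted above are root-mean-square errors over paired elevations. A minimal sketch of the computation, with invented transect values rather than the study's measurements:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square difference between paired elevation series."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical DTM vs. erosion-bridge elevations along a transect (m).
dtm    = [101.02, 101.05, 100.98, 101.10]
bridge = [101.00, 101.08, 101.01, 101.07]
print(round(rmse(dtm, bridge) * 100, 2), "cm")
```

The same function applied to two repeat DTMs of an unchanged surface gives the precision (repeatability) figure.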
NASA Technical Reports Server (NTRS)
Shiffman, Smadar
2004-01-01
Automated cloud detection and tracking is an important step in assessing global climate change via remote sensing. Cloud masks, which indicate whether individual pixels depict clouds, are included in many of the data products that are based on data acquired on board Earth satellites. Many cloud-mask algorithms have the form of decision trees, which employ sequential tests that scientists designed based on empirical studies and simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In this study we explored the potential benefits of automatically learned decision trees for detecting clouds from images acquired using the Advanced Very High Resolution Radiometer (AVHRR) instrument on board the NOAA-14 weather satellite of the National Oceanic and Atmospheric Administration. We constructed three decision trees for a sample of 8-km daily AVHRR data from 2000 using a decision-tree learning procedure provided within MATLAB(R), and compared the accuracy of the decision trees to the accuracy of the cloud mask. We used ground observations collected by the National Aeronautics and Space Administration's Clouds and the Earth's Radiant Energy System (CERES) S'COOL project as the gold standard. For the sample data, the accuracy of the automatically learned decision trees was greater than the accuracy of the cloud masks included in the AVHRR data product.
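Hand-built cloud-mask decision trees of the kind described apply sequential threshold tests to each pixel. The toy sketch below illustrates the structure only; the thresholds are invented, not operational AVHRR values.

```python
def cloud_test(vis_reflectance, thermal_bt_kelvin):
    """Toy sequential-threshold cloud test for a single pixel, in the
    style of hand-designed cloud-mask decision trees.
    Thresholds are illustrative, not operational AVHRR values."""
    if thermal_bt_kelvin < 265.0:   # very cold scene: likely cloud top
        return "cloudy"
    if vis_reflectance > 0.35:      # bright scene at moderate temperature
        return "cloudy"
    return "clear"

print(cloud_test(0.10, 255.0), cloud_test(0.50, 280.0), cloud_test(0.10, 280.0))
```

An automatically learned tree has the same if/else shape, but the split variables and thresholds are fit to labeled data (here, the S'COOL ground observations) by a tree-induction routine such as MATLAB's classification-tree learner, rather than set by hand.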
Seeking a valid gold standard for an innovative, dialect-neutral language test.
Pearson, Barbara Zurer; Jackson, Janice E; Wu, Haotian
2014-04-01
PURPOSE In this study, the authors explored alternative gold standards to validate an innovative, dialect-neutral language assessment. METHOD Participants were 78 African American children, ages 5;0 (years;months) to 6;11. Twenty participants had previously been identified as having language impairment. The Diagnostic Evaluation of Language Variation-Norm Referenced (DELV-NR; Seymour, Roeper, & J. de Villiers, 2005) was administered, and concurrent language samples (LSs) were collected. Using LS profiles as the gold standard, sensitivity, specificity, and other measures of diagnostic accuracy were compared for diagnoses made from the DELV-NR and participants' clinical status prior to recruitment. In a second analysis, the authors used results from the first analysis to make evidence-based adjustments in the estimates of DELV-NR diagnostic accuracy. RESULTS Accuracy of the DELV-NR relative to LS profiles was greater than that of prior diagnoses, indicating that the DELV-NR was an improvement over preexisting diagnoses for this group. Specificity met conventional standards, but sensitivity was somewhat low. Reanalysis using the positive and negative predictive power of the preexisting diagnosis in a discrepant-resolution procedure revealed that estimates for sensitivity and specificity for the DELV-NR were .85 and .93, respectively. CONCLUSION The authors found that, even after making allowances for the imperfection of available gold standards, clinical decisions made with the DELV-NR achieved high values on conventional measures of diagnostic accuracy.
Zewdie, Getie A.; Cox, Dennis D.; Neely Atkinson, E.; Cantor, Scott B.; MacAulay, Calum; Davies, Kalatu; Adewole, Isaac; Buys, Timon P. H.; Follen, Michele
2012-01-01
Optical spectroscopy has been proposed as an accurate and low-cost alternative for detection of cervical intraepithelial neoplasia. We previously published an algorithm using optical spectroscopy as an adjunct to colposcopy and found good accuracy (sensitivity=1.00 [95% confidence interval (CI)=0.92 to 1.00], specificity=0.71 [95% CI=0.62 to 0.79]). Those results used measurements taken by expert colposcopists as well as the colposcopy diagnosis. In this study, we trained and tested an algorithm for the detection of cervical intraepithelial neoplasia (i.e., identifying those patients who had histology reading CIN 2 or worse) that did not include the colposcopic diagnosis. Furthermore, we explored the interaction between spectroscopy and colposcopy, examining the importance of probe placement expertise. The colposcopic diagnosis-independent spectroscopy algorithm had a sensitivity of 0.98 (95% CI=0.89 to 1.00) and a specificity of 0.62 (95% CI=0.52 to 0.71). The difference in the partial area under the ROC curves between spectroscopy with and without the colposcopic diagnosis was statistically significant at the patient level (p=0.05) but not the site level (p=0.13). The results suggest that the device has high accuracy over a wide range of provider accuracy and hence could plausibly be implemented by providers with limited training. PMID:22559693
Reaction time and accuracy in individuals with aphasia during auditory vigilance tasks.
Laures, Jacqueline S
2005-11-01
Research indicates that attentional deficits exist in aphasic individuals. However, relatively little is known about auditory vigilance performance in individuals with aphasia. The current study explores reaction time (RT) and accuracy in 10 aphasic participants and 10 non-brain-damaged controls during linguistic and nonlinguistic auditory vigilance tasks. Findings indicate that the aphasic group was less accurate during both tasks than the control group, but was not slower in its accurate responses. Further examination of the data revealed variability in the aphasic participants' RT that contributed to the lower accuracy scores.
Sarig Bahat, Hilla; Chen, Xiaoqi; Reznik, David; Kodesh, Einat; Treleaven, Julia
2015-04-01
Chronic neck pain has been consistently shown to be associated with impaired kinematic control, including reduced range, velocity and smoothness of cervical motion, which seem relevant to daily function, as in quick neck motion in response to surrounding stimuli. The objectives of this study were: to compare interactive cervical kinematics in patients with neck pain and controls; to explore new measures of cervical motion accuracy; and to find the sensitivity, specificity, and optimal cutoff values for defining impaired kinematics in those with neck pain. In this cross-sectional study, 33 patients with chronic neck pain and 22 asymptomatic controls were assessed for their cervical kinematic control using interactive virtual reality hardware and customized software utilizing a head-mounted display with built-in head tracking. Outcome measures included peak and mean velocity, smoothness (represented by number of velocity peaks (NVP)), symmetry (represented by time to peak velocity percentage (TTPP)), and accuracy of cervical motion. Results demonstrated significant and strong effect-size differences in peak and mean velocities, NVP and TTPP in all directions excluding TTPP in left rotation, and good effect-size group differences in 5/8 accuracy measures. Regression results emphasized the high clinical value of neck motion velocity, with very high sensitivity and specificity (85%-100%), followed by motion smoothness, symmetry and accuracy. These findings suggest cervical kinematics should be evaluated clinically and screened against the provided cutoff values to identify relevant impairments in those with neck pain. Such identification of the presence or absence of kinematic impairments may direct treatment strategies and additional evaluation when needed.
Spectral-element Method for 3D Marine Controlled-source EM Modeling
NASA Astrophysics Data System (ADS)
Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.
2017-12-01
As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep-water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method with the high accuracy of the spectral method. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As high-order complete orthogonal polynomials, the GLC basis converges exponentially. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the method delivers accurate results, and the accuracy improves markedly with increasing SEM order. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only with a sufficiently fine mesh can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
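For orientation, the equation being discretised is the frequency-domain curl-curl (vector Helmholtz) equation for the electric field. The sketch below uses a standard quasi-static formulation with conductivity σ and source current J_s; the paper's exact sign and time conventions may differ.

```latex
% Curl-curl equation for the electric field (quasi-static regime):
\nabla \times \nabla \times \mathbf{E}
  \;-\; \mathrm{i}\,\omega \mu_0 \sigma \,\mathbf{E}
  \;=\; \mathrm{i}\,\omega \mu_0 \,\mathbf{J}_s .

% Galerkin weak form with curl-conforming vector basis functions N_i
% (E is expanded in the same GLC basis, yielding the linear system):
\int_\Omega (\nabla \times \mathbf{N}_i)\cdot(\nabla \times \mathbf{E})\,\mathrm{d}V
  \;-\; \mathrm{i}\,\omega \mu_0 \int_\Omega \sigma\, \mathbf{N}_i \cdot \mathbf{E}\,\mathrm{d}V
  \;=\; \mathrm{i}\,\omega \mu_0 \int_\Omega \mathbf{N}_i \cdot \mathbf{J}_s\,\mathrm{d}V .
```

Choosing curl-conforming basis functions keeps tangential field continuity across element boundaries, which is what makes the Galerkin system well posed for this equation.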
Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer
NASA Astrophysics Data System (ADS)
Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad
2017-04-01
Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges, including feature redundancy, unbalanced data, and small sample sizes, have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving the predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of the prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were the optimal predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, the Synthetic Minority Over-sampling Technique (SMOTE) was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
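The PCA-plus-Random-Forest configuration the study identifies can be sketched as a scikit-learn pipeline. This is a generic illustration on synthetic data standing in for a redundant radiomic feature matrix, not the study's features or settings; for unbalanced endpoints, SMOTE (from the separate imbalanced-learn package) would be inserted before the classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 112 patients, 400 highly redundant features
# driven by 5 latent factors, with a binary outcome (e.g. recurrence).
n, p = 112, 400
latent = rng.normal(size=(n, 5))
X = latent @ rng.normal(size=(5, p)) + 0.1 * rng.normal(size=(n, p))
y = (latent[:, 0] > 0).astype(int)

# PCA absorbs the feature redundancy; the Random Forest predicts.
model = make_pipeline(
    PCA(n_components=5),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(model, X, y, cv=5)
print(round(scores.mean(), 3))
```

Cross-validation is essential at this sample size: with hundreds of features and ~100 patients, an unvalidated fit will look far more accurate than it is.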
Olejník, Peter; Nosal, Matej; Havran, Tomas; Furdova, Adriana; Cizmar, Maros; Slabej, Michal; Thurzo, Andrej; Vitovic, Pavol; Klvac, Martin; Acel, Tibor; Masura, Jozef
2017-01-01
To evaluate the accuracy of the three-dimensional (3D) printing of cardiovascular structures, and to explore whether utilisation of 3D printed heart replicas can improve surgical and catheter interventional planning in patients with complex congenital heart defects. Between December 2014 and November 2015 we fabricated eight cardiovascular models based on computed tomography data in patients with complex spatial anatomical relationships of cardiovascular structures. A Bland-Altman analysis was used to assess the accuracy of 3D printing by comparing dimension measurements at analogous anatomical locations between the printed models and digital imagery data, as well as between printed models and in vivo surgical findings. The contribution of 3D printed heart models to perioperative planning improvement was evaluated in the four most representative patients. Bland-Altman analysis confirmed the high accuracy of 3D cardiovascular printing. Each printed model offered an improved spatial anatomical orientation of cardiovascular structures. Current 3D printers can produce authentic copies of patients' cardiovascular systems from computed tomography data. The use of 3D printed models can facilitate surgical or catheter interventional procedures in patients with complex congenital heart defects due to better preoperative planning and intraoperative orientation.
Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio
2008-01-01
Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.
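Texture bands like the entropy measure used here are computed over a moving window and stacked with the spectral bands before classification. The sketch below is a simplified occurrence-entropy filter with SciPy; the study's measures are GLCM (co-occurrence) statistics, which differ in detail, and the quantisation level count is an assumption.

```python
import numpy as np
from scipy import ndimage

def entropy_texture(band, size=9, levels=16):
    """Local entropy over a size x size moving window after quantising
    the band to 'levels' grey levels. A simplified stand-in for GLCM
    entropy, which is computed from co-occurring pixel pairs instead."""
    edges = np.linspace(band.min(), band.max(), levels)
    q = np.digitize(band, edges).astype(float)

    def local_entropy(window):
        _, counts = np.unique(window, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    return ndimage.generic_filter(q, local_entropy, size=size)

# Toy single-band image standing in for an HRG band.
band = np.random.default_rng(0).normal(size=(30, 30))
tex = entropy_texture(band)          # same shape as the input band
print(tex.shape)
```

The resulting texture image is then appended as an extra band to the multispectral stack fed to the maximum likelihood classifier.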
Interlaboratory comparison measurements of aspheres
NASA Astrophysics Data System (ADS)
Schachtschneider, R.; Fortmeier, I.; Stavridis, M.; Asfour, J.; Berger, G.; Bergmann, R. B.; Beutler, A.; Blümel, T.; Klawitter, H.; Kubo, K.; Liebl, J.; Löffler, F.; Meeß, R.; Pruss, C.; Ramm, D.; Sandner, M.; Schneider, G.; Wendel, M.; Widdershoven, I.; Schulz, M.; Elster, C.
2018-05-01
The need for high-quality aspheres is rapidly growing, necessitating increased accuracy in their measurement. A reliable uncertainty assessment of asphere form measurement techniques is difficult due to their complexity. In order to explore the accuracy of current asphere form measurement techniques, an interlaboratory comparison was carried out in which four aspheres were measured by eight laboratories using tactile measurements, optical point measurements, and optical areal measurements. Altogether, 12 different devices were employed. The measurement results were analysed after subtracting the design topography and subsequently a best-fit sphere from the measurements. The surface reduced in this way was compared to a reference topography that was obtained by taking the pointwise median across the ensemble of reduced topographies on a 1000 × 1000 Cartesian grid. The deviations of the reduced topographies from the reference topography were analysed in terms of several characteristics including peak-to-valley and root-mean-square deviations. Root-mean-square deviations of the reduced topographies from the reference topographies were found to be on the order of some tens of nanometres up to 89 nm, with most of the deviations being smaller than 20 nm. Our results give an indication of the accuracy that can currently be expected in form measurements of aspheres.
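The comparison's reference-topography construction (pointwise median across the ensemble of reduced topographies, then per-device deviation statistics) can be sketched directly in numpy. The grid size and noise levels below are invented for illustration; the study used a 1000 × 1000 grid and 12 real devices.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical ensemble: 12 devices' reduced topographies on a common
# grid (100 x 100 here for brevity), in nanometres.
truth = rng.normal(scale=10.0, size=(100, 100))
ensemble = truth + rng.normal(scale=5.0, size=(12, 100, 100))

# Pointwise median across the ensemble is the reference topography.
reference = np.median(ensemble, axis=0)

# Per-device deviation characteristics from the reference.
dev = ensemble - reference
rms = np.sqrt((dev ** 2).mean(axis=(1, 2)))       # one RMS per device
pv = dev.max(axis=(1, 2)) - dev.min(axis=(1, 2))  # peak-to-valley
print(rms.round(1))
```

The median (rather than the mean) keeps a single outlier device from dragging the reference toward itself, which matters when only a dozen topographies enter the ensemble.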
NASA Astrophysics Data System (ADS)
Koneshov, V. N.; Nepoklonov, V. B.
2018-05-01
The development of studies on estimating the accuracy of the Earth's modern global gravity models in terms of the spherical harmonics of the geopotential in the problematic regions of the world is discussed. A comparative analysis of the results of reconstructing quasi-geoid heights and gravity anomalies from the different models is carried out for two polar regions selected within a radius of 1000 km from the North and South poles. The analysis covers nine recently developed models, including six high-resolution models and three lower-order models, among them the Russian GAOP2012 model. It is shown that the modern models determine the quasi-geoid heights and gravity anomalies in the polar regions with errors ranging from 5-10 cm up to a few dozen centimetres, and from 3-5 mGal up to a few dozen milligals, respectively, depending on the resolution. The accuracy of the models in the Arctic is several times higher than in the Antarctic. This is associated with the peculiarities of the gravity anomalies in each particular region and with the fact that the polar Antarctic has been comparatively less explored by gravity methods than the polar Arctic.
Predictors of nutrition information comprehension in adulthood.
Miller, Lisa M Soederberg; Gibson, Tanja N; Applegate, Elizabeth A
2010-07-01
The goal of the present study was to examine relationships among several predictors of nutrition comprehension. We were particularly interested in exploring whether nutrition knowledge or motivation moderated the effects of attention on comprehension across a wide age range of adults. Ninety-three participants, ages 18-80, completed measures of nutrition knowledge and motivation and then read nutrition information (from which attention allocation was derived) and answered comprehension questions. In general, predictor variables were highly intercorrelated. However, knowledge, but not motivation, had direct effects on comprehension accuracy. In contrast, motivation influenced attention, which in turn influenced accuracy. Results also showed that comprehension accuracy decreased, and knowledge increased, with age. When knowledge was statistically controlled, age declines in comprehension increased. Knowledge is an important predictor of nutrition information comprehension, and its role increases in later life. Motivation is also important; however, its effects on comprehension differ from those of knowledge. Health educators and clinicians should consider cognitive skills such as knowledge, as well as the motivation and age of patients, when deciding how best to convey health information. The increased role of knowledge among older adults suggests that lifelong educational efforts may have important payoffs in later life.
Accurate aging of juvenile salmonids using fork lengths
Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua
2017-01-01
Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish by scale reading is time-consuming and can be error prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate that greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. The highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable to taxa for which multiple age cohorts coexist sympatrically. Where applicable, the method of aging individual fish is relatively quick to implement and can avoid the ager interpretation bias common in scale-based aging.
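The mixture-model thresholding described above can be sketched with scikit-learn: fit a two-component Gaussian mixture to the fork lengths and take the length at which posterior cohort membership flips as the age-discriminating threshold. The cohort means and sample sizes below are invented for illustration, not the study's data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Hypothetical fork lengths (mm) for two sympatric age cohorts.
age0 = rng.normal(55, 6, 300)
age1 = rng.normal(90, 8, 200)
lengths = np.concatenate([age0, age1]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(lengths)

# Age-discriminating threshold: the length where the posterior
# membership probability flips between the two fitted components.
grid = np.linspace(lengths.min(), lengths.max(), 2000).reshape(-1, 1)
post = gmm.predict_proba(grid)
threshold = float(grid[np.argmin(np.abs(post[:, 0] - 0.5))][0])
print(round(threshold, 1))
```

Each sampled fish is then aged by comparing its fork length to the threshold, which is why thresholds fit to the same drainage, season, and habitat type classify most accurately.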
Acquisition of Codas in Spanish as a First Language: The Role of Accuracy, Markedness and Frequency
ERIC Educational Resources Information Center
Polo, Nuria
2018-01-01
Studies on the acquisition of Spanish as a first language do not agree on the patterns and factors relevant for coda development. In order to shed light on the questions involved, a longitudinal study of coda development in Northern European Spanish was carried out to explore the relationship between accuracy, markedness and frequency. The study…
ERIC Educational Resources Information Center
Borgna, Georgianna; Convertino, Carol; Marschark, Marc; Morrison, Carolyn; Rizzolo, Kathleen
2011-01-01
Four experiments, each building on the results of the previous ones, explored the effects of several manipulations on learning and the accuracy of metacognitive judgments among deaf and hard-of-hearing (DHH) students. Experiment 1 examined learning and metacognitive accuracy from classroom lectures with or without prior "scaffolding" in the form…
Evidence on the Effectiveness of Comprehensive Error Correction in Second Language Writing
ERIC Educational Resources Information Center
Van Beuningen, Catherine G.; De Jong, Nivja H.; Kuiken, Folkert
2012-01-01
This study investigated the effect of direct and indirect comprehensive corrective feedback (CF) on second language (L2) learners' written accuracy (N = 268). The study set out to explore the value of CF as a revising tool as well as its capacity to support long-term accuracy development. In addition, we tested Truscott's (e.g., 2001, 2007) claims…
ERIC Educational Resources Information Center
Majetic, Cassie; Pellegrino, Catherine
2014-01-01
The skill set associated with lifelong scientific literacy often includes the ability to decode the content and accuracy of scientific research as presented in the media. However, students often find decoding science in the media difficult, due to limited content knowledge and shifting definitions of accuracy. Faculty have developed a variety of…
Mansour, Jamal K; Beaudry, Jennifer L; Bertrand, Michelle I; Kalmet, Natalie; Melsom, Elisabeth I; Lindsay, Roderick C L
2012-12-01
Prior research indicates that disguise negatively affects lineup identifications, but the mechanisms by which disguise works have not been explored, and different disguises have not been compared. In two experiments (Ns = 87 and 91) we manipulated degree of coverage by two different types of disguise: a stocking mask or sunglasses and toque (i.e., knitted hat). Participants viewed mock-crime videos followed by simultaneous or sequential lineups. Disguise and lineup type did not interact. In support of the view that disguise prevents encoding, identification accuracy generally decreased with degree of disguise. For the stocking disguise, however, full and 2/3 coverage led to approximately the same rate of correct identifications, which suggests that disrupting encoding of specific features may be as detrimental as disrupting a whole face. Accuracy was most affected by sunglasses, and we discuss the role metacognitions may have played. Lineup selections decreased more slowly than accuracy as coverage by disguise increased, indicating witnesses are insensitive to the effect of encoding conditions on accuracy. We also explored the impact of disguise and lineup type on witnesses' confidence in their lineup decisions, though the results were not straightforward.
Testing the exclusivity effect in location memory.
Clark, Daniel P A; Dunn, Andrew K; Baguley, Thom
2013-01-01
There is growing literature exploring the possibility of parallel retrieval of location memories, although this literature focuses primarily on the speed of retrieval with little attention to the accuracy of location memory recall. Baguley, Lansdale, Lines, and Parkin (2006) found that when a person has two or more memories for an object's location, their recall accuracy suggests that only one representation can be retrieved at a time (exclusivity). This finding is counterintuitive given evidence of non-exclusive recall in the wider memory literature. The current experiment explored the exclusivity effect further and aimed to promote an alternative outcome (i.e., independence or superadditivity) by encouraging the participants to combine multiple representations of space at encoding or retrieval. This was encouraged by using anchor (points of reference) labels that could be combined to form a single strongly associated combination. It was hypothesised that the ability to combine the anchor labels would allow the two representations to be retrieved concurrently, generating higher levels of recall accuracy. The results demonstrate further support for the exclusivity hypothesis, showing no significant improvement in recall accuracy when there are multiple representations of a target object's location as compared to a single representation.
Investigation on the Practicality of Developing Reduced Thermal Models
NASA Technical Reports Server (NTRS)
Lombardi, Giancarlo; Yang, Kan
2015-01-01
Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate their on-orbit behavior and to ensure that they do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can sometimes lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Preexisting instrument-bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models: namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, the Advanced Topography Laser Altimeter System (ATLAS) on the Ice, Cloud, and Land Elevation Satellite 2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture the behavior of the models at both thermal extremes. It was found that, though decreasing the number of nodes from a detailed to a reduced model did shorten the run time, the time savings were not large, nor was the relationship between the percentage of nodes reduced and the time saved linear. However, significant losses in accuracy were observed with greater model reduction. It was found that while reduced models are useful in decreasing run time, there exists a threshold of reduction beyond which the loss in accuracy outweighs the benefit of the reduced run time.
Giraldo-Cadavid, Luis F; Agudelo-Otalora, Luis Mauricio; Burguete, Javier; Arbulu, Mario; Moscoso, William Daniel; Martínez, Fabio; Ortiz, Andrés Felipe; Diaz, Juan; Pantoja, Jaime A; Rueda-Arango, Andrés Felipe; Fernández, Secundino
2016-05-10
Laryngo-pharyngeal mechano-sensitivity (LPMS) is involved in dysphagia, sleep apnea, stroke, irritable larynx syndrome and cough hypersensitivity syndrome among other disorders. These conditions are associated with a wide range of airway reflex abnormalities. However, the current device for exploring LPMS is limited because it assesses only the laryngeal adductor reflex during fiber-optic endoscopic evaluations of swallowing and requires a high degree of expertise to obtain reliable results, introducing intrinsic expert variability and subjectivity. We designed, developed and validated a new air-pulse laryngo-pharyngeal endoscopic esthesiometer with a built-in laser range-finder (LPEER) based on the evaluation and control of air-pulse variability determinants and on intrinsic observer variability and subjectivity determinants of the distance, angle and site of stimulus impact. The LPEER was designed to be capable of delivering precise and accurate stimuli with a wide range of intensities that can explore most laryngo-pharyngeal reflexes. We initially explored the potential factors affecting the reliability of LPMS tests and included these factors in a multiple linear regression model. The following factors significantly affected the precision and accuracy of the test (P < 0.001): the tube conducting the air-pulses, the supply pressure of the system, the duration of the air-pulses, and the distance and angle between the end of the tube conducting the air-pulses and the site of impact. To control all of these factors, an LPEER consisting of an air-pulse generator and an endoscopic laser range-finder was designed and manufactured. We assessed the precision and accuracy of the LPEER's stimulus and range-finder according to the coefficient of variation (CV) and by looking at the differences between the measured properties and the desired values, and we performed a pilot validation on ten human subjects. 
The air-pulses and range-finder exhibited good precision and accuracy (CV < 0.06), with differences between the desired and measured properties at <3 % and a range-finder measurement error of <1 mm. The tests in patients demonstrated obtainable and reproducible thresholds for the laryngeal adductor, cough and gag reflexes. The new LPEER was capable of delivering precise and accurate stimuli for exploring laryngo-pharyngeal reflexes.
NASA Technical Reports Server (NTRS)
Challa, M.; Natanson, G.
1998-01-01
Two different algorithms - a deterministic magnetic-field-only algorithm and a Kalman filter for gyroless spacecraft - are used to estimate the attitude and rates of the Rossi X-Ray Timing Explorer (RXTE) using only measurements from a three-axis magnetometer. The performance of these algorithms is examined using in-flight data from various scenarios. In particular, significant enhancements in accuracy are observed when the telemetered magnetometer data are accurately calibrated using a recently developed calibration algorithm. Interesting features observed in these studies of the inertial-pointing RXTE include a remarkable sensitivity of the filter to the numerical values of the noise parameters and relatively long convergence time spans. The accuracy of the deterministic scheme is noticeably lower as a result of the reduced rates of change of the body-fixed geomagnetic field. Preliminary results show the filter's per-axis attitude accuracies ranging between 0.1 and 0.5 deg and rate accuracies between 0.001 deg/sec and 0.005 deg/sec, whereas the deterministic method needs more sophisticated techniques for smoothing time derivatives of the measured geomagnetic field to clearly distinguish both attitude and rate solutions from the numerical noise. Also included is a new theoretical development in the deterministic algorithm: the transformation of a transcendental equation in the original theory into an 8th-order polynomial equation. It is shown that this 8th-order polynomial reduces to quadratic equations in the two limiting cases (infinitely high wheel momentum and constant rates) discussed in previous publications.
Larmer, S G; Sargolzaei, M; Schenkel, F S
2014-05-01
Genomic selection requires a large reference population to accurately estimate single nucleotide polymorphism (SNP) effects. In some Canadian dairy breeds, the available reference populations are not large enough for accurate estimation of SNP effects for traits of interest. If marker phase is highly consistent across multiple breeds, it is theoretically possible to increase the accuracy of genomic prediction for one or all breeds by pooling several breeds into a common reference population. This study investigated the extent of linkage disequilibrium (LD) in 5 major dairy breeds using a 50,000 (50K) SNP panel and in 3 of the same breeds using the 777,000 (777K) SNP panel. Correlation of pair-wise SNP phase was also investigated on both panels. The level of LD was measured using the squared correlation of alleles at 2 loci (r²), and the consistency of SNP gametic phases was assessed by correlating the signed square roots of these values. Because of the high cost of the 777K panel, the accuracy of imputation from lower density marker panels [6,000 (6K) or 50K] was examined both within breed and using a multi-breed reference population in Holstein, Ayrshire, and Guernsey. Imputation was carried out using FImpute V2.2 and Beagle 3.3.2 software. Imputation accuracies were then calculated as both the proportion of correct SNP filled in (concordance rate) and allelic R². Computation time was also explored to determine the efficiency of the different algorithms for imputation. Analysis showed that LD values >0.2 were found in all breeds at distances at or shorter than the average adjacent pair-wise distance between SNP on the 50K panel. Correlations of r-values, however, did not reach high levels (<0.9) at these distances. High correlation values of SNP phase between breeds were observed (>0.94) when the average pair-wise distances using the 777K SNP panel were examined.
High concordance rates (0.968-0.995) and allelic R² values (0.946-0.991) were found for all breeds when imputation was carried out with FImpute from 50K to 777K. Imputation accuracy for Guernsey and Ayrshire was slightly lower when using the imputation method in Beagle. Computing time was significantly greater when using Beagle software, with all comparable procedures being 9 to 13 times less efficient, in terms of time, compared with FImpute. These findings suggest that use of a multi-breed reference population might increase prediction accuracy using the 777K SNP panel and that 777K genotypes can be efficiently and effectively imputed using the lower density 50K SNP panel.
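The two pairwise statistics in this abstract, LD as r² and phase consistency via its signed square root, both reduce to the correlation of allele counts between loci. A minimal sketch on hypothetical phased 0/1 haplotype vectors (not the study's data):

```python
import numpy as np

def ld_r2(hap_a, hap_b):
    """LD between two biallelic loci: squared correlation of allele counts (r^2)."""
    r = np.corrcoef(hap_a, hap_b)[0, 1]
    return r ** 2

def phase_r(hap_a, hap_b):
    """Signed square root of r^2. The sign carries gametic-phase information,
    so correlating these values across breeds measures phase consistency."""
    return np.corrcoef(hap_a, hap_b)[0, 1]

# Hypothetical phased haplotypes (0/1 alleles) at two nearby loci
locus1 = np.array([0, 0, 1, 1, 0, 1, 1, 0])
locus2 = np.array([0, 0, 1, 1, 0, 1, 0, 0])  # mostly in phase with locus1
```

Two loci in perfect positive or negative phase give phase_r of +1 or -1 respectively, while r² is 1 in both cases; this is why the signed values, not r², must be compared across breeds.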
NASA Astrophysics Data System (ADS)
Dedic, Chloe Elizabeth
Hybrid femtosecond/picosecond coherent anti-Stokes Raman scattering (fs/ps CARS) is developed for measuring internal energy distributions, species concentration, and pressure for highly dynamic gas-phase environments. Systems of interest include next-generation combustors, plasma-based manufacturing and plasma-assisted combustion, and high-speed aerodynamic flow. These challenging environments include spatial variations and fast dynamics that require the spatial and temporal resolution offered by hybrid fs/ps CARS. A novel dual-pump fs/ps CARS approach is developed to simultaneously excite pure-rotational and rovibrational Raman coherences for dynamic thermometry (300-2400 K) and detection of major combustion species. This approach was also used to measure single-shot vibrational and rotational energy distributions of the nonequilibrium environment of a dielectric barrier discharge plasma. Detailed spatial distributions and shot-to-shot fluctuations of rotational and vibrational temperatures spanning 325-450 K and 1200-5000 K were recorded across the plasma and surrounding flow, and are compared to plasma emission spectroscopy measurements. Dual-pump hybrid fs/ps CARS allows for concise, kHz-rate measurements of vibrational and rotational energy distributions or temperatures at equilibrium and nonequilibrium without nonresonant wave-mixing or molecular collisional interference. Additionally, a highly transient ns laser spark is explored using CARS to measure temperature and pressure behind the shock wave and temperature of the expanding plasma kernel. Vibrational energy distributions at the exit of a microscale gaseous detonation tube are presented. Theory required to model fs/ps CARS response, including nonthermal energy distributions, is presented. The impact of nonequilibrium on measurement accuracy is explored, and a coherent line-mixing model is validated with high-pressure measurements. 
Temperature and pressure sensitivity are investigated for multiple measurement configurations, and accuracy and precision are quantified as a function of signal-to-noise ratio for the fs/ps CARS system.
3D Tracking Based Augmented Reality for Cultural Heritage Data Management
NASA Astrophysics Data System (ADS)
Battini, C.; Landi, G.
2015-02-01
The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time but with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations, augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about cultural heritage status is crucial for its interpretation, its conservation and during the restoration processes. The application of digital-imaging solutions for feature extraction, image data analysis, and three-dimensional reconstruction of ancient artworks allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware is rapidly evolving, and complex three-dimensional models can be interactively visualised and explored in applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that will allow interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.
ERIC Educational Resources Information Center
Hintze, John M.; Ryan, Amanda L.; Stoner, Gary
2003-01-01
The purpose of this study was to (a) examine the concurrent validity of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) with the Comprehensive Test of Phonological Processing (CTOPP), and (b) explore the diagnostic accuracy of the DIBELS in predicting CTOPP performance using suggested and alternative cut-scores. Eighty-six students…
ERIC Educational Resources Information Center
Schiff, Rachel; Katzir, Tami; Shoshan, Noa
2013-01-01
The present study examined the effects of orthographic transparency on reading ability of children with dyslexia in two Hebrew scripts. The study explored the reading accuracy and speed of vowelized and unvowelized Hebrew words of fourth-grade children with dyslexia. A comparison was made to typically developing readers of two age groups: a group…
NASA Astrophysics Data System (ADS)
Call, Mitchell; Schulz, Kai G.; Carvalho, Matheus C.; Santos, Isaac R.; Maher, Damien T.
2017-03-01
A new approach to autonomously determine concentrations of dissolved inorganic carbon (DIC) and its carbon stable isotope ratio (δ13C-DIC) at high temporal resolution is presented. The simple method requires no customised design. Instead it uses two commercially available instruments currently used in aquatic carbon research. An inorganic carbon analyser utilising non-dispersive infrared detection (NDIR) is coupled to a Cavity Ring-down Spectrometer (CRDS) to determine DIC and δ13C-DIC based on the liberated CO2 from acidified aliquots of water. Using a small sample volume of 2 mL, the precision and accuracy of the new method was comparable to standard isotope ratio mass spectrometry (IRMS) methods. The system achieved a sampling resolution of 16 min, with a DIC precision of ±1.5 to 2 µmol kg⁻¹ and δ13C-DIC precision of ±0.14 ‰ for concentrations spanning 1000 to 3600 µmol kg⁻¹. Accuracy of 0.1 ± 0.06 ‰ for δ13C-DIC based on DIC concentrations ranging from 2000 to 2230 µmol kg⁻¹ was achieved during a laboratory-based algal bloom experiment. The high precision data that can be autonomously obtained by the system should enable complex carbonate system questions to be explored in aquatic sciences using high-temporal-resolution observations.
Focus drive mechanism for the IUE scientific instrument
NASA Technical Reports Server (NTRS)
Devine, E. J.; Dennis, T. B., Jr.
1977-01-01
A compact, lightweight mechanism was developed for in-orbit adjustment of the position of the secondary mirror (focusing) of the International Ultraviolet Explorer telescope. This device is a linear drive with small (0.0004 in.) and highly repeatable step increments. Extremely close tolerances are also held in tilt and decentering. The unique mechanization is described with attention to the design details that contribute to positional accuracy. Lubrication, materials, thermal considerations, sealing, detenting against launch loads, and other features peculiar to flight hardware are discussed. The methods employed for mounting the low-expansion quartz mirror with minimum distortion are also given.
Online Knowledge-Based Model for Big Data Topic Extraction.
Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan
2016-01-01
Lifelong machine learning (LML) models learn with experience, maintaining a knowledge-base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By re-engineering the knowledge-base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while halving the processing cost.
Field programmable gate array-assigned complex-valued computation and its limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com; Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien; Zwick, Wolfgang
We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure simulation accuracy is maintained.
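The fixed-point trade-off screened in this abstract can be illustrated off-target in plain Python: a hypothetical Q-format complex multiply (not the authors' FPGA implementation), whose relative error against exact floating-point arithmetic shrinks as fractional bits are added.

```python
def to_fixed(x, frac_bits):
    """Quantise a real value to a signed fixed-point integer (Q-format)."""
    return round(x * (1 << frac_bits))

def fixed_complex_mul(a, b, frac_bits):
    """Complex multiply using only integer arithmetic, as an FPGA DSP slice would."""
    ar, ai = to_fixed(a.real, frac_bits), to_fixed(a.imag, frac_bits)
    br, bi = to_fixed(b.real, frac_bits), to_fixed(b.imag, frac_bits)
    rr = (ar * br - ai * bi) >> frac_bits   # real part, rescaled by truncation
    ri = (ar * bi + ai * br) >> frac_bits   # imaginary part, rescaled
    scale = float(1 << frac_bits)
    return complex(rr / scale, ri / scale)

a, b = 0.7 - 0.2j, -0.4 + 0.9j
errors = {}
for fb in (8, 16, 24):
    approx = fixed_complex_mul(a, b, fb)
    errors[fb] = abs(approx - a * b) / abs(a * b)
```

Sweeping the fractional word length like this is how candidate formats would be screened: pick the narrowest format whose relative error stays within the simulation's accuracy budget.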
The Software Design for the Wide-Field Infrared Explorer Attitude Control System
NASA Technical Reports Server (NTRS)
Anderson, Mark O.; Barnes, Kenneth C.; Melhorn, Charles M.; Phillips, Tom
1998-01-01
The Wide-Field Infrared Explorer (WIRE), currently scheduled for launch in September 1998, is the fifth of five spacecraft in the NASA/Goddard Small Explorer (SMEX) series. This paper presents the design of WIRE's Attitude Control System flight software (ACS FSW). WIRE is a momentum-biased, three-axis stabilized stellar pointer which provides high-accuracy pointing and autonomous acquisition for eight to ten stellar targets per orbit. WIRE's short mission life and limited cryogen supply motivate requirements for Sun and Earth avoidance constraints which are designed to prevent catastrophic instrument damage and to minimize the heat load on the cryostat. The FSW implements autonomous fault detection and handling (FDH) to enforce these instrument constraints and to perform several other checks which ensure the safety of the spacecraft. The ACS FSW implements modules for sensor data processing, attitude determination, attitude control, guide star acquisition, actuator command generation, command/telemetry processing, and FDH. These software components are integrated with a hierarchical control mode managing module that dictates which software components are currently active. The lowest mode in the hierarchy is the 'safest' one, in the sense that it utilizes a minimal complement of sensors and actuators to keep the spacecraft in a stable configuration (power and pointing constraints are maintained). As higher modes in the hierarchy are achieved, the various software functions are activated by the mode manager, and an increasing level of attitude control accuracy is provided. If FDH detects a constraint violation or other anomaly, it triggers a safing transition to a lower control mode.
The WIRE ACS FSW satisfies all target acquisition and pointing accuracy requirements, enforces all pointing constraints, provides the ground with a simple means for reconfiguring the system via table load, and meets all the demands of its real-time embedded environment (16 MHz Intel 80386 processor with 80387 coprocessor running under the VRTX operating system). The mode manager organizes and controls all the software modules used to accomplish these goals, and in particular, the FDH module is tightly coupled with the mode manager.
NASA Technical Reports Server (NTRS)
Aldrich, R. C.; Greentree, W. J.; Heller, R. C.; Norick, N. X.
1970-01-01
In October 1969, an investigation was begun near Atlanta, Georgia, to explore the possibilities of developing predictors for forest land and stand condition classifications using space photography. It has been found that forest area can be predicted with reasonable accuracy on space photographs using ocular techniques. Infrared color film is the best single multiband sensor for this purpose. Using the Apollo 9 infrared color photographs taken in March 1969, photointerpreters were able to predict forest area for small units consistently within 5 to 10 percent of ground truth. Approximately 5,000 density data points were recorded for 14 scan lines selected at random from five study blocks. The mean densities and standard deviations were computed for 13 separate land use classes. The results indicate that forest area cannot be separated from other land uses with a high degree of accuracy using optical film density alone. If, however, densities derived by introducing red, green, and blue cutoff filters in the optical system of the microdensitometer are combined with their differences and their ratios in regression analysis techniques, there is a good possibility of discriminating forest from all other classes.
NASA Astrophysics Data System (ADS)
Morsdorf, F.; Meier, E.; Koetz, B.; Nüesch, D.; Itten, K.; Allgöwer, B.
2003-04-01
The potential of airborne laserscanning for mapping forest stands has been intensively evaluated in the past few years. Algorithms deriving structural forest parameters in a stand-wise manner from laser data have been successfully implemented by a number of researchers. However, with very high point density laser data (>20 points/m^2) we pursue the approach of deriving these parameters on a single-tree basis. We explore the potential of delineating single trees from laser scanner raw data (x, y, z triples) and validate this approach with a dataset of more than 2000 georeferenced trees, including tree height and crown diameter, gathered on a long-term forest monitoring site by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL). The accuracy of the laser scanner is evaluated through 6 reference targets, being 3x3 m^2 in size and horizontally plain, validating both the horizontal and vertical accuracy of the laser scanner by matching of triangular irregular networks (TINs). Single trees are segmented by a clustering analysis in all three coordinate dimensions, and their geometric properties can then be derived directly from the tree cluster.
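A cluster analysis of raw (x, y, z) returns like the one described can be sketched as follows. The point clouds are synthetic stand-ins for two tree crowns, and DBSCAN is an assumed choice of clusterer (the abstract does not name the algorithm used); the derived height and crown diameter follow the same per-cluster logic.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Two synthetic tree crowns as 3-D point clusters (hypothetical geometry, metres)
tree1 = rng.normal([0.0, 0.0, 10.0], [1.0, 1.0, 2.5], (200, 3))
tree2 = rng.normal([12.0, 0.0, 8.0], [1.0, 1.0, 2.0], (200, 3))
points = np.vstack([tree1, tree2])

# Segment single trees by clustering in all three coordinate dimensions
labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(points)
clusters = sorted(set(labels) - {-1})   # label -1 marks unclustered noise points

# Geometric properties derived directly from each tree cluster
for c in clusters:
    pts = points[labels == c]
    tree_height = pts[:, 2].max()
    centroid_xy = pts[:, :2].mean(axis=0)
    crown_diameter = 2 * np.max(np.linalg.norm(pts[:, :2] - centroid_xy, axis=1))
```

At the >20 points/m² densities the abstract cites, the density-based neighbourhood radius (eps) and minimum cluster size would be tuned against the georeferenced validation trees.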
Fast and confident: postdicting eyewitness identification accuracy in a field study.
Sauerland, Melanie; Sporer, Siegfried L
2009-03-01
The combined postdictive value of postdecision confidence, decision time, and Remember-Know-Familiar (RKF) judgments as markers of identification accuracy was evaluated with 10 targets and 720 participants. In a pedestrian area, passers-by were asked for directions. Identifications were made from target-absent or target-present lineups. Fast (optimum time boundary at 6 seconds) and confident (optimum confidence boundary at 90%) witnesses were highly accurate, slow and nonconfident witnesses highly inaccurate. Although this combination of postdictors was clearly superior to using either postdictor by itself these combinations refer only to a subsample of choosers. Know answers were associated with higher identification performance than Familiar answers, with no difference between Remember and Know answers. The results of participants' post hoc decision time estimates paralleled those with measured decision times. To explore decision strategies of nonchoosers, three subgroups were formed according to their reasons given for rejecting the lineup. Nonchoosers indicating that the target had simply been absent made faster and more confident decisions than nonchoosers stating lack of confidence or lack of memory. There were no significant differences with regard to identification performance across nonchooser groups.
A Two-Zone Multigrid Model for SI Engine Combustion Simulation Using Detailed Chemistry
Ge, Hai-Wen; Juneja, Harmit; Shi, Yu; ...
2010-01-01
An efficient multigrid (MG) model was implemented for spark-ignited (SI) engine combustion modeling using detailed chemistry. The model is designed to be coupled with a level-set-G-equation model for flame propagation (GAMUT combustion model) for highly efficient engine simulation. The model was explored for a gasoline direct-injection SI engine with knocking combustion. The numerical results using the MG model were compared with the results of the original GAMUT combustion model. A simpler one-zone MG model was found to be unable to reproduce the results of the original GAMUT model. However, a two-zone MG model, which treats the burned and unburned regions separately, was found to provide much better accuracy and efficiency than the one-zone MG model. Without loss in accuracy, an order of magnitude speedup was achieved in terms of CPU and wall times. To reproduce the results of the original GAMUT combustion model, either a low searching level or a procedure to exclude high-temperature computational cells from the grouping should be applied to the unburned region, which was found to be more sensitive to the combustion model details.
Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach
Elgendi, Mohamed; Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab
2018-01-01
Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B/K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR=6 and PRD=1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring. PMID:29337892
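The two evaluation metrics named here are simple to compute. The sketch below pairs them with a toy decimation-by-2 compressor and linear-interpolation reconstruction on a synthetic ECG-like signal; this stand-in is illustrative only and does not reproduce the paper's adaptive-predictor pipeline or its B/K decimation factor.

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference between two signals."""
    original = np.asarray(original, float)
    reconstructed = np.asarray(reconstructed, float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

# Toy "compression": keep every 2nd sample (CR = 2), then reconstruct
# by linear interpolation back onto the original time grid
t = np.linspace(0, 1, 400)
ecg_like = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
kept = ecg_like[::2]
cr = ecg_like.size / kept.size              # compression ratio
reconstructed = np.interp(t, t[::2], kept)
quality = prd(ecg_like, reconstructed)      # lower PRD = better fidelity
```

A real evaluation, as in the paper, would also run a beat detector on the reconstructed signal and report its sensitivity and positive predictivity alongside CR and PRD.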
Investigating the sex-related geometric variation of the human cranium.
Bertsatos, Andreas; Papageorgopoulou, Christina; Valakos, Efstratios; Chovalopoulou, Maria-Eleni
2018-01-29
Accurate sexing methods are of great importance in forensic anthropology since sex assessment is among the principal tasks when examining human skeletal remains. The present study explores a novel approach to assessing the most accurate metric traits of the human cranium for sex estimation, based on 80 ectocranial landmarks from 176 modern individuals of known age and sex from the Athens Collection. The purpose of the study is to identify those distance and angle measurements that can be most effectively used in sex assessment. Three-dimensional landmark coordinates were digitized with a Microscribe 3DX and analyzed in GNU Octave. An iterative linear discriminant analysis of all possible combinations of landmarks was performed for each unique set of the 3160 distances and 246,480 angles. Cross-validated correct classification as well as multivariate DFA on the top-performing variables identified 13 craniometric distances with over 85% classification accuracy and 7 angles with over 78%, as well as certain multivariate combinations yielding over 95%. Linear regression of these variables against centroid size was used to assess their relation to the size of the cranium. In contrast to the use of generalized Procrustes analysis (GPA) and principal component analysis (PCA), which constitute the common analytical workflow for such data, our method, although computationally intensive, produced easily applicable discriminant functions of high accuracy while at the same time exploring the maximum of cranial variability.
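The sizes of the two search spaces follow directly from the landmark count: every unordered pair of landmarks defines one distance, and every unordered triplet defines three angles (one at each vertex of the triangle it forms). A quick sketch confirming the figures in the abstract:

```python
from math import comb

def count_measurements(n_landmarks):
    # Each unordered pair of landmarks gives one inter-landmark
    # distance; each unordered triplet gives three angles (one per
    # vertex of the triangle the triplet forms).
    n_distances = comb(n_landmarks, 2)
    n_angles = 3 * comb(n_landmarks, 3)
    return n_distances, n_angles
```

For the 80 ectocranial landmarks used in the study this reproduces the 3160 distances and 246,480 angles that were searched.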
MATISSE: A novel tool to access, visualize and analyse data from planetary exploration missions
NASA Astrophysics Data System (ADS)
Zinzi, A.; Capria, M. T.; Palomba, E.; Giommi, P.; Antonelli, L. A.
2016-04-01
The increasing number and complexity of planetary exploration space missions require new tools to access, visualize and analyse data to improve their scientific return. The ASI Science Data Center (ASDC) addresses this need with the web tool MATISSE (Multi-purpose Advanced Tool for the Instruments of the Solar System Exploration), which allows the visualization of single observations or real-time computed high-order products, directly projected onto a three-dimensional model of the selected target body. With MATISSE it is no longer necessary to download huge quantities of data or to write specific code for every instrument analysed, greatly encouraging studies based on joint analysis of different datasets. In addition, the extremely high-resolution output, usable offline with Python-based free software, together with files readable by standard GIS software, makes it a valuable tool for further processing the data at the best spatial accuracy available. MATISSE's modular structure permits the addition of new missions or tasks and, through dedicated future developments, could be made compliant with the Planetary Virtual Observatory standards currently under definition. In this context, an interface to the NASA ODE REST API has recently been developed, through which public repositories can be accessed.
Kato, Ryuji; Nakano, Hideo; Konishi, Hiroyuki; Kato, Katsuya; Koga, Yuchi; Yamane, Tsuneo; Kobayashi, Takeshi; Honda, Hiroyuki
2005-08-19
To engineer proteins with desirable characteristics from a naturally occurring protein, high-throughput screening (HTS) combined with a directed-evolution approach is the essential technology. However, most HTS techniques are simple positive screenings. The information obtained from the positive candidates is used only as results but rarely as clues for understanding the structural rules which may explain the protein activity. Here, we have attempted to establish a novel strategy for exploring functional proteins in association with computational analysis. As a model case, we explored lipases with inverted enantioselectivity for the substrate p-nitrophenyl 3-phenylbutyrate, starting from the wild-type lipase of Burkholderia cepacia KWI-56, which is originally selective for the (S)-configuration of the substrate. Data from our previous work on (R)-enantioselective lipase screening were applied to a fuzzy neural network (FNN), a bioinformatic algorithm, to extract guidelines for the screening and engineering processes to be followed. FNN has the advantageous feature of extracting hidden rules that lie between sequences of variants and their enzyme activity, yielding high prediction accuracy. Without any prior knowledge, FNN predicted a rule indicating that "size at position L167," among four positions (L17, F119, L167, and L266) in the substrate-binding core region, is the most influential factor for obtaining a lipase with inverted (R)-enantioselectivity. Based on the guidelines obtained, newly engineered variants, which were not found in the actual screening, were experimentally proven to gain high (R)-enantioselectivity by engineering the size at position L167. We also designed and assayed two novel variants, namely FIGV (L17F, F119I, L167G, and L266V) and FFGI (L17F, L167G, and L266I), which were compatible with the guideline obtained from FNN analysis, and confirmed that these designed lipases could acquire high inverted enantioselectivity.
The results have shown that with the aid of bioinformatic analysis, high-throughput screening can expand its potential for exploring vast combinatorial sequence spaces of proteins.
NASA Astrophysics Data System (ADS)
Rivera, Gustavo; Diamessis, Peter
2016-11-01
The shoaling of an internal solitary wave (ISW) of depression over gentle slopes is explored through fully nonlinear and non-hydrostatic simulations based on a high-accuracy deformed spectral multidomain penalty method. As recently observed in the South China Sea, in high-amplitude shoaling ISWs, the along-wave current can exceed the wave celerity resulting in convective instabilities. If the slope is less than 3%, the wave does not disintegrate as in the case of steeper slope shoaling but, instead, maintains its symmetric shape; the above convective instability may drive the formation of a turbulent recirculating core. The sensitivity of convective instabilities in an ISW is examined as a function of the bathymetric slope and wave steepness. ISWs are simulated propagating over both idealized and realistic bathymetry. Emphasis is placed on the structure of the above instabilities, the persistence of trapped cores and their potential for particle entrainment and transport. Additionally, the role of the baroclinic background current on the development of convective instabilities is explored. A preliminary understanding is obtained of the transition to turbulence within a high-amplitude ISW shoaling over progressively varying bathymetry.
Alarcón-Ríos, Lucía; Velo-Antón, Guillermo; Kaliontzopoulou, Antigoni
2017-04-01
The study of morphological variation among and within taxa can shed light on the evolution of phenotypic diversification. In the case of urodeles, the dorso-ventral view of the head captures most of the ontogenetic and evolutionary variation of the entire head, a structure with high potential for being a target of selection due to its relevance in ecological and social functions. Here, we describe a non-invasive geometric morphometric procedure for exploring morphological variation in the external dorso-ventral view of the urodele head. To explore the accuracy of the method and its potential for describing morphological patterns we applied it to two populations of Salamandra salamandra gallaica from NW Iberia. Using landmark-based geometric morphometrics, we detected differences in head shape between populations and sexes, and an allometric relationship between shape and size. We also determined that not all differences in head shape are due to size variation, suggesting intrinsic shape differences across sexes and populations. These morphological patterns had not been previously explored in S. salamandra, despite the high levels of intraspecific diversity within this species. The methodological procedure presented here allows shape variation to be detected at a very fine scale, and solves the drawbacks of using cranial samples, thus increasing the possibilities of using collection specimens and live animals for exploring dorsal head shape variation and its evolutionary and ecological implications in urodeles. J. Morphol. 278:475-485, 2017. © 2017 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik
Scientists working in a particular domain often adhere to conventional data analysis and presentation methods, and this leads to familiarity with these methods over time. But does high familiarity always lead to better analytical judgment? This question is especially relevant when visualizations are used in scientific tasks, as there can be discrepancies between visualization best practices and domain conventions. However, there is little empirical evidence of the relationships between scientists' subjective impressions about familiar and unfamiliar visualizations and objective measures of their effect on scientific judgment. To address this gap and to study these factors, we focus on the climate science domain, specifically on visualizations used for comparison of model performance. We present a comprehensive user study with 47 climate scientists where we explored the following factors: i) relationships between scientists' familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.
Model-order reduction of lumped parameter systems via fractional calculus
NASA Astrophysics Data System (ADS)
Hollkamp, John P.; Sen, Mihir; Semperlotti, Fabio
2018-04-01
This study investigates the use of fractional order differential models to simulate the dynamic response of non-homogeneous discrete systems and to achieve efficient and accurate model order reduction. The traditional integer order approach to the simulation of non-homogeneous systems dictates the use of numerical solutions and often imposes stringent compromises between accuracy and computational performance. Fractional calculus provides an alternative approach where complex dynamical systems can be modeled with compact fractional equations that not only can still guarantee analytical solutions, but can also enable high levels of order reduction without compromising on accuracy. Different approaches are explored in order to transform the integer order model into a reduced order fractional model able to match the dynamic response of the initial system. Analytical and numerical results show that, under certain conditions, an exact match is possible and the resulting fractional differential models have both a complex and frequency-dependent order of the differential operator. The implications of this type of approach for both model order reduction and model synthesis are discussed.
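The abstract does not name the fractional operator used in the reduced model; as a hedged illustration, a common choice in such fractional-order dynamical models is the Caputo derivative of order \(\alpha\):

```latex
{}^{C}\!D^{\alpha}_{t}\,x(t) \;=\; \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} \frac{x^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau, \qquad n-1 < \alpha < n .
```

For \(n-1 < \alpha < n\) this operator interpolates between integer-order derivatives, which is what allows a single compact fractional equation, possibly with a complex and frequency-dependent order, to stand in for a large set of integer-order equations.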
NASA Astrophysics Data System (ADS)
Ghassemi, Aazam; Yazdani, Mostafa; Hedayati, Mohamad
2017-12-01
In this work, based on the First Order Shear Deformation Theory (FSDT), an attempt is made to explore the applicability and accuracy of the Generalized Differential Quadrature Method (GDQM) for the bending analysis of composite sandwich plates under static loading. Comparative studies of the bending behavior of composite sandwich plates are made between two types of boundary conditions for different cases. The effects of fiber orientation, the ratio of thickness to length of the plate, and the ratio of core thickness to face-sheet thickness on the transverse displacement and moment resultants are studied. As shown in this study, the role of the core thickness in the deformation of these plates can be reversed by the stiffness of the core relative to the sheets. The obtained graphs support the optimum design of sandwich plates. In comparison with existing solutions, fast convergence rates and high-accuracy results are achieved by the GDQ method.
Gravitational-wave cosmography with LISA and the Hubble tension
NASA Astrophysics Data System (ADS)
Kyutoku, Koutarou; Seto, Naoki
2017-04-01
We propose that stellar-mass binary black holes like GW150914 will become a tool to explore the local Universe within ∼100 Mpc in the era of the Laser Interferometer Space Antenna (LISA). High calibration accuracy and the annual motion of LISA could enable us to localize up to ≈60 binaries to better than an error volume of ≈100 Mpc³ without electromagnetic counterparts under moderately optimistic assumptions. This accuracy will give us a fair chance to determine the host object solely by gravitational waves. By combining the luminosity distance extracted from gravitational waves with the cosmological redshift determined from the host, the local value of the Hubble parameter will be determined to a few % without relying on the empirically constructed distance ladder. Gravitational-wave cosmography would pave the way for resolution of the disputed Hubble tension, where the local and global measurements disagree on the value of the Hubble parameter at the 3.4σ level, which amounts to ≈9%.
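At the ∼100 Mpc distances discussed here, the Hubble parameter follows from the low-redshift limit of the distance-redshift relation. A minimal sketch, under the assumption that peculiar velocities and higher-order terms in z are ignored:

```python
C_KM_S = 299792.458  # speed of light in km/s

def hubble_constant(redshift, distance_mpc):
    # Low-redshift approximation H0 ~ c z / d_L (km/s/Mpc), adequate
    # within the ~100 Mpc local volume considered in the abstract.
    return C_KM_S * redshift / distance_mpc
```

With the luminosity distance from the gravitational-wave signal and the redshift from the identified host, this is the quantity whose local value would be compared against the global (CMB-inferred) measurement.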
"Frequent frames" in German child-directed speech: a limited cue to grammatical categories.
Stumper, Barbara; Bannard, Colin; Lieven, Elena; Tomasello, Michael
2011-08-01
Mintz (2003) found that in English child-directed speech, frequently occurring frames formed by linking the preceding (A) and succeeding (B) word (A_x_B) could accurately predict the syntactic category of the intervening word (x). This has been successfully extended to French (Chemla, Mintz, Bernal, & Christophe, 2009). In this paper, we show that, as for Dutch (Erkelens, 2009), frequent frames in German do not enable such accurate lexical categorization. This can be explained by the characteristics of German including a less restricted word order compared to English or French and the frequent use of some forms as both determiner and pronoun in colloquial German. Finally, we explore the relationship between the accuracy of frames and their potential utility and find that even some of those frames showing high token-based accuracy are of limited value because they are in fact set phrases with little or no variability in the slot position. Copyright © 2011 Cognitive Science Society, Inc.
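The token-based frame accuracy discussed above can be sketched as follows; the corpus format and the majority-category scoring are assumptions for illustration, not the exact procedure of the paper:

```python
from collections import defaultdict

def frame_accuracy(tagged_corpus):
    # tagged_corpus: list of utterances, each a list of (word, category)
    # pairs. Group each intervening word x by its A_x_B frame and score
    # how often the frame's majority category matches each token.
    frames = defaultdict(list)
    for utt in tagged_corpus:
        for (a, _), (_, cat), (b, _) in zip(utt, utt[1:], utt[2:]):
            frames[(a, b)].append(cat)
    hits = total = 0
    for cats in frames.values():
        majority = max(set(cats), key=cats.count)
        hits += cats.count(majority)
        total += len(cats)
    return hits / total if total else 0.0
```

Note that a frame occurring only once is trivially "accurate" under this measure, which is one reason the paper distinguishes accuracy from utility: a frame that is really a set phrase, with no variability in the slot, scores perfectly while categorizing nothing new.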
NASA Astrophysics Data System (ADS)
Wilson, J. Adam; Walton, Léo M.; Tyler, Mitch; Williams, Justin
2012-08-01
This article describes a new method of providing feedback during a brain-computer interface movement task using a non-invasive, high-resolution electrotactile vision substitution system. We compared the accuracy and movement times during a center-out cursor movement task, and found that the task performance with tactile feedback was comparable to visual feedback for 11 participants. These subjects were able to modulate the chosen BCI EEG features during both feedback modalities, indicating that the type of feedback chosen does not matter provided that the task information is clearly conveyed through the chosen medium. In addition, we tested a blind subject with the tactile feedback system, and found that the training time, accuracy, and movement times were indistinguishable from results obtained from subjects using visual feedback. We believe that BCI systems with alternative feedback pathways should be explored, allowing individuals with severe motor disabilities and accompanying reduced visual and sensory capabilities to effectively use a BCI.
A universal deep learning approach for modeling the flow of patients under different severities.
Jiang, Shancheng; Chin, Kwai-Sang; Tsui, Kwok L
2018-02-01
The Accident and Emergency Department (A&ED) is the frontline for providing emergency care in hospitals. Unfortunately, relative A&ED resources have failed to keep up with continuously increasing demand in recent years, which leads to overcrowding in A&ED. Knowing the fluctuation of patient arrival volume in advance is a significant premise to relieve this pressure. Based on this motivation, the objective of this study is to explore an integrated framework with high accuracy for predicting A&ED patient flow under different triage levels, by combining a novel feature selection process with deep neural networks. Administrative data is collected from an actual A&ED and categorized into five groups based on different triage levels. A genetic algorithm (GA)-based feature selection algorithm is improved and implemented as a pre-processing step for this time-series prediction problem, in order to explore key features affecting patient flow. In our improved GA, a fitness-based crossover is proposed to maintain the joint information of multiple features during iterative process, instead of traditional point-based crossover. Deep neural networks (DNN) is employed as the prediction model to utilize their universal adaptability and high flexibility. In the model-training process, the learning algorithm is well-configured based on a parallel stochastic gradient descent algorithm. Two effective regularization strategies are integrated in one DNN framework to avoid overfitting. All introduced hyper-parameters are optimized efficiently by grid-search in one pass. As for feature selection, our improved GA-based feature selection algorithm has outperformed a typical GA and four state-of-the-art feature selection algorithms (mRMR, SAFS, VIFR, and CFR). 
As for the prediction accuracy of the proposed integrated framework, compared with other frequently used statistical models (GLM, seasonal ARIMA, ARIMAX, and ANN) and modern machine learning models (SVM-RBF, SVM-linear, RF, and R-LASSO), the proposed integrated "DNN-I-GA" framework achieves higher prediction accuracy on both MAPE and RMSE metrics in pairwise comparisons. The contribution of our study is two-fold. Theoretically, the traditional GA-based feature selection process is improved to have fewer hyper-parameters and higher efficiency, and the joint information of multiple features is maintained by the fitness-based crossover operator. The universal property of the DNN is further enhanced by merging different regularization strategies. Practically, features selected by our improved GA can be used to uncover an underlying relationship between patient flows and input features. Predicted values are significant indicators of patients' demand and can be used by A&ED managers for resource planning and allocation. The high accuracy achieved by the present framework in different cases enhances the reliability of downstream decision making. Copyright © 2017 Elsevier B.V. All rights reserved.
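The abstract names a fitness-based crossover but not its exact form; a hypothetical sketch of one way to realize the idea on binary feature masks, biasing inheritance toward the fitter parent so that its co-selected features tend to survive as a block, is:

```python
import random

def fitness_based_crossover(parent_a, parent_b, fit_a, fit_b, seed=0):
    # Hypothetical sketch (not the paper's exact operator): each gene
    # is inherited from a parent with probability proportional to that
    # parent's fitness, instead of cutting at fixed crossover points.
    rng = random.Random(seed)
    p_a = fit_a / (fit_a + fit_b)
    return [a if rng.random() < p_a else b
            for a, b in zip(parent_a, parent_b)]
```

Compared with point-based crossover, a parent-proportional rule like this never splits the chromosome at arbitrary positions, which is the property the abstract highlights for preserving joint feature information.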
Survey methods for assessing land cover map accuracy
Nusser, S.M.; Klaas, E.E.
2003-01-01
The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but also on aspects of the data collection process, such as gaining the cooperation of landowners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. The recruitment methods used for private landowners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.
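The design-based estimation behind such a stratified assessment can be sketched in simplified form (one-stage sampling and hypothetical numbers, not the study's actual two-stage design): each stratum's sample accuracy is weighted by the stratum's share of the map.

```python
def stratified_accuracy(strata):
    # strata: list of (N_h, n_h, correct_h) tuples, where N_h is the
    # number of map pixels in stratum h, n_h the pixels sampled from
    # it, and correct_h the samples classified correctly. Returns the
    # stratum-weighted overall accuracy estimate.
    total = sum(N for N, _, _ in strata)
    return sum(N * correct / n for N, n, correct in strata) / total
```

Weighting by stratum size is what makes the estimate statistically valid for the whole map even when rare classes are deliberately oversampled, as they are in designs like the one described above.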
A Sounding Rocket Experiment for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP)
NASA Astrophysics Data System (ADS)
Kubo, M.; Kano, R.; Kobayashi, K.; Bando, T.; Narukage, N.; Ishikawa, R.; Tsuneta, S.; Katsukawa, Y.; Ishikawa, S.; Suematsu, Y.; Hara, H.; Shimizu, T.; Sakao, T.; Ichimoto, K.; Goto, M.; Holloway, T.; Winebarger, A.; Cirtain, J.; De Pontieu, B.; Casini, R.; Auchère, F.; Trujillo Bueno, J.; Manso Sainz, R.; Belluzzi, L.; Asensio Ramos, A.; Štěpán, J.; Carlsson, M.
2014-10-01
A sounding-rocket experiment called the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is presently under development to measure the linear polarization profiles of the hydrogen Lyman-alpha (Lyα) line at 121.567 nm. CLASP is a vacuum-UV (VUV) spectropolarimeter aiming at the first detection of the linear polarization caused by scattering processes and the Hanle effect in the Lyα line with high accuracy (0.1%). This is a first step toward the exploration of magnetic fields in the upper chromosphere and transition region of the Sun. Accurate measurements of the linear polarization signals caused by scattering processes and the Hanle effect in strong UV lines like Lyα are essential to explore with future solar telescopes the strength and structure of the magnetic field in the upper chromosphere and transition region of the Sun. The CLASP proposal was accepted by NASA in 2012, and the flight is planned for 2015.
Using convolutional neural networks to explore the microbiome.
Reiman, Derek; Metwally, Ahmed; Yang Dai
2017-07-01
The microbiome has been shown to have an impact on the development of various diseases in the host. Being able to make an accurate prediction of the phenotype of a genomic sample based on its microbial taxonomic abundance profile is an important problem for personalized medicine. In this paper, we examine the potential of using a deep learning framework, a convolutional neural network (CNN), for such a prediction. To facilitate the CNN learning, we explore the structure of abundance profiles by creating the phylogenetic tree and by designing a scheme to embed the tree in a matrix that retains the spatial relationship of nodes in the tree and their quantitative characteristics. The proposed CNN framework is highly accurate, achieving 99.47% accuracy in an evaluation on a dataset of 1967 samples of three phenotypes. Our result demonstrates the feasibility and promise of CNNs in the classification of sample phenotype.
NASA Astrophysics Data System (ADS)
Shostak, Seth
2011-02-01
While modern SETI experiments are often highly sensitive, reaching detection limits of 10⁻²⁵ W m⁻² Hz⁻¹ in the radio, interstellar distances imply that if extraterrestrial societies are using isotropic or broad-beamed transmitters, the power requirements for their emissions are enormous. Indeed, isotropic transmissions to the entire Galaxy, sufficiently intense to be detectable by our current searches, would consume power comparable to the stellar insolation of an Earth-size planet. In this paper we consider how knowledge can be traded for power, and how, and to what degree, astronomical accuracy can reduce the energy costs of a comprehensive transmission program by putative extraterrestrials. Indeed, an exploration of how far this trade-off might be taken suggests that extraterrestrial transmitting strategies of civilizations only modestly more advanced than our own would be, as are our SETI receiving experiments, inexpensive enough to allow multiple efforts. We explore the consequences this supposition has for our SETI listening experiments.
Exploring Capabilities of SENTINEL-2 for Vegetation Mapping Using Random Forest
NASA Astrophysics Data System (ADS)
Saini, R.; Ghosh, S. K.
2018-04-01
Accurate vegetation mapping is essential for monitoring crops and sustainable agricultural practice. This study aims to explore the capabilities of Sentinel-2 data over Landsat-8 Operational Land Imager (OLI) data for vegetation mapping. Two combinations of the Sentinel-2 dataset have been considered: the first is a 4-band dataset at 10 m resolution consisting of the NIR, R, G and B bands, while the second is generated by stacking the four 10 m bands along with six other bands sharpened using the Gram-Schmidt algorithm. For the Landsat-8 OLI dataset, six multispectral bands have been pan-sharpened to a spatial resolution of 15 m using the Gram-Schmidt algorithm. Random Forest (RF) and the Maximum Likelihood Classifier (MLC) have been selected for classification of the images. It is found that the overall accuracies achieved by RF for the 4-band and 10-band Sentinel-2 datasets and Landsat-8 OLI are 88.38 %, 90.05 % and 86.68 % respectively, while MLC gives overall accuracies of 85.12 %, 87.14 % and 83.56 % for the 4-band and 10-band Sentinel datasets and Landsat-8 OLI respectively. Results show that the 10-band Sentinel-2 dataset gives the highest accuracy, a rise of 3.37 % for RF and 3.58 % for MLC compared to Landsat-8 OLI. All classes show improvement in accuracy, but a major rise is observed for Sugarcane, Wheat and Fodder in the Sentinel 10-band imagery. This study substantiates the fact that Sentinel-2 data can be utilized for mapping of vegetation with a good degree of accuracy compared to Landsat-8 OLI, specifically when the objective is to map a subclass of vegetation.
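The overall accuracies quoted above are the standard confusion-matrix measure: the share of validation pixels on the diagonal. A minimal sketch:

```python
def overall_accuracy(confusion):
    # confusion[i][j]: validation pixels of true class i that the
    # classifier assigned to class j. Overall accuracy is the share
    # of pixels on the diagonal (correctly classified).
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total
```

Per-class gains such as those reported for Sugarcane, Wheat and Fodder would be read off the same matrix row by row (producer's accuracy) rather than from this single aggregate number.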
Le Roux, Ronan
2015-04-01
The paper deals with the introduction of nanotechnology in biochips. Based on interviews and theoretical reflections, it explores blind spots left by technology assessment and ethical investigations. These have focused on the possible consequences of the increased diffusability of a diagnostic device, neglecting both the context of research and increased accuracy, despite the latter being a more essential feature of nanobiochip projects. Also, rather than one of many parallel aspects (technical, legal and social) of innovation processes, ethics is considered here as a ubiquitous system of choices between sometimes antagonistic values. Thus, the paper investigates what is at stake when accuracy is balanced against other practical values in different contexts. The dramatic nanotechnological increase of accuracy in biochips can raise ethical issues, since it is at odds with other values such as diffusability and reliability. But those issues will not be as revolutionary as is often claimed: neither in diagnostics, because accuracy of measurement is not accuracy of diagnosis; nor in research, because a boost in measurement accuracy is not sufficient to overcome significance-chasing malpractices. The conclusion extends to methodological recommendations.
Researches on the Orbit Determination and Positioning of the Chinese Lunar Exploration Program
NASA Astrophysics Data System (ADS)
Li, P. J.
2015-07-01
This dissertation studies the precise orbit determination (POD) and positioning of the Chinese lunar exploration spacecraft, emphasizing the variety of VLBI (very long baseline interferometry) technologies applied to deep-space exploration and their contributions to the methods and accuracies of precise orbit determination and positioning. In summary, the main contents are as follows. In this work, using the real-time data measured by the CE-2 (Chang'E-2) detector, the accuracy of orbit determination is analyzed for a domestic lunar probe under present conditions, and the role played by the VLBI tracking data is particularly reassessed through precision orbit determination experiments for CE-2. The experiments on short-arc orbit determination for the lunar probe show that the combination of the ranging and VLBI data with an arc of 15 minutes is able to improve the accuracy by 1-1.5 orders of magnitude compared to the cases using only the ranging data with an arc of 3 hours. The orbital accuracy is assessed through orbital overlap analysis, and the results show that the VLBI data contribute to CE-2's long-arc POD especially in the along-track and orbit-normal directions. For CE-2's 100 km × 100 km lunar orbit, the position errors are better than 30 meters, and for CE-2's 15 km × 100 km orbit, the position errors are better than 45 meters. The observational data with delta differential one-way ranging (ΔDOR) from CE-2's X-band monitoring and control system experiment are analyzed. It is concluded that the accuracy of the ΔDOR delay is dramatically improved, with a noise level better than 0.1 ns, and that the systematic errors are well calibrated.
Although they are unable to support the development of an independent lunar gravity model, the tracking data of CE-2 allowed the evaluation of different lunar gravity models through POD, and the accuracies are examined in terms of orbit-to-orbit solution differences for several gravity models. It is found that for the 100 km × 100 km lunar orbit, with a degree and order expansion up to 165, JPL's gravity model LP165P does not show noticeable improvement over Japan's SGM series models (100 × 100), but for the 15 km × 100 km lunar orbit, a higher degree-order model can significantly improve the orbit accuracy. After accomplishing its nominal mission, CE-2 began its extended missions, which involved the L2 mission and the 4179 Toutatis mission. During the flight of the extended missions, the regime offers very little dynamics and thus requires an extensive amount of time and tracking data in order to attain a solution. The overlap errors are computed, and it is shown that the use of VLBI measurements is able to increase the accuracy and reduce the total amount of tracking time. An orbit determination method based on polynomial fitting is proposed for CE-3's planned lunar soft-landing mission. In this method, dynamic modeling of the spacecraft is not necessary, and its noise reduction is expected to be better than that of the point positioning method by making full use of all-arc observational data. The simulation experiments and real data processing showed that the optimal description of CE-1's free-fall landing trajectory is a set of fifth-order polynomial functions for each of the position components as well as the velocity components in J2000.0. The combination of the VLBI delay, the delay-rate data, and the USB (united S-band) ranging data significantly improved the accuracy compared with the use of USB data alone. In order to determine the position of CE-3's lunar lander, a kinematic statistical method is proposed.
This method uses both ranging and VLBI measurements to the lander over a continuous arc, combined with precise knowledge of the motion of the Moon as provided by planetary ephemerides, to estimate the lander's position on the lunar surface with high accuracy. Application of a lunar digital elevation model (DEM) as a constraint in the lander positioning is helpful. The positioning method for the traverse of the lunar rover is also investigated. The integrated delay-rate method is able to achieve more precise positioning results than the point positioning method. This method provides a wide application of the VLBI data. In the automated sample return mission, lunar orbit rendezvous and docking are involved. Precise orbit determination using same-beam VLBI (SBI) measurements of two spacecraft at the same time is analyzed. The simulation results showed that the SBI data are able to improve the absolute and relative orbit accuracy for two targets by 1-2 orders of magnitude. In order to verify the simulation results and test the two-target POD software developed by SHAO (Shanghai Astronomical Observatory), the real SBI data of SELENE (Selenological and Engineering Explorer) are processed. The POD results for the Rstar and the Vstar showed that the combination of SBI data could significantly improve the accuracy for the two spacecraft, especially for the Vstar with less ranging data, for which the POD accuracy is improved by approximately one order of magnitude relative to that of the Rstar.
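The fifth-order polynomial description of the free-fall landing trajectory can be sketched as a per-component least-squares fit; the function name and the simple unweighted fit are assumptions for illustration (the dissertation's method works from VLBI/ranging observables, not directly from positions):

```python
import numpy as np

def fit_landing_trajectory(times, positions, order=5):
    # Fit each J2000.0 position component of the descent arc with a
    # fifth-order polynomial, one Polynomial per component, following
    # the description of the CE-1 free-fall trajectory above.
    positions = np.asarray(positions, dtype=float)
    return [np.polynomial.Polynomial.fit(times, positions[:, k], order)
            for k in range(positions.shape[1])]
```

Because the fit pools every observation in the arc into one smooth curve per component, random measurement noise is averaged down relative to epoch-by-epoch point positioning, which is the noise-reduction argument made in the abstract.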
Trajectories for High Specific Impulse High Specific Power Deep Space Exploration
NASA Technical Reports Server (NTRS)
Polsgrove, Tara; Adams, Robert B.; Brady, Hugh J. (Technical Monitor)
2002-01-01
Flight times and deliverable masses for electric and fusion propulsion systems are difficult to approximate. Numerical integration is required for these continuous-thrust systems. Many scientists are not equipped with the tools and expertise to conduct interplanetary and interstellar trajectory analysis for their concepts. Several charts plotting the results of well-known trajectory simulation codes were developed and are contained in this paper. These charts illustrate the dependence of time of flight and payload ratio on jet power, initial mass, specific impulse, and specific power. They are intended as a tool by which people in the propulsion community can explore the possibilities of their propulsion system concepts. Trajectories were simulated using the tools VARITOP and IPOST. VARITOP is a well-known trajectory optimization code that involves numerical integration based on the calculus of variations. IPOST has several methods of trajectory simulation; the one used in this paper is Cowell's method for full integration of the equations of motion. An analytical method derived in the companion paper was also evaluated; its accuracy is discussed in the paper.
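Cowell's method amounts to direct numerical integration of the full equations of motion, with the thrust acceleration simply added to gravity. A minimal sketch, assuming heliocentric two-body gravity plus a constant tangential thrust and a fixed-step RK4 integrator (all numbers illustrative, not from the paper's cases):

```python
import numpy as np

MU_SUN = 1.32712440018e11  # heliocentric gravitational parameter, km^3/s^2

def accel(t, state, thrust_acc):
    """Two-body gravity plus constant tangential thrust (km/s^2):
    a simplified stand-in for a continuous low-thrust trajectory model."""
    r, v = state[:3], state[3:]
    a_grav = -MU_SUN * r / np.linalg.norm(r) ** 3
    a_thrust = thrust_acc * v / np.linalg.norm(v)
    return np.concatenate([v, a_grav + a_thrust])

def rk4_step(f, t, y, h, *args):
    k1 = f(t, y, *args)
    k2 = f(t + h / 2, y + h / 2 * k1, *args)
    k3 = f(t + h / 2, y + h / 2 * k2, *args)
    k4 = f(t + h, y + h * k3, *args)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Start on a 1 AU circular orbit with a small continuous tangential thrust;
# the trajectory spirals outward as orbital energy is added.
r0 = 1.495978707e8                       # km (1 AU)
v0 = np.sqrt(MU_SUN / r0)                # circular speed, km/s
state = np.array([r0, 0.0, 0.0, 0.0, v0, 0.0])
h = 3600.0                               # 1-hour step
for _ in range(24 * 365):                # one year of flight
    state = rk4_step(accel, 0.0, state, h, 1.0e-7)

final_radius = float(np.linalg.norm(state[:3]))
```

With a thrust acceleration of 1e-7 km/s² the orbit radius grows noticeably over a year, illustrating why such trajectories cannot be treated with impulsive (Keplerian) approximations.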
Construction of Low Dissipative High Order Well-Balanced Filter Schemes for Non-Equilibrium Flows
NASA Technical Reports Server (NTRS)
Wang, Wei; Yee, H. C.; Sjogreen, Bjorn; Magin, Thierry; Shu, Chi-Wang
2009-01-01
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. [26] to a class of low dissipative high order shock-capturing filter schemes and to explore further advantages of well-balanced schemes in reacting flows. The class of filter schemes developed by Yee et al. [30], Sjoegreen & Yee [24] and Yee & Sjoegreen [35] consists of two steps: a full time step of a spatially high order non-dissipative base scheme and an adaptive nonlinear filter containing shock-capturing dissipation. A good property of the filter scheme is that the base scheme and the filter are stand-alone modules by design. Therefore, the idea of designing a well-balanced filter scheme is straightforward: choose a well-balanced base scheme together with a well-balanced filter (both of high order). A typical class of these schemes shown in this paper is the high order central difference/predictor-corrector (PC) schemes with a high order well-balanced WENO filter. The new filter scheme with the well-balanced property combines the features of both filter methods and well-balanced schemes: it can preserve certain steady-state solutions exactly; it is able to capture small perturbations, e.g., turbulence fluctuations; and it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency, and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, a 1D stationary contact discontinuity problem, and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
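The two-step modular structure (a non-dissipative base step, then an adaptive nonlinear filter) can be illustrated on linear advection. This is only a schematic of the design pattern, with a simple gradient sensor in place of the authors' well-balanced WENO filter:

```python
import numpy as np

def base_step(u, c, dt, dx):
    """Spatially non-dissipative base scheme: 2nd-order central differences
    with SSP-RK3 time stepping, periodic boundaries (u_t + c u_x = 0)."""
    def rhs(v):
        return -c * (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * dx)
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

def nonlinear_filter(u, kappa=0.25):
    """Adaptive filter step: second-difference dissipation weighted by a
    gradient sensor, so smooth regions are left nearly untouched."""
    d2 = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
    grad = np.abs(np.roll(u, -1) - np.roll(u, 1))
    sensor = grad / (grad.max() + 1e-14)
    return u + kappa * sensor * d2

# Advect a step profile: the base scheme alone would ring at the fronts;
# the filter supplies shock-capturing dissipation only where it is needed.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)
dx, c = 1.0 / n, 1.0
dt = 0.5 * dx
for _ in range(200):
    u = nonlinear_filter(base_step(u, c, dt, dx))
```

Because the two stages are independent modules, swapping in a well-balanced base scheme and a well-balanced filter, as the paper does, requires no change to the overall stepping structure.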
Design of a toroidal device with a high temperature superconductor coil for non-neutral plasma trap
NASA Astrophysics Data System (ADS)
Ogawa, Yuichi; Morikawa, Junji; Nihei, Hitoshi; Ozawa, Daisaku; Yoshida, Zensho; Mito, Toshiyuki; Yanagi, Nagato; Iwakuma, Masataka
2002-01-01
The non-neutral plasma confinement device with a floating internal coil is under construction, in which the high temperature superconductor (HTS) Ag-sheathed BSCCO-2223 is employed as the floating coil. We have two topics for this device: one is the trapping of a non-neutral plasma consisting of a single species, and the other is the exploration of a high beta plasma based on two-fluid MHD relaxation theory. In the latter case the plasma should be non-neutralized in order to drive the plasma flow in the toroidal direction. The expected plasma parameters are discussed. Key engineering elements have already been developed. In addition, we have fabricated a small HTS coil and succeeded in levitating it within an accuracy of 25-30 μm for 4 min or more.
Researches on High Accuracy Prediction Methods of Earth Orientation Parameters
NASA Astrophysics Data System (ADS)
Xu, X. Q.
2015-09-01
The Earth rotation reflects the coupling processes among the solid Earth, atmosphere, oceans, mantle, and core of the Earth on multiple spatial and temporal scales. The Earth rotation can be described by the Earth orientation parameters, abbreviated as EOP (mainly including the two polar motion components PM_X and PM_Y, and the variation in the length of day, ΔLOD). The EOP are crucial in the transformation between the terrestrial and celestial reference systems, and have important applications in many areas such as deep space exploration, satellite precise orbit determination, and astrogeodynamics. However, the EOP products obtained by space geodetic technologies are generally delayed by several days to two weeks. The growing demands of modern space navigation make high-accuracy EOP prediction a worthy topic. This thesis addresses the following three aspects, with the purpose of improving the EOP forecast accuracy. (1) We analyze the relation between the length of the basic data series and the EOP forecast accuracy, and compare the EOP prediction accuracy of the linear autoregressive (AR) model and the nonlinear artificial neural network (ANN) method by performing least squares (LS) extrapolation. The results show that high-precision forecasts of EOP can be realized by appropriate selection of the basic data series length according to the required time span of EOP prediction: for short-term prediction the basic data series should be shorter, while for long-term prediction the series should be longer. The analysis also showed that the LS+AR model is more suitable for short-term forecasts, while the LS+ANN model shows advantages in medium- and long-term forecasts. (2) We develop for the first time a new method which combines the autoregressive model and the Kalman filter (AR+Kalman) for short-term EOP prediction. 
The observation and state equations are established using the EOP series and the autoregressive coefficients respectively, and are used to improve and re-evaluate the AR model. Compared to the single AR model, the AR+Kalman method performs better in the prediction of UT1-UTC and ΔLOD, and the improvement in the prediction of polar motion is significant. (3) Following the successful Earth Orientation Parameter Prediction Comparison Campaign (EOP PCC), the Earth Orientation Parameter Combination of Prediction Pilot Project (EOPC PPP) was launched in 2010. As one of the participants from China, we update and submit short- and medium-term (1 to 90 days) EOP predictions every day. According to the current comparative statistics, our prediction accuracy is at a medium international level. We will carry out more innovative research to improve the EOP forecast accuracy and enhance our standing in EOP forecasting.
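The AR part of the LS+AR approach can be sketched in a few lines: fit the autoregressive coefficients by least squares, then iterate one-step predictions. The periodic toy series below stands in for an EOP residual after LS detrending; it is not real EOP data:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares fit of an AR(p) model
    x_t = a_1 x_{t-1} + ... + a_p x_{t-p}."""
    n = len(series)
    X = np.column_stack([series[p - k: n - k] for k in range(1, p + 1)])
    coeffs, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coeffs

def predict_ar(series, coeffs, steps):
    """Iterate one-step AR predictions, feeding forecasts back in."""
    buf = list(series[-len(coeffs):])
    out = []
    for _ in range(steps):
        nxt = sum(a * buf[-k] for k, a in enumerate(coeffs, start=1))
        out.append(nxt)
        buf.append(nxt)
    return np.array(out)

# A pure sinusoid is represented exactly by an AR(2) model
# (x_t = 2 cos(w) x_{t-1} - x_{t-2}), so the forecast is essentially exact.
t = np.arange(150)
x = np.sin(2 * np.pi * t / 30.0)
a = fit_ar(x, p=2)
pred = predict_ar(x, a, steps=10)
```

Real EOP residuals are of course noisier than a sinusoid, which is where the Kalman re-evaluation of the AR coefficients described above earns its keep.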
NASA Astrophysics Data System (ADS)
Erener, A.
2013-04-01
Automatic extraction of urban features from high resolution satellite images is one of the main applications in remote sensing. It is useful for wide-scale applications, namely urban planning, urban mapping, disaster management, GIS (geographic information systems) updating, and military target detection. One common approach to detecting urban features from high resolution images is to use automatic classification methods. This paper has four main objectives with respect to detecting buildings. The first objective is to compare the performance of the most notable supervised classification algorithms, including the maximum likelihood classifier (MLC) and the support vector machine (SVM). In this experiment the primary consideration is the impact of kernel configuration on the performance of the SVM. The second objective of the study is to explore the suitability of integrating additional bands, namely the first principal component (1st PC) and the intensity image, with the original data in multiple classification approaches. The performance evaluation of classification results is done using two different accuracy assessment methods, pixel based and object based, which reflects the third aim of the study: to demonstrate the differences in the evaluation of accuracies of classification methods. For consistency, the same set of ground truth data, produced by labeling the building boundaries in the GIS environment, is used for accuracy assessment. Lastly, the fourth aim is to experimentally evaluate the variation in the accuracy of the classifiers for six different real situations, in order to identify the impact of spatial and spectral diversity on the results. The method is applied to Quickbird images for various urban complexity levels, extending from simple to complex urban patterns. The simple surface type includes a regular urban area with low density and systematically arranged buildings with brick rooftops. 
The complex surface type involves almost all kinds of challenges, such as highly dense built-up areas, regions with bare soil, and small and large buildings with different rooftops, such as concrete, brick, and metal. Using the pixel based accuracy assessment it was shown that the percent building detection (PBD) and quality percent (QP) of the MLC and SVM depend on the complexity and texture variation of the region. Generally, PBD values range between 70% and 90% for the MLC and SVM, respectively. No substantial improvements were observed when the SVM and MLC classifications were developed with the addition of more variables instead of using only the four bands. In the evaluation of the object based accuracy assessment, it was demonstrated that while the MLC and SVM provide higher rates of correct detection, they also provide higher rates of false alarms.
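The MLC referenced above is the standard Gaussian maximum likelihood classifier over spectral bands. A compact sketch on synthetic 4-band pixels (the class statistics are invented for illustration, not drawn from the Quickbird scenes):

```python
import numpy as np

class MaximumLikelihoodClassifier:
    """Gaussian maximum likelihood classifier over spectral bands."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            cov = np.cov(Xc, rowvar=False)
            self.params_[c] = (Xc.mean(axis=0),
                               np.linalg.inv(cov),
                               np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mean, inv_cov, logdet = self.params_[c]
            d = X - mean
            maha = np.einsum('ij,jk,ik->i', d, inv_cov, d)
            scores.append(-0.5 * (maha + logdet))  # log-likelihood up to a constant
        return self.classes_[np.argmax(scores, axis=0)]

# Synthetic well-separated "building" vs "other" pixels in four bands.
rng = np.random.default_rng(1)
n = 500
building = rng.normal([120, 110, 100, 140], 8.0, size=(n, 4))
other = rng.normal([80, 95, 70, 60], 8.0, size=(n, 4))
X = np.vstack([building, other])
y = np.repeat([1, 0], n)

clf = MaximumLikelihoodClassifier().fit(X, y)
accuracy = float((clf.predict(X) == y).mean())     # pixel-based accuracy
```

On real imagery the classes overlap far more than in this toy setting, which is why the paper's PBD values fall in the 70-90% range rather than near 100%.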
Competency-based assessment in surgeon-performed head and neck ultrasonography: A validity study.
Todsen, Tobias; Melchiors, Jacob; Charabi, Birgitte; Henriksen, Birthe; Ringsted, Charlotte; Konge, Lars; von Buchwald, Christian
2018-06-01
Head and neck ultrasonography (HNUS) is increasingly used as a point-of-care diagnostic tool by otolaryngologists. However, ultrasonography (US) is a very operator-dependent imaging modality. Hence, this study aimed to explore the diagnostic accuracy of surgeon-performed HNUS and to establish validity evidence for an objective structured assessment of ultrasound skills (OSAUS) used for competency-based assessment. A prospective experimental study. Six otolaryngologists and 11 US novices were included in a standardized test setup in which they had to perform focused HNUS of eight patients suspected of different head and neck lesions. Their diagnostic accuracy was calculated based on the US reports, and two blinded raters assessed the video-recorded US performance using the OSAUS scale. The otolaryngologists obtained a high diagnostic accuracy of 88% (range 63%-100%) compared to the US novices at 38% (range 0-63%); P < 0.001. The OSAUS score demonstrated good inter-case reliability (0.85) and inter-rater reliability (0.76), and significant discrimination between otolaryngologists and US novices; P < 0.001. A strong correlation between the OSAUS score and diagnostic accuracy was found (Spearman's ρ = 0.85; P < 0.001), and a pass/fail score was established at 2.8. Strong validity evidence supported the use of the OSAUS scale to assess HNUS competence, with good reliability, significant discrimination between US competence levels, and a strong correlation of the assessment score with diagnostic accuracy. An OSAUS pass/fail score was established and could be used for competency-based assessment in surgeon-performed HNUS. NA. Laryngoscope, 128:1346-1352, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Modeling diurnal land temperature cycles over Los Angeles using downscaled GOES imagery
NASA Astrophysics Data System (ADS)
Weng, Qihao; Fu, Peng
2014-11-01
Land surface temperature (LST) is a key parameter for monitoring urban heat islands, assessing heat-related risks, and estimating building energy consumption. These environmental issues are characterized by high temporal variability. A possible solution from the remote sensing perspective is to utilize geostationary satellite images, for instance images from the Geostationary Operational Environmental Satellite (GOES) system and Meteosat Second Generation (MSG). The coarse spatial but high temporal resolution of these satellite systems (sub-hourly imagery at 3-10 km resolution), however, often limits their usage to meteorological forecasting and global climate modeling. Therefore, efficient and effective methods to disaggregate these coarse resolution images to a scale suitable for regional and local studies need to be explored. In this study, we propose a least squares support vector machine (LSSVM) method to downscale GOES image data to half-hourly 1-km LSTs by fusing it with MODIS data products and Shuttle Radar Topography Mission (SRTM) digital elevation data. The downscaling result suggests that the proposed method successfully disaggregated GOES images to half-hourly 1-km LSTs with an accuracy of approximately 2.5 K when validated against MODIS LSTs at the same overpass time. The synthetic LST datasets were further explored for monitoring the surface urban heat island (UHI) in the Los Angeles region by extracting key diurnal temperature cycle (DTC) parameters. It is found that the datasets and the DTC-derived parameters were more suitable for monitoring the daytime rather than the nighttime UHI. With the downscaled GOES 1-km LSTs, the diurnal temperature variations can be well characterized. An accuracy of about 2.5 K was achieved in terms of the fitted results at both 1 km and 5 km resolutions.
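LSSVM regression reduces to solving a regularized linear system in the kernel matrix. The sketch below handles the bias by centring the targets (a simplification of the full LSSVM system with an explicit bias term), and the predictors are invented stand-ins for fine-scale covariates such as elevation; it illustrates the regression engine, not the authors' fusion pipeline:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, gamma=10.0, lam=1e-4):
    """LSSVM regression in its ridge form: solve (K + lam*I) alpha = y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def lssvm_predict(X_train, alpha, X_new, gamma=10.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy setting: two scaled fine-resolution predictors mapped to a smooth
# synthetic LST field in kelvin, mimicking the coarse-to-fine regression.
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(200, 2))
lst = 290.0 + 8.0 * np.sin(3.0 * X[:, 0]) - 5.0 * X[:, 1]
y_mean = lst.mean()
alpha = lssvm_fit(X, lst - y_mean)
pred = lssvm_predict(X, alpha, X) + y_mean
rmse = float(np.sqrt(np.mean((pred - lst) ** 2)))
```

In the actual application the model is trained at the coarse GOES scale and then applied to the 1-km predictors, a standard trick in statistical downscaling.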
JASMINE: galactic structure surveyor
NASA Astrophysics Data System (ADS)
Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki; Yano, Taihei; Tsujimoto, Takuji; Suganuma, Masahiro; Niwa, Yoshito; Yamauchi, Masahiro; Kawakatsu, Yasuhiro; Matsuhara, Hideo; Noda, Atsushi; Tsuiki, Atsuo; Utashima, Masayoshi; Ogawa, Akira
2006-06-01
We introduce a Japanese plan for infrared (z-band: 0.9 μm) space astrometry, the JASMINE project. JASMINE (Japan Astrometry Satellite Mission for INfrared Exploration) is a satellite that will measure distances and apparent motions of stars around the center of the Milky Way with as yet unprecedented precision. It will measure parallaxes and positions with an accuracy of 10 micro-arcsec and proper motions with an accuracy of ~4 micro-arcsec/year for stars brighter than z = 14 mag. JASMINE can observe about ten million stars belonging to the bulge components of our Galaxy, which are hidden by interstellar dust extinction in optical bands. The number of stars with σ/π < 0.1 in the direction of the Galactic central bulge is about 1000 times larger than that observed in optical bands, where π is the parallax and σ is its error. With the completely new "map of the bulge in the Milky Way", it is expected that many new exciting scientific results will be obtained in various fields of astronomy. Presently, JASMINE is in a development phase, with a target launch date around 2015. We adopt the following instrument design of JASMINE in order to obtain accurate positions of many stars. A three-mirror optical system (modified Korsch system) with a primary mirror of ~0.85 m is one of the candidates for the optical system. On the astrometric focal plane, we put dozens of a new type of CCD for the z-band to obtain a wide field of view. Accurate measurement of the astrometric parameters requires high stability of the instrument line of sight and of the opto-mechanics of the payload in the JASMINE spacecraft. The overall system (bus) design is now under consideration in cooperation with the Japan Aerospace Exploration Agency (JAXA).
Thermal conductivity of high purity synthetic single crystal diamonds
NASA Astrophysics Data System (ADS)
Inyushkin, A. V.; Taldenkov, A. N.; Ralchenko, V. G.; Bolshakov, A. P.; Koliadin, A. V.; Katrusha, A. N.
2018-04-01
Thermal conductivity of three high purity synthetic single crystalline diamonds has been measured with high accuracy at temperatures from 6 to 410 K. The crystals, grown by chemical vapor deposition and by the high-pressure high-temperature technique, demonstrate almost identical temperature dependencies κ(T) and high values of thermal conductivity, up to 24 W cm⁻¹ K⁻¹ at room temperature. At the conductivity maximum near 63 K, the magnitude of thermal conductivity reaches 285 W cm⁻¹ K⁻¹, the highest value ever measured for diamonds with the natural carbon isotope composition. Experimental data were fitted with the classical Callaway model for the lattice thermal conductivity. A set of expressions for the anharmonic phonon scattering processes (normal and umklapp) has been proposed which gives an excellent fit to the experimental κ(T) data over almost the whole temperature range explored. The model predicts a strong isotope effect, nearly 45%, and a high thermal conductivity (>24 W cm⁻¹ K⁻¹) for defect-free diamond with the natural isotopic abundance at room temperature.
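For reference, the leading term of the Callaway model evaluates the lattice conductivity in its standard Debye form (this is the textbook expression, not the paper's specific parameterization of the scattering rates):

```latex
\kappa(T) = \frac{k_B}{2\pi^2 v}\left(\frac{k_B T}{\hbar}\right)^{3}
\int_0^{\theta_D/T} \tau_c(x)\,\frac{x^4 e^x}{(e^x-1)^2}\,dx,
\qquad x = \frac{\hbar\omega}{k_B T},
```

where $v$ is an average phonon velocity, $\theta_D$ the Debye temperature, and the combined relaxation time follows Matthiessen's rule, $\tau_c^{-1} = \tau_N^{-1} + \tau_U^{-1} + \tau_{\mathrm{iso}}^{-1} + \tau_B^{-1}$ (normal, umklapp, isotope, and boundary scattering), supplemented by Callaway's correction term accounting for the momentum-conserving character of normal processes.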
NASA Astrophysics Data System (ADS)
Castillo-López, Elena; Dominguez, Jose Antonio; Pereda, Raúl; de Luis, Julio Manuel; Pérez, Ruben; Piña, Felipe
2017-10-01
Accurate determination of water depth is indispensable in multiple aspects of civil engineering (dock construction, dikes, submarine outfalls, trench control, etc.). Different accuracies are required to determine the type of atmospheric correction most appropriate for depth estimation. The accuracy of bathymetric information is highly dependent on the atmospheric correction applied to the imagery. The reduction of effects such as glint and cross-track illumination in homogeneous shallow-water areas improves the results of the depth estimations. The aim of this work is to assess the best atmospheric correction method for the estimation of depth in shallow waters, considering that reflectance values cannot be greater than 1.5% because otherwise the bottom would not be seen. This paper addresses the use of hyperspectral imagery for quantitative bathymetric mapping and explores one of the most common problems when attempting to extract depth information under conditions of variable water types and bottom reflectances. The current work assesses the accuracy of some classical bathymetric algorithms (Polcyn-Lyzenga, Philpot, Benny-Dawson, Hamilton, principal component analysis) when four different atmospheric correction methods are applied and water depth is derived. No atmospheric correction is valid for all types of coastal waters, but in heterogeneous shallow water the 6S atmospheric correction model offers good results.
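The Polcyn-Lyzenga family of algorithms assessed above is log-linear in the deep-water-corrected radiances. A self-contained sketch with synthetic two-band data (all constants invented) shows the calibration against known depths and the inversion:

```python
import numpy as np

def lyzenga_depth(L, L_deep, coeffs):
    """Classical Polcyn-Lyzenga log-linear bathymetry model:
    z = a0 + sum_i a_i * ln(L_i - L_deep_i). Coefficients must be
    calibrated against known depths; all numbers here are synthetic."""
    return coeffs[0] + np.log(L - L_deep) @ coeffs[1:]

# Synthetic two-band radiances following an exponential attenuation law.
rng = np.random.default_rng(2)
z = rng.uniform(1.0, 15.0, 100)                          # depths, m
L_deep = np.array([0.02, 0.01])                          # optically deep signal
k = np.array([0.30, 0.25])                               # bottom contribution
K = np.array([0.08, 0.12])                               # attenuation, 1/m
L = L_deep + k * np.exp(-2.0 * K * z[:, None])

# Calibrate a0, a1, a2 by least squares against the known depths.
A = np.column_stack([np.ones_like(z), np.log(L - L_deep)])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
z_hat = lyzenga_depth(L, L_deep, coeffs)
```

Since ln(L - L_deep) is linear in depth under this attenuation model, the inversion is exact here; with real imagery, residual atmospheric and glint effects are precisely what degrades the fit, which is the paper's point.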
JASMINE: constructor of the dynamical structure of the Galactic bulge
NASA Astrophysics Data System (ADS)
Gouda, N.; Kobayashi, Y.; Yamada, Y.; Yano, T.; Tsujimoto, T.; Suganuma, M.; Niwa, Y.; Yamauchi, M.
2008-07-01
We introduce a Japanese space astrometry project called JASMINE. JASMINE (Japan Astrometry Satellite Mission for INfrared Exploration) will measure distances and tangential motions of stars in the Galactic bulge with as yet unprecedented precision. JASMINE will operate in the z-band, whose central wavelength is 0.9 micron. It will measure parallaxes and positions with an accuracy of about 10 micro-arcsec and proper motions with an accuracy of about 10 micro-arcsec/year for stars brighter than z = 14 mag. The number of stars observed by JASMINE with high parallax accuracy in the Galactic bulge is much larger than that observed by other space astrometry projects operating in optical bands. With the completely new “map of the Galactic bulge”, including motions of bulge stars, we expect that many new exciting scientific results will be obtained in studies of the Galactic bulge. One of them is the construction of the dynamical structure of the Galactic bulge. The kinematics and distance data given by JASMINE are the closest approach to a view of the exact dynamical structure of the Galactic bulge. Presently, JASMINE is in a development phase, with a target launch date around 2016. We comment on the outline of the JASMINE mission, its scientific targets, and a preliminary design of JASMINE in this paper.
Applications of random forest feature selection for fine-scale genetic population assignment.
Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G
2018-02-01
Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with F_ST ranking for the selection of single nucleotide polymorphisms (SNPs) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than F_ST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using F_ST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for the management and conservation of exploited populations.
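The baseline in this comparison, differentiation-ranked marker selection followed by self-assignment, can be sketched with a simple allele-frequency score and a nearest-mean rule. Both are stand-ins for the Weir-Cockerham F_ST estimator and the assignment software actually used, and the data are simulated:

```python
import numpy as np

def fst_rank(genotypes, pops):
    """Rank SNPs by squared between-population allele-frequency
    difference (a crude stand-in for Weir-Cockerham F_ST)."""
    p0 = genotypes[pops == 0].mean(axis=0) / 2.0
    p1 = genotypes[pops == 1].mean(axis=0) / 2.0
    return np.argsort(-(p0 - p1) ** 2)

def assign(genotypes, pops, panel):
    """Self-assignment by nearest population mean over the panel
    (illustrative only; not a likelihood-based assignment test)."""
    g = genotypes[:, panel]
    d0 = ((g - g[pops == 0].mean(axis=0)) ** 2).sum(axis=1)
    d1 = ((g - g[pops == 1].mean(axis=0)) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

# Two simulated populations: 500 SNPs, of which the first 20 are informative.
rng = np.random.default_rng(4)
n_ind, n_snp, n_info = 100, 500, 20
p0 = np.full(n_snp, 0.5)
p1 = p0.copy()
p1[:n_info] = 0.9
genotypes = np.vstack([rng.binomial(2, p0, size=(n_ind, n_snp)),
                       rng.binomial(2, p1, size=(n_ind, n_snp))])
pops = np.repeat([0, 1], n_ind)

panel = fst_rank(genotypes, pops)[:n_info]        # top-20 marker panel
accuracy = float((assign(genotypes, pops, panel) == pops).mean())
```

The random-forest methods in the paper improve on this baseline mainly by capturing joint information across markers that univariate ranking ignores.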
Variation and Likeness in Ambient Artistic Portraiture.
Hayes, Susan; Rheinberger, Nick; Powley, Meagan; Rawnsley, Tricia; Brown, Linda; Brown, Malcolm; Butler, Karen; Clarke, Ann; Crichton, Stephen; Henderson, Maggie; McCosker, Helen; Musgrave, Ann; Wilcock, Joyce; Williams, Darren; Yeaman, Karin; Zaracostas, T S; Taylor, Adam C; Wallace, Gordon
2018-06-01
An artist-led exploration of portrait accuracy and likeness involved 12 Artists producing 12 portraits referencing a life-size 3D print of the same Sitter. The works were assessed during a public exhibition, and the resulting likeness assessments were compared to portrait accuracy as measured using geometric morphometrics (statistical shape analysis). Our results are that, independently of the assessors' prior familiarity with the Sitter's face, the likeness judgements tended to be higher for less morphologically accurate portraits. The two highest rated were the portrait that most exaggerated the Sitter's distinctive features, and a portrait that was a more accurate (but not the most accurate) depiction. In keeping with research showing photograph likeness assessments involve recognition, we found familiar assessors rated the two highest ranked portraits even higher than those with some or no familiarity. In contrast, those lacking prior familiarity with the Sitter's face showed greater favour for the portrait with the highest morphological accuracy, and therefore most likely engaged in face-matching with the exhibited 3D print. Furthermore, our research indicates that abstraction in portraiture may not enhance likeness, and we found that when our 12 highly diverse portraits were statistically averaged, this resulted in a portrait that is more morphologically accurate than any of the individual artworks comprising the average.
Obert, Martin; Kubelt, Carolin; Schaaf, Thomas; Dassinger, Benjamin; Grams, Astrid; Gizewski, Elke R; Krombach, Gabriele A; Verhoff, Marcel A
2013-05-10
The objective of this article was to explore age-at-death estimates in forensic medicine that are methodically based on age-dependent, radiologically defined bone-density (HC) decay and investigated with a standard clinical computed tomography (CT) system. Such density decay was formerly discovered with a high-resolution flat-panel CT in the skulls of adult females. The development of a standard-CT methodology for age estimation, with thousands of installations, would have the advantage of being applicable everywhere, whereas only a few flat-panel prototype CT systems are in use worldwide. A multi-slice CT scanner (MSCT) was used to obtain 22,773 images from 173 European human skulls (89 male, 84 female), taken from a population of patients of the Department of Neuroradiology at the University Hospital Giessen and Marburg during 2010 and 2011. An automated image analysis was carried out to evaluate HC in all images. The age dependence of HC was studied by correlation analysis, and the prediction accuracy of age-at-death estimates was calculated. Computer simulations were carried out to explore the influence of noise on the accuracy of age predictions. Human skull HC values scatter strongly as a function of age for both sexes. Adult male skull bone density remains constant during the lifetime. Adult female HC decays during the lifetime, as indicated by a correlation coefficient (CC) of -0.53. Prediction errors for age-at-death estimates for both of the scanners used are in the range of ±18 years at a 75% confidence interval (CI). Computer simulations indicate that this is the best that can be expected for such noisy data. Our results indicate that HC decay is indeed present in adult females and that it can be demonstrated both by standard and by high-resolution CT methods, applied to different subject groups of an identical population. 
The weak correlation between HC and age found by both CT methods only enables age-at-death estimates of limited practical relevance, since the errors of the estimates are large. Computer simulations clearly indicate that data with less noise, with CCs on the order of -0.97 or stronger, would be necessary to enable age-at-death estimates with an accuracy of ±5 years at a 75% CI. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
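The link between the correlation coefficient and the achievable prediction error can be reproduced with a small Monte Carlo simulation. Assuming a Gaussian age distribution (the 20-year standard deviation is an invented stand-in for the study population), a CC near 0.53 yields roughly ±17-20 years at a 75% CI, while 0.97 yields about ±5-6 years:

```python
import numpy as np

def age_ci_halfwidth(r, age_sd=20.0, n=100_000, conf=0.75, seed=0):
    """Half-width of the central `conf` interval of age-prediction
    residuals for a linear predictor correlated with age at level r.
    Gaussian assumptions and age_sd are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    age = rng.normal(0.0, age_sd, n)
    # Construct a predictor (e.g. HC) with unit variance and correlation r.
    hc = r * age / age_sd + rng.normal(0.0, np.sqrt(1.0 - r ** 2), n)
    cov = np.cov(age, hc)
    resid = age - (cov[0, 1] / cov[1, 1]) * hc       # linear-regression residuals
    lo, hi = np.quantile(resid, [(1 - conf) / 2, (1 + conf) / 2])
    return (hi - lo) / 2

w_weak = age_ci_halfwidth(0.53)    # roughly +/- 19 years
w_strong = age_ci_halfwidth(0.97)  # roughly +/- 6 years
```

This follows directly from the residual standard deviation being age_sd * sqrt(1 - r²): no amount of modeling recovers precision that the correlation itself does not carry.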
ERIC Educational Resources Information Center
Starkey-Perret, Rebecca; Belan, Sophie; Lê Ngo, Thi Phuong; Rialland, Guillaume
2017-01-01
This chapter presents and discusses the results of a large-scale pilot study carried out in the context of a task-based, blended-learning Business English programme in the Foreign Languages and International Trade department of a French university. It seeks to explore the effects of pre-task planned Focus on Form (FonF) on accuracy in students'…
Libration Point Navigation Concepts Supporting the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Folta, David C.; Moreau, Michael C.; Quinn, David A.
2004-01-01
This work examines the autonomous navigation accuracy achievable for a lunar exploration trajectory from a translunar libration point lunar navigation relay satellite, augmented by signals from the Global Positioning System (GPS). We also provide a brief analysis comparing the libration point relay to lunar orbit relay architectures, and discuss some issues of GPS usage for cis-lunar trajectories.
Development of a Coherent Lidar for Aiding Precision Soft Landing on Planetary Bodies
NASA Technical Reports Server (NTRS)
Amzajerdian, Farzin; Pierrottet, Diego; Tolson, Robert H.; Powell, Richard W.; Davidson, John B.; Peri, Frank
2005-01-01
Coherent lidar can play a critical role in future planetary exploration missions by providing key guidance, navigation, and control (GNC) data necessary for navigating planetary landers to a pre-selected site and achieving autonomous, safe soft landing. Although landing accuracy has steadily improved over time, to approximately 35 km for the recent Mars Exploration Rovers thanks to better approach navigation, a drastically different guidance, navigation and control concept is required to meet future mission requirements. For example, future rovers will require better than 6 km landing accuracy for Mars and better than 1 km for the Moon, plus maneuvering capability to avoid hazardous terrain features. For this purpose, an all-fiber coherent lidar is being developed to address the call for advancement of entry, descent, and landing technologies. This lidar will be capable of providing precision range to the ground and approach velocity data, and in the case of landing on Mars, it will also measure the atmospheric wind and density. The lidar obtains high resolution range information from a frequency-modulated continuous-wave (FM-CW) laser beam whose instantaneous frequency varies linearly with time, and the ground vector velocity is extracted directly from the Doppler frequency shift. Utilizing the high concentration of aerosols in the Mars atmosphere (approximately two orders of magnitude higher than on Earth), the lidar can measure wind velocity with a few watts of optical power. Operating in the 1.57 micron wavelength regime, the lidar can use the differential absorption (DIAL) technique to measure the average CO2 concentration along the laser beam, which is directly proportional to the Martian atmospheric density. Employing fiber-optic components allows for multi-functional operation of the lidar while facilitating a highly efficient, compact, and reliable design suitable for integration into a spacecraft with limited mass, size, and power resources.
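The two FM-CW relations the abstract relies on, range from the beat frequency of a linear chirp and line-of-sight velocity from the Doppler shift, are compact. The chirp bandwidth and duration below are illustrative values, not the instrument's actual parameters:

```python
C = 299_792_458.0          # speed of light, m/s
WAVELENGTH = 1.57e-6       # operating wavelength, m

def fmcw_range(f_beat, chirp_bandwidth, chirp_duration):
    """Range from the beat frequency of a linear FM-CW chirp:
    R = c * f_beat / (2 * slope), with chirp slope = B / T."""
    slope = chirp_bandwidth / chirp_duration
    return C * f_beat / (2.0 * slope)

def doppler_velocity(f_doppler):
    """Line-of-sight velocity from the Doppler shift: v = lambda * f_D / 2."""
    return WAVELENGTH * f_doppler / 2.0

# A hypothetical 1 GHz, 1 ms chirp and a target at 1500 m give a beat
# frequency near 10 MHz; the round-trip relations invert each other exactly.
beat = 2.0 * 1500.0 / C * (1.0e9 / 1.0e-3)
```

The range measurement thus reduces to a frequency measurement, which is why FM-CW systems achieve high range resolution with modest continuous-wave power.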
Validation of maternal reports for low birthweight and preterm birth indicators in rural Nepal.
Chang, Karen T; Mullany, Luke C; Khatry, Subarna K; LeClerq, Steven C; Munos, Melinda K; Katz, Joanne
2018-06-01
Tracking progress towards global newborn health targets depends largely on maternally reported data collected through large, nationally representative surveys. We evaluated the validity, across a range of recall period lengths (1 to 24 months post-delivery), of maternal reports of birthweight, birth size, and length of pregnancy. We compared maternal reports to reference standards of birthweights measured within 72 hours of delivery and gestational age derived from the reported first day of the last menstrual period (LMP), prospectively collected as part of a population-based study (n = 1502). We calculated sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) as measures of individual-level accuracy, and the inflation factor (IF) to quantify population-level bias for each indicator. We assessed whether the length of the recall period modified accuracy by stratifying measurements across time bins and using a modified Poisson regression with robust error variance to estimate the relative risk (RR) of correctly classifying newborns as low birthweight (LBW) or preterm, adjusting for child sex, place of delivery, maternal age, maternal education, parity, and ethnicity. The LBW indicator using maternally reported birthweight in grams had low individual-level accuracy (AUC = 0.69) and high population-level bias (IF = 0.62). LBW based on maternally reported birth size and the preterm birth indicator had lower individual-level accuracy (AUC = 0.58 and 0.56, respectively) and higher population-level bias (IF = 0.28 and 0.35, respectively) up to 24 months following birth. Length of the recall period did not affect the accuracy of the LBW indicators. For the preterm birth indicator, accuracy did not change with length of recall up to 20 months after birth and improved slightly beyond 20 months. The use of maternal reports may underestimate and bias indicators for LBW and preterm birth. 
In settings with a high prevalence of LBW and preterm births, these indicators generated from maternal reports may be more vulnerable to misclassification. In populations where an important proportion of births occur at home or where weight is not routinely measured, mothers perhaps place less importance on remembering size at birth. Further work is needed to explore whether these conclusions on the validity of maternal reports hold in similar rural and low-income settings.
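The individual- and population-level metrics used above can be computed directly from paired binary indicators. Here IF is taken as reported prevalence divided by reference prevalence (an assumption consistent with its use as a population-level bias measure); the toy arrays are invented:

```python
import numpy as np

def validity_metrics(reported, reference):
    """Sensitivity, specificity, and inflation factor for a binary
    indicator such as LBW, given maternal reports and a gold standard.
    IF is assumed here to be reported prevalence / reference prevalence."""
    reported = np.asarray(reported, dtype=bool)
    reference = np.asarray(reference, dtype=bool)
    tp = np.sum(reported & reference)          # true positives
    tn = np.sum(~reported & ~reference)        # true negatives
    sens = tp / reference.sum()
    spec = tn / (~reference).sum()
    inflation = reported.mean() / reference.mean()
    return sens, spec, inflation

reference = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)  # measured LBW
reported  = np.array([1, 1, 0, 0, 1, 0, 0, 0, 0, 0], dtype=bool)  # maternal report
sens, spec, inflation = validity_metrics(reported, reference)
```

An IF below 1, as found for all three indicators in the study, means the survey-based estimate understates the true prevalence even when individual errors partially cancel.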
Establishment of National Gravity Base Network of Iran
NASA Astrophysics Data System (ADS)
Hatam Chavari, Y.; Bayer, R.; Hinderer, J.; Ghazavi, K.; Sedighi, M.; Luck, B.; Djamour, Y.; Le Moign, N.; Saadat, R.; Cheraghi, H.
2009-04-01
A gravity base network is a set of benchmarks distributed uniformly across a country, with the absolute gravity values at the benchmarks known to the best accessible accuracy. Gravity at the benchmark stations is either measured directly with absolute instruments or transferred from known stations by gravity-difference measurements with relative gravimeters. To limit the accumulation of random measurement errors arising from these transfers, the number of base stations distributed across the country should be as small as possible. This is feasible if the stations are selected near national airports: long distances apart, yet quickly accessible and measurable with a gravimeter carried by airplane between stations. To illustrate the importance of such a network, various applications of a gravity base network are first reviewed. A gravity base network is the reference frame required for establishing 1st-, 2nd- and 3rd-order gravity networks. Such a gravity network is used for the following purposes: a. Mapping the structure of the upper crust in geological maps; the required accuracy of the measured gravity values is about 0.2 to 0.4 mGal. b. Oil and mineral exploration; the required accuracy is about 5 µGal. c. Geotechnical studies in mining areas, for detecting underground cavities, as well as archaeological studies; the required accuracy is about 5 µGal or better. d. Exploration of subsurface water resources and mapping of the crustal layers that absorb them; an accuracy at the same level as the previous applications is required here too. e. Studying the tectonics of the Earth's crust; repeated precise gravity measurements at the gravity network stations can help identify systematic height changes, requiring an accuracy of the order of 5 µGal or better. f. Studying volcanoes and their evolution. 
Repeated precise gravity measurements at the gravity network stations can provide valuable information on the gradual upward movement of lava. g. Producing precise mean gravity anomalies for precise geoid determination; replacing precise spirit levelling with GPS levelling based on a precise geoid model is one of the forthcoming applications of the precise geoid. A gravity base network of 28 stations was established over Iran. The stations were built mainly on bedrock. All stations were measured with an FG5 absolute gravimeter, for at least 12 hours at each station, to obtain an accuracy of a few microgals. Several stations were remeasured several times in recent years to estimate gravity changes.
Local indicators of geocoding accuracy (LIGA): theory and application
Jacquez, Geoffrey M; Rommel, Robert
2009-01-01
Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. 
Herein lies a paradox for spatial analysis: for a given level of positional error, increasing sample density to follow the underlying population distribution more accurately increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not affect the statistical results; in others it may invalidate them. We must therefore understand the relationship between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795
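The core idea of perturbability can be sketched with a toy example: jitter each geocoded location by a circular Gaussian positional error and measure how much a binary k-nearest-neighbour spatial-weight specification changes. This is a minimal illustration under assumed settings (k-NN weights, Gaussian error), not the paper's method, which evaluates a family of weight specifications and error distributions.

```python
import math
import random

def knn_neighbors(points, k):
    """Index set of the k nearest neighbours for each point."""
    out = []
    for i, p in enumerate(points):
        d = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        out.append({j for _, j in d[:k]})
    return out

def perturbability(points, k, sigma, trials=100, seed=0):
    """Average fraction of k-NN weight entries that change when every
    location is jittered by circular Gaussian positional error sigma."""
    rng = random.Random(seed)
    base = knn_neighbors(points, k)
    changed, total = 0, 0
    for _ in range(trials):
        jittered = [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
                    for x, y in points]
        for b, p in zip(base, knn_neighbors(jittered, k)):
            changed += len(b ^ p)   # symmetric difference of neighbour sets
            total += 2 * k
    return changed / total
```

With sigma = 0 the weights never change (perturbability 0); as sigma approaches the typical inter-point spacing, the neighbour sets churn and perturbability rises, mirroring the paper's finding that mean positional error, not the error shape, drives the effect.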
Accuracy assessment of NLCD 2006 land cover and impervious surface
Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.
2013-01-01
Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
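The overall and user's accuracies quoted above come from a standard error (confusion) matrix. A minimal sketch with a hypothetical 3-class matrix (rows = mapped class, columns = reference class; the NLCD assessment used 16 Level II classes and design-based estimators):

```python
def overall_accuracy(cm):
    """Proportion of correctly classified samples: diagonal sum over total."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def users_accuracy(cm, i):
    """User's accuracy for class i: correct samples of mapped class i
    divided by all samples mapped as class i (row total)."""
    return cm[i][i] / sum(cm[i])
```

User's accuracy is the map reader's perspective (how often a mapped class is right on the ground), which is why the abstract reports it per class alongside the single overall figure.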
Ferraro, Jeffrey P; Daumé, Hal; Duvall, Scott L; Chapman, Wendy W; Harkema, Henk; Haug, Peter J
2013-01-01
Natural language processing (NLP) tasks are commonly decomposed into subtasks, chained together to form processing pipelines. The residual error produced in these subtasks propagates, adversely affecting the end objectives. Limited availability of annotated clinical data remains a barrier to reaching state-of-the-art operating characteristics using statistically based NLP tools in the clinical domain. Here we explore the unique linguistic constructions of clinical texts and demonstrate the loss in operating characteristics when out-of-the-box part-of-speech (POS) tagging tools are applied to the clinical domain. We test a domain adaptation approach integrating a novel lexical-generation probability rule used in a transformation-based learner to boost POS performance on clinical narratives. Two target corpora from independent healthcare institutions were constructed from high frequency clinical narratives. Four leading POS taggers with their out-of-the-box models trained from general English and biomedical abstracts were evaluated against these clinical corpora. A high performing domain adaptation method, Easy Adapt, was compared to our newly proposed method ClinAdapt. The evaluated POS taggers drop in accuracy by 8.5-15% when tested on clinical narratives. The highest performing tagger reports an accuracy of 88.6%. Domain adaptation with Easy Adapt reports accuracies of 88.3-91.0% on clinical texts. ClinAdapt reports 93.2-93.9%. ClinAdapt successfully boosts POS tagging performance through domain adaptation requiring a modest amount of annotated clinical data. Improving the performance of critical NLP subtasks is expected to reduce pipeline error propagation leading to better overall results on complex processing tasks.
Liebenberg, Leandi; L'Abbé, Ericka N; Stull, Kyra E
2015-12-01
The cranium is widely recognized as the most important skeletal element to use when evaluating population differences and estimating ancestry. However, the cranium is not always intact or available for analysis, which emphasizes the need for postcranial alternatives. The purpose of this study was to quantify postcraniometric differences among South Africans that can be used to estimate ancestry. Thirty-nine standard measurements from 11 postcranial bones were collected from 360 modern black, white and coloured South Africans; the sex and ancestry distribution were equal. Group differences were explored with analysis of variance (ANOVA) and Tukey's honestly significant difference (HSD) test. Linear and flexible discriminant analysis (LDA and FDA, respectively) were conducted with bone models as well as numerous multivariate subsets to identify the model and method that yielded the highest correct classifications. Leave-one-out (LDA) and k-fold (k=10; FDA) cross-validation with equal priors were used for all models. ANOVA and Tukey's HSD results reveal statistically significant differences between at least two of the three groups for the majority of the variables, with varying degrees of group overlap. Bone models, which consisted of all measurements per bone, resulted in low accuracies that ranged from 46% to 63% (LDA) and 41% to 66% (FDA). In contrast, the multivariate subsets, which consisted of different variable combinations from all elements, achieved accuracies as high as 85% (LDA) and 87% (FDA). Thus, when using a multivariate approach, the postcranial skeleton can distinguish among three modern South African groups with high accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
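The leave-one-out cross-validation scheme used above can be sketched as follows. For brevity this uses a nearest-centroid classifier as a stand-in for LDA/FDA, and the measurements and labels are hypothetical; the point is the hold-one-out loop, in which each specimen is classified by a model fitted to all the others.

```python
import math

def nearest_centroid(train_X, train_y, x):
    """Predict the class whose feature centroid is closest to x
    (a simple stand-in for linear discriminant analysis)."""
    best, best_d = None, float('inf')
    for c in set(train_y):
        members = [p for p, lab in zip(train_X, train_y) if lab == c]
        centroid = [sum(col) / len(members) for col in zip(*members)]
        d = math.dist(x, centroid)
        if d < best_d:
            best, best_d = c, d
    return best

def loocv_accuracy(X, y):
    """Leave-one-out CV: hold out each specimen in turn, fit on the rest,
    and report the fraction classified correctly."""
    correct = 0
    for i in range(len(X)):
        train_X, train_y = X[:i] + X[i + 1:], y[:i] + y[i + 1:]
        if nearest_centroid(train_X, train_y, X[i]) == y[i]:
            correct += 1
    return correct / len(X)
```

The study's reported 46-87% correct classifications are exactly this kind of cross-validated figure, which guards against the optimism of testing a discriminant function on the same specimens used to fit it.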
A Perceptually Weighted Rank Correlation Indicator for Objective Image Quality Assessment
NASA Astrophysics Data System (ADS)
Wu, Qingbo; Li, Hongliang; Meng, Fanman; Ngan, King N.
2018-05-01
In the field of objective image quality assessment (IQA), Spearman's ρ and Kendall's τ are the two most popular rank correlation indicators; they assign uniform weight to all quality levels and assume every pair of images is sortable. They are successful for measuring the average accuracy of an IQA metric in ranking multiple processed images, but they ignore two important perceptual properties. First, the sorting accuracy (SA) of high-quality images is usually more important than that of poor-quality ones in many real-world applications, where only the top-ranked images are pushed to the users. Second, owing to subjective uncertainty in judgement, two perceptually similar images are often effectively unsortable, and their ranks should not contribute to the evaluation of an IQA metric. To compare different IQA algorithms more accurately, we explore a perceptually weighted rank correlation indicator in this paper, which rewards the capability of correctly ranking high-quality images and suppresses attention towards insensitive rank mistakes. More specifically, we focus on activating a 'valid' pairwise comparison of image quality only when the quality difference exceeds a given sensory threshold (ST). Meanwhile, each image pair is assigned a unique weight, determined by both the quality level and the rank deviation. By varying the perception threshold, we can illustrate the sorting accuracy with a more sophisticated SA-ST curve, rather than a single rank correlation coefficient. The proposed indicator offers a new insight for interpreting visual perception behaviors. Furthermore, the applicability of our indicator is validated in recommending robust IQA metrics for both degraded and enhanced image data.
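The thresholded, weighted pairwise comparison described above can be sketched as a Kendall-style agreement score. This is a rough illustration under assumptions of my own (the specific weight function and threshold are placeholders), not the paper's exact indicator.

```python
def weighted_rank_agreement(subjective, objective, threshold, weight):
    """Kendall-style agreement over 'valid' pairs only: a pair counts
    when the subjective quality difference exceeds the sensory
    threshold, and each valid pair contributes weight(qi, qj)."""
    agree = total = 0.0
    n = len(subjective)
    for i in range(n):
        for j in range(i + 1, n):
            ds = subjective[i] - subjective[j]
            if abs(ds) <= threshold:
                continue  # perceptually unsortable pair: skipped
            w = weight(subjective[i], subjective[j])
            total += w
            if ds * (objective[i] - objective[j]) > 0:
                agree += w  # metric ranks the pair in the right order
    return agree / total if total else float('nan')
```

Sweeping `threshold` traces out the SA-ST curve the abstract describes; a weight function that grows with quality (e.g. `max(qi, qj)`) rewards correct ranking among the high-quality images that users actually see.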
NASA Astrophysics Data System (ADS)
Law, Yan Nei; Lieng, Monica Keiko; Li, Jingmei; Khoo, David Aik-Aun
2014-03-01
Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.
Shinkins, Bethany; Yang, Yaling; Abel, Lucy; Fanshawe, Thomas R
2017-04-14
Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how did the results inform the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling was obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings. 
The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.
NASA Astrophysics Data System (ADS)
Ganguli, Anurag; Saha, Bhaskar; Raghavan, Ajay; Kiesel, Peter; Arakaki, Kyle; Schuh, Andreas; Schwartz, Julian; Hegyi, Alex; Sommer, Lars Wilko; Lochbaum, Alexander; Sahu, Saroj; Alamgir, Mohamed
2017-02-01
A key challenge hindering the mass adoption of Lithium-ion and other next-gen chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS) reliant on monitoring external parameters such as voltage and current to ensure safe battery operation with the required performance usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell state monitoring, which could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring, fiber-optic (FO) sensors. High-performance large-format pouch cells with embedded FO sensors were fabricated. This second part of the paper focuses on the internal signals obtained from these FO sensors. The details of the method to isolate intercalation strain and temperature signals are discussed. Data collected under various xEV operational conditions are presented. An algorithm employing dynamic time warping and Kalman filtering was used to estimate state-of-charge with high accuracy from these internal FO signals. Their utility for high-accuracy, predictive state-of-health estimation is also explored.
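The dynamic-time-warping building block mentioned above can be sketched in its textbook form: align a measured signal to a reference signal whose sample-to-SOC mapping is known, tolerating local time shifts. This is a minimal generic DTW sketch, not the authors' SOC algorithm, which couples DTW with Kalman filtering on fiber-optic strain and temperature signals.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D signals:
    minimum cumulative |a[i]-b[j]| cost over all monotone alignments."""
    inf = float('inf')
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Because DTW absorbs rate differences between charge/discharge cycles, a noisy measured strain trace can still be matched to its reference even when the xEV duty cycle stretches or compresses it in time.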
Online Knowledge-Based Model for Big Data Topic Extraction
Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan
2016-01-01
Lifelong machine learning (LML) models learn with experience, maintaining a knowledge-base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge-base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost by half. PMID:27195004
Design of testbed and emulation tools
NASA Technical Reports Server (NTRS)
Lundstrom, S. F.; Flynn, M. J.
1986-01-01
The research summarized was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software-based set of hierarchical tools was chosen to provide maximum flexibility, to ease moving to new computers as technology improves, and to take advantage of the inherent reliability and availability of commercially available computing systems.
Focus drive mechanism for the IUE scientific instrument
NASA Technical Reports Server (NTRS)
Devine, E. J.; Dennis, T. B., Jr.
1977-01-01
A compact, lightweight mechanism was developed for in-orbit adjustment of the position of the secondary mirror (focusing) of the International Ultraviolet Explorer telescope. This device is a linear drive with small and highly repeatable step increments. Extremely close tolerances are also held in tilt and decentering. The unique mechanization is described with attention to the design details that contribute to positional accuracy. Lubrication, materials, thermal considerations, sealing, detenting against launch loads, and other features peculiar to flight hardware are discussed. The methods employed for mounting the low-expansion quartz mirror with minimum distortion are also given. Results of qualification and acceptance testing are included.
In-Situ Silver Acetylide Silver Nitrate Explosive Deposition Measurements Using X-Ray Fluorescence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Covert, Timothy Todd
2014-09-01
The Light Initiated High Explosive facility utilized a spray-deposited coating of silver acetylide-silver nitrate explosive to impart a mechanical shock into targets of interest. A diagnostic was required to measure the explosive deposition in situ. An X-ray fluorescence spectrometer was deployed at the facility. A measurement methodology was developed to measure the explosive quantity with sufficient accuracy. Through the use of a tin reference material under the silver-based explosive, a field calibration relationship has been developed with a standard deviation of 3.2%. The effect of the inserted tin material on the experiment configuration has also been explored.
Quantifying the effect of 3D spatial resolution on the accuracy of microstructural distributions
NASA Astrophysics Data System (ADS)
Loughnane, Gregory; Groeber, Michael; Uchic, Michael; Riley, Matthew; Shah, Megna; Srinivasan, Raghavan; Grandhi, Ramana
The choice of spatial resolution for experimentally-collected 3D microstructural data is often governed by general rules of thumb. For example, serial section experiments often strive to collect at least ten sections through the average feature-of-interest. However, the desire to collect high resolution data in 3D is greatly tempered by the exponential growth in collection times and data storage requirements. This paper explores the use of systematic down-sampling of synthetically-generated grain microstructures to examine the effect of resolution on the calculated distributions of microstructural descriptors such as grain size, number of nearest neighbors, aspect ratio, and Ω3.
Paxman, Rosemary; Stinson, Jake; Dejardin, Anna; McKendry, Rachel A.; Hoogenboom, Bart W.
2012-01-01
Micromechanical resonators provide a small-volume and potentially high-throughput method for determining the rheological properties of fluids. Here we explore the accuracy of measuring the mass density and viscosity of ethanol-water and glycerol-water model solutions, using a simple and easily implemented model to deduce the hydrodynamic effects on resonating cantilevers of various length-to-width aspect ratios. We next show that these measurements can be extended to determine the alcohol percentage of both model solutions and commercial beverages such as beer, wine and liquor. This demonstrates how micromechanical resonators can be used for quality control of everyday drinks. PMID:22778654
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy
Tessler, Morgan P.
2016-01-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings. PMID:27221370
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Yeon Soo; Jeong, G. Y.; Sohn, D. -S.
U-Mo/Al dispersion fuel is currently under development in the DOE's Material Management and Minimization program to convert HEU-fueled research reactors to LEU-fueled reactors. In some demanding conditions in high-power, high-performance reactors, large pores form in the interaction layers between the U-Mo fuel particles and the Al matrix, posing a potential cause of fuel failure. In this study, the formation and growth of these pores were investigated. As a product, a model to predict pore growth and the increase in porosity was developed. Well-characterized in-pile data from reduced-size plates were used to fit the model parameters. A data set of full-sized plates, independent of and distinctly different from those used to fit the model parameters, was used to examine the accuracy of the model.
Determination of the Deacetylation Degree of Chitooligosaccharides
Fu, Chuhan; Wu, Sihui; Liu, Guihua; Guo, Jiao; Su, Zhengquan
2017-01-01
The methods for determination of chitosan content recommended in the Chinese Pharmacopoeia and the European Pharmacopoeia are not applicable for evaluation of the extent of deacetylation (deacetylation degree, DD) in chitooligosaccharides (COS). This study explores two different methods for assessing DD in COS of relatively high and low molecular weights: an acid-base titration with bromocresol green indicator and a first-order derivative UV spectrophotometric method. The accuracy of both methods as a function of molecular weight was also investigated and compared with results obtained using 1H NMR spectroscopy. Our study demonstrates two simple, fast, widely adaptable, highly precise, accurate, and inexpensive methods for the effective determination of DD in COS, with the potential for widespread commercial application in developing countries. PMID:29068401
Farré, M; Picó, Y; Barceló, D
2014-02-07
The analysis of pesticide residues using a latest-generation, high-resolution, high-mass-accuracy hybrid linear ion trap-Orbitrap mass spectrometer (LTQ-Orbitrap-MS) was explored. Pesticides were extracted from fruits, fish, bees and sediments by QuEChERS, and from water by solid-phase extraction with Oasis HLB cartridges. The ultra-high-pressure liquid chromatography (UHPLC)-LTQ-Orbitrap mass spectrometer acquired full-scan MS data for quantification, and data-dependent (dd) MS(2) and MS(3) product-ion spectra for identification and/or confirmation. The regression coefficients (r(2)) for the calibration curves (two orders of magnitude above the lowest calibration level) were ≥0.99. The LODs for the 54 validated compounds were ≤2 ng mL(-1) (analytical standards). The relative standard deviation (RSD), used to estimate precision, was always lower than 22%. Extraction recovery and matrix effects ranged from 58 to 120% and from -92 to 52%, respectively. Mass accuracy was always ≤4 ppm, corresponding to a maximum mass error of 1.6 millimass units (mmu). This procedure was then successfully applied to pesticide residues in a set of the above-mentioned food and environmental samples. In addition to target analytes, this method enables the simultaneous detection/identification of non-target pesticides, pharmaceuticals, drugs of abuse, mycotoxins, and their metabolites. Copyright © 2013 Elsevier B.V. All rights reserved.
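The relationship between the two mass-error figures quoted above (ppm and mmu) is a standard conversion, shown here with hypothetical m/z values:

```python
def mass_error_ppm(measured_mz, theoretical_mz):
    """Relative mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def mass_error_mmu(measured_mz, theoretical_mz):
    """Absolute mass error in millimass units (mDa)."""
    return (measured_mz - theoretical_mz) * 1000.0
```

At m/z 400, a 4 ppm relative error corresponds to 1.6 mmu absolute error, matching the paired figures in the abstract.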
Testing the accuracy of clustering redshifts with simulations
NASA Astrophysics Data System (ADS)
Scottez, V.; Benoit-Lévy, A.; Coupon, J.; Ilbert, O.; Mellier, Y.
2018-03-01
We explore the accuracy of clustering-based redshift inference within the MICE2 simulation. This method uses the spatial clustering of galaxies between a spectroscopic reference sample and a sample of unknown redshift. This study gives an estimate of the accuracy reachable by this method. First, we discuss the requirements on the number of objects in the two samples, confirming that this method does not require a representative spectroscopic sample for calibration. In the context of the next generation of cosmological surveys, we estimate that the density of the quasi-stellar objects in BOSS allows us to reach 0.2 per cent accuracy in the mean redshift. Secondly, we estimate individual redshifts for galaxies in the densest regions of colour space (˜30 per cent of the galaxies) without using the photometric-redshift procedure. The advantage of this procedure is threefold. It allows: (i) the use of cluster-zs for any field in astronomy, (ii) the possibility to combine photo-zs and cluster-zs to get an improved redshift estimation, (iii) the use of cluster-zs to define tomographic bins for weak lensing. Finally, we explore this last option and build five cluster-z selected tomographic bins from redshift 0.2 to 1. We find a bias on the mean redshift estimate of 0.002 per bin. We conclude that cluster-zs could be used as a primary redshift estimator by the next generation of cosmological surveys.
Masdrakis, Vasilios G; Legaki, Emilia-Maria; Vaidakis, Nikolaos; Ploumpidis, Dimitrios; Soldatos, Constantin R; Papageorgiou, Charalambos; Papadimitriou, George N; Oulis, Panagiotis
2015-07-01
Increased heartbeat perception accuracy (HBP-accuracy) may contribute to the pathogenesis of Panic Disorder (PD) without or with Agoraphobia (PDA). Extant research suggests that HBP-accuracy is a rather stable individual characteristic, moreover predictive of worse long-term outcome in PD/PDA patients. However, it remains unexplored whether HBP-accuracy adversely affects patients' short-term outcome after structured cognitive behaviour therapy (CBT) for PD/PDA. Our aim was to explore the potential association between HBP-accuracy and the short-term outcome of a structured brief CBT for the acute treatment of PDA. We assessed baseline HBP-accuracy using the "mental tracking" paradigm in 25 consecutive medication-free, CBT-naive PDA patients. Patients then underwent a structured, protocol-based, 8-session CBT by the same therapist. Outcome measures included the number of panic attacks during the past week, the Agoraphobic Cognitions Questionnaire (ACQ), and the Mobility Inventory-Alone subscale (MI-alone). No association emerged between baseline HBP-accuracy and posttreatment changes in the number of panic attacks. Moreover, higher baseline HBP-accuracy was associated with significantly larger reductions in the scores of the ACQ and the MI-alone scales. Our results suggest that in PDA patients undergoing structured brief CBT for the acute treatment of their symptoms, higher baseline HBP-accuracy is not associated with worse short-term outcome concerning panic attacks. Furthermore, higher baseline HBP-accuracy may be associated with enhanced therapeutic gains in agoraphobic cognitions and behaviours.
Saygin, Z M; Kliemann, D; Iglesias, J E; van der Kouwe, A J W; Boyd, E; Reuter, M; Stevens, A; Van Leemput, K; McKee, A; Frosch, M P; Fischl, B; Augustinack, J C
2017-07-15
The amygdala is composed of multiple nuclei with unique functions and connections in the limbic system and to the rest of the brain. However, standard in vivo neuroimaging tools to automatically delineate the amygdala into its multiple nuclei are still rare. By scanning postmortem specimens at high resolution (100-150 µm) at 7T field strength (n = 10), we were able to visualize and label nine amygdala nuclei (anterior amygdaloid, cortico-amygdaloid transition area, basal, lateral, accessory basal, central, cortical, medial, and paralaminar nuclei). We created an atlas from these labels using a recently developed atlas-building algorithm based on Bayesian inference. This atlas, which will be released as part of FreeSurfer, can be used to automatically segment the nine amygdala nuclei from a standard-resolution structural MR image. We applied this atlas to two publicly available datasets (ADNI and ABIDE) with standard-resolution T1 data, used individual volumetric data of the amygdala nuclei as the measure, and found that our atlas i) discriminates between Alzheimer's disease participants and age-matched control participants with 84% accuracy (AUC = 0.915), and ii) discriminates between individuals with autism and age-, sex- and IQ-matched neurotypically developed control participants with 59.5% accuracy (AUC = 0.59). For both datasets, the new ex vivo atlas significantly outperformed (all p < .05) estimations of the whole amygdala derived from the segmentation in FreeSurfer 5.1 (ADNI: 75%, ABIDE: 54% accuracy), as well as classification based on whole amygdala volume (using the sum of all amygdala nuclei volumes; ADNI: 81%, ABIDE: 55% accuracy). This new atlas and the segmentation tools that utilize it will provide neuroimaging researchers with the ability to explore the function and connectivity of the human amygdala nuclei with unprecedented detail in healthy adults as well as those with neurodevelopmental and neurodegenerative disorders.
Zhu, Hao; Rusyn, Ivan; Richard, Ann; Tropsha, Alexander
2008-01-01
Background To develop efficient approaches for rapid evaluation of chemical toxicity and human health risk of environmental compounds, the National Toxicology Program (NTP), in collaboration with the National Center for Chemical Genomics, has initiated a project on high-throughput screening (HTS) of environmental chemicals. The first HTS results for a set of 1,408 compounds tested for their effects on cell viability in six different cell lines have recently become available via PubChem. Objectives We explored these data in terms of their utility for predicting adverse health effects of environmental agents. Methods and results Initially, the classification k-nearest-neighbor (kNN) quantitative structure–activity relationship (QSAR) modeling method was applied to the HTS data only, for a curated data set of 384 compounds. The resulting models had prediction accuracies as high as 89%, 71%, and 74% for the training and test sets (275 compounds combined) and the external validation set (109 compounds), respectively. We then asked whether HTS results could be of value in predicting rodent carcinogenicity. We identified 383 compounds for which data were available from both the Berkeley Carcinogenic Potency Database and NTP–HTS studies. We found that compounds classified by HTS as “actives” in at least one cell line were likely to be rodent carcinogens (sensitivity 77%); however, HTS “inactives” were far less informative (specificity 46%). Using chemical descriptors only, kNN QSAR modeling resulted in 62.3% prediction accuracy for rodent carcinogenicity applied to this data set. Importantly, the prediction accuracy of the model was significantly improved (72.7%) when chemical descriptors were augmented by HTS data, which were regarded as biological descriptors. Conclusions Our studies suggest that combining NTP–HTS profiles with conventional chemical descriptors could considerably improve the predictive power of computational approaches in toxicology. PMID:18414635
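A minimal sketch of the kNN classification scheme described above, with hypothetical descriptor values standing in for the chemical descriptors and HTS cell-viability readouts (not the curated NTP data set):

```python
from math import dist
from collections import Counter

# Toy kNN classifier sketch (not the actual NTP-HTS model). Each compound is
# a vector of hypothetical chemical descriptors augmented with HTS readouts
# used as "biological descriptors", as the abstract describes.
train = [
    # ([chem1, chem2, hts1, hts2], carcinogenicity label)
    ([0.2, 1.1, 0.9, 0.8], 1),
    ([0.3, 0.9, 0.7, 0.9], 1),
    ([1.5, 0.2, 0.1, 0.2], 0),
    ([1.4, 0.3, 0.2, 0.1], 0),
]

def knn_predict(x, train, k=3):
    # Majority vote among the k nearest training compounds (Euclidean distance).
    neighbors = sorted(train, key=lambda item: dist(x, item[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

print(knn_predict([0.25, 1.0, 0.8, 0.85], train))  # nearest to the actives
```

In practice, the descriptor vectors are high-dimensional and the model is tuned by cross-validation, but the vote-among-neighbors logic is the same.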
"Application of Tunable Diode Laser Spectrometry to Isotopic Studies for Exobiology"
NASA Technical Reports Server (NTRS)
Sauke, Todd B.
1999-01-01
Computer-controlled, electrically activated valves for rapid gas handling have been incorporated into the Stable Isotope Laser Spectrometer (SILS), which now permits rapid filling and evacuation of the sample and reference gas cells. Experimental protocols have been developed to take advantage of the fast gas-handling capabilities of the instrument and to achieve the increased accuracy that results from reduced instrumental drift during rapid isotopic ratio measurements. Using these protocols, accuracies of 0.5 del (0.05%) have been achieved in measurements of 13C/12C in carbon dioxide. Using the small stable isotope laser spectrometer developed in a related PIDDP project of the Co-I, protocols for acquisition of rapid sequential calibration spectra were developed, which resulted in 0.5 del accuracy also being achieved in this less complex instrument. An initial version of software for automatic characterization of tunable diode lasers has been developed, and diodes have been characterized in order to establish their spectral output properties. A new state-of-the-art high-operating-temperature (200 K) mid-infrared diode laser was purchased (through NASA procurement) and characterized. A thermo-electrically cooled mid-infrared tunable diode laser system for use with high-temperature-operation lasers was developed. In addition to isotopic ratio measurements of carbon and oxygen, measurements of a third biologically important element (15N/14N in N2O gas) have been achieved to a preliminary accuracy of about 0.2%. Transfer of the basic SILS technology to the commercial sector is proceeding under an unfunded Space Act Agreement between NASA and SpiraMed, a medical diagnostic instrument company. Two patents have been issued. Foreign patents based on these two US patents have been applied for and are expected to be issued. A preliminary design was developed for a thermo-electrically cooled SILS instrument for application to planetary space flight exploration missions.
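The "del" (δ, per-mil) notation used for these isotope-ratio accuracies can be sketched as follows; the 13C/12C reference ratio below is the commonly quoted PDB standard value, included for illustration only:

```python
# Sketch of the delta ("del") notation used for isotope ratios: a 0.5 del
# accuracy corresponds to 0.05% of the ratio, as stated in the abstract.
R_STANDARD = 0.0112372          # commonly quoted 13C/12C of the PDB standard

def delta_permil(r_sample, r_standard=R_STANDARD):
    """Return the isotope ratio in per-mil (del) units."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample enriched by 0.05% relative to the standard reads as +0.5 del:
enriched = R_STANDARD * 1.0005
print(round(delta_permil(enriched), 3))  # prints 0.5
```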
Continuous decoding of human grasp kinematics using epidural and subdural signals
NASA Astrophysics Data System (ADS)
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-02-01
Objective. Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces. Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials (EFPs). Approach. We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as EFPs, with both standard- and high-resolution electrode arrays. Main results. In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either ECoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean ± SD grasp aperture variance accounted for was 0.54 ± 0.05 across all subjects, 0.75 ± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7-20 Hz and 70-115 Hz spectral bands contained the most information about grasp kinematics, with the 70-115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance. To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes.
Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface.
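The variance-accounted-for (VAF) figure quoted above can be illustrated with a minimal sketch: a one-feature least-squares decoder on made-up numbers (real decoders use many channels and spectral bands; `feature` and `aperture` here are hypothetical, not recordings):

```python
from statistics import mean

# Sketch of the variance-accounted-for (VAF) metric used to score decoding
# of grasp aperture. Toy data: one band-power feature, five time samples.
feature  = [0.1, 0.4, 0.8, 0.6, 0.2]   # e.g. 70-115 Hz band power
aperture = [1.0, 2.1, 3.9, 3.0, 1.4]   # measured grasp aperture

# One-feature least-squares decoder: aperture ≈ a*feature + b
fx, fy = mean(feature), mean(aperture)
a = sum((x - fx) * (y - fy) for x, y in zip(feature, aperture)) \
    / sum((x - fx) ** 2 for x in feature)
b = fy - a * fx
pred = [a * x + b for x in feature]

def var(v):
    m = mean(v)
    return sum((u - m) ** 2 for u in v) / len(v)

# VAF = 1 - var(residuals) / var(actual); 1.0 is a perfect reconstruction.
vaf = 1.0 - var([y - p for y, p in zip(aperture, pred)]) / var(aperture)
print(round(vaf, 3))
```

A VAF of 0.54, as reported across subjects, means the decoder reproduces a bit over half of the variance in the measured aperture trace.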
NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Hu, Hai-xiang; Zhang, Zhi-yu; Zhang, Feng; Zheng, Li-gong; Zhang, Xue-jun
2017-08-01
A feasible way to improve the manufacturing efficiency of large reaction-bonded silicon carbide optics is to increase the processing accuracy in the grinding stage before polishing, which requires high-accuracy metrology. A swing arm profilometer (SAP) has been used to measure large optics during the grinding stage. A method has been developed for improving the measurement accuracy of the SAP by using a capacitive probe and implementing calibrations. Compared with an interferometer test, the experimental results show an accuracy of 0.068 μm root-mean-square (RMS), and maps in 37 low-order Zernike terms show an accuracy of 0.048 μm RMS, demonstrating a strong capability to provide a major input to high-precision grinding.
Luu, Hung N; Dahlstrom, Kristina R; Mullen, Patricia Dolan; VonVille, Helena M; Scheurer, Michael E
2013-01-01
The effectiveness of screening programs for cervical cancer has benefited from the inclusion of Human papillomavirus (HPV) DNA assays; which assay to choose, however, is not clear from previous reviews. Our review addressed the test accuracy of Hybrid Capture II (HCII) and polymerase chain reaction (PCR) assays based on studies with stronger designs and more clinically relevant outcomes. We searched Ovid MEDLINE, PubMed, and the Cochrane Library for English-language studies comparing both tests, published 1985–2012, with cervical dysplasia defined by the Bethesda classification. Meta-analysis provided pooled sensitivity, specificity, and 95% confidence intervals (CIs); meta-regression identified sources of heterogeneity. From 29 reports, we found that the pooled sensitivity and specificity to detect high-grade squamous intraepithelial lesion (HSIL) were higher for HCII than for PCR (0.89 [CI: 0.89–0.90] and 0.85 [CI: 0.84–0.86] vs. 0.73 [CI: 0.73–0.74] and 0.62 [CI: 0.62–0.64]). Both assays had higher accuracy to detect cervical dysplasia in Europe than in Asia-Pacific or North America (diagnostic odds ratio (dOR) = 4.08 [CI: 1.39–11.91] and 4.56 [CI: 1.86–11.17] for HCII vs. 2.66 [CI: 1.16–6.53] and 3.78 [CI: 1.50–9.51] for PCR) and higher accuracy to detect HSIL than atypical squamous cells of undetermined significance (ASCUS)/low-grade squamous intraepithelial lesion (LSIL) (HCII dOR = 9.04 [CI: 4.12–19.86] and PCR dOR = 5.60 [CI: 2.87–10.94]). For HCII, using histology as a gold standard results in higher accuracy than using cytology (dOR = 2.87 [CI: 1.31–6.29]). Based on higher test accuracy, our results support the use of HCII in cervical cancer screening programs. The role of HPV type distribution should be explored to determine the worldwide comparability of HPV test accuracy. PMID:23930214
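The pooled sensitivities and specificities above can be related to a diagnostic odds ratio with a small sketch (the dOR values computed here are derived from the pooled point estimates for HSIL, not the meta-analytic dORs quoted in the abstract, which come from different comparisons):

```python
# Sketch of the diagnostic odds ratio (dOR):
#   dOR = [sens / (1 - sens)] / [(1 - spec) / spec]
# i.e. the odds of a positive test in disease over the odds in non-disease.
def diagnostic_odds_ratio(sens, spec):
    return (sens / (1 - sens)) / ((1 - spec) / spec)

# Pooled HSIL point estimates from the abstract: HCII 0.89/0.85, PCR 0.73/0.62
hcii = diagnostic_odds_ratio(0.89, 0.85)
pcr  = diagnostic_odds_ratio(0.73, 0.62)
print(round(hcii, 1), round(pcr, 1))  # HCII's much higher dOR favors HCII
```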
Navon letters affect face learning and face retrieval.
Lewis, Michael B; Mills, Claire; Hills, Peter J; Weston, Nicola
2009-01-01
Identifying the local letters of a Navon letter (a large letter made up of smaller, different letters) prior to recognition impairs recognition accuracy, while identifying the global letter of a Navon letter enhances recognition accuracy (Macrae & Lewis, 2002). This effect may result from a transfer-inappropriate processing shift (TIPS) (Schooler, 2002). The present experiment extends research on the underlying mechanism of this effect by exploring the Navon effect on face learning as well as face recognition. The results of the two experiments revealed that when the Navon task used at retrieval was the same as that used at encoding, performance accuracy was enhanced, whereas when the processing operations mismatched at retrieval and encoding, recognition accuracy was impaired. These results provide support for the TIPS explanation of the Navon effect.
Exploring the Solar System using stellar occultations
NASA Astrophysics Data System (ADS)
Sicardy, Bruno
2018-04-01
Stellar occultations by solar system objects allow kilometric accuracy, permit the detection of tenuous atmospheres (at the nbar level), and enable the discovery of rings. The main limitation has been the prediction accuracy, typically 40 mas, corresponding to about 1,000 km projected at the body. This led to large amounts of time dedicated to astrometry, tedious logistical issues and, more often than not, outright misses of the event. The Gaia catalog, with sub-mas accuracy, hugely improves the star positions, resulting in achievable accuracies of about 1 mas for the shadow track on Earth. This permits much more carefully planned campaigns, with success rates approaching 100%, weather permitting. Scientific perspectives are presented, e.g. central flashes caused by Pluto's atmosphere revealing hazes and winds near its surface, grazing occultations showing topographic features, and occultations by Chariklo's rings unveiling dynamical features such as the proper mode "breathing".
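A back-of-the-envelope sketch shows how the quoted 40 mas prediction error maps to roughly 1,000 km projected at the body (the 35 au distance is an assumed, Pluto-like value, not from the abstract):

```python
from math import radians

# Small-angle projection: a prediction error of `error_mas` milliarcseconds
# at a body `distance_au` astronomical units away spans this many kilometers.
MAS_TO_RAD = radians(1.0 / 3600.0 / 1000.0)   # one milliarcsecond in radians
AU_KM = 1.495978707e8                          # one au in kilometers

def shadow_error_km(error_mas, distance_au):
    return error_mas * MAS_TO_RAD * distance_au * AU_KM

# ~40 mas at an assumed 35 au: about 1,000 km, as stated in the abstract.
print(round(shadow_error_km(40, 35)))
# ~1 mas in the Gaia era: a few tens of km, comparable to the body itself.
print(round(shadow_error_km(1, 35)))
```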
Integration deficiencies associated with continuous limb movement sequences in Parkinson's disease.
Park, Jin-Hoon; Stelmach, George E
2009-11-01
The present study examined the extent to which Parkinson's disease (PD) influences the integration of continuous limb movement sequences. Eight patients with idiopathic PD and 8 age-matched normal subjects were instructed to perform repetitive sequential aiming movements to specified targets under three accuracy constraints: 1) low accuracy (W = 7 cm) - minimal accuracy constraint, 2) high accuracy (W = 0.64 cm) - maximum accuracy constraint, and 3) mixed accuracy constraint - one target of high accuracy and another target of low accuracy. The character of the sequential movements in the low accuracy condition was mostly cyclical, whereas in the high accuracy condition it was discrete, in both groups. When the accuracy constraint was mixed, the sequential movements were executed by assembling discrete and cyclical movements in both groups, suggesting that for PD patients the capability to combine discrete and cyclical movements to meet a task requirement appears to be intact. However, this functional linkage was not as pronounced as it was in normal subjects. Close examination of movements from the mixed accuracy condition revealed marked movement hesitations in the vicinity of the large target in PD patients, resulting in a bias toward discrete movement. These results suggest that PD patients may have deficits in the ongoing planning and organizing processes during movement execution when tasks require assembling various accuracy requirements into more complex movement sequences.
NASA Technical Reports Server (NTRS)
Allton, J. H.
2017-01-01
There is widespread agreement among planetary scientists that much of what we know about the workings of the solar system comes from accurate, high-precision measurements on returned samples. Precision is a function of the number of atoms the instrumentation is able to count. Accuracy depends on the calibration or standardization technique. For Genesis, the solar wind sample return mission, acquiring enough atoms to ensure precise solar wind measurements, and then accurately quantifying those measurements, were steps known to be non-trivial pre-flight. The difficulty of making precise and accurate measurements on returned samples, and why they cannot be made remotely, is not communicated well to the public. In part, this is because "high precision" is abstract and error bars are not very exciting topics. This paper explores ideas for collecting and compiling compelling metaphors and colorful examples as a resource for planetary science public speakers.
NASA Astrophysics Data System (ADS)
Sharif, Harlina Md; Hazumi, Hazman; Hafizuddin Meli, Rafiq
2018-01-01
3D imaging technologies have undergone a massive revolution in recent years. Despite this rapid development, documentation of 3D cultural assets in Malaysia is still very much reliant upon conventional techniques such as measured drawings and manual photogrammetry. There has been very little progress towards exploring new methods or advanced technologies to convert 3D cultural assets into 3D visual representations and visualization models that are easily accessible for information sharing. In recent years, however, the advent of computer vision (CV) algorithms has made it possible to reconstruct the 3D geometry of objects using image sequences from digital cameras, which are then processed by web services and freeware applications. This paper presents a completed stage of an exploratory study that investigates the potential of using CV automated image-based open-source software and web services to reconstruct and replicate cultural assets. By selecting an intricate wooden boat, the Petalaindera, this study attempts to evaluate the efficiency of CV systems and compare it with the application of 3D laser scanning, which is known for its accuracy, efficiency and high cost. The study also compares the visual accuracy of 3D models generated by the CV system with that of 3D models produced by 3D scanning and manual photogrammetry for an intricate subject such as the Petalaindera. The ultimate objective is to explore cost-effective methods that could provide fundamental guidelines on best-practice approaches for digital heritage in Malaysia.
Measuring Intermediate-Mass Black-Hole Binaries with Advanced Gravitational Wave Detectors.
Veitch, John; Pürrer, Michael; Mandel, Ilya
2015-10-02
We perform a systematic study to explore the accuracy with which the parameters of intermediate-mass black-hole binary systems can be measured from their gravitational wave (GW) signatures using second-generation GW detectors. We make use of the most recent reduced-order models containing inspiral, merger, and ringdown signals of aligned-spin effective-one-body waveforms to significantly speed up the calculations. We explore the phenomenology of the measurement accuracies for binaries with total masses between 50 M⊙ and 500 M⊙ and mass ratios between 0.1 and 1. We find that (i) at total masses below ∼200 M⊙, where the signal-to-noise ratio is dominated by the inspiral portion of the signal, the chirp mass parameter can be accurately measured; (ii) at higher masses, the information content is dominated by the ringdown, and total mass is measured more accurately; (iii) the mass of the lower-mass companion is poorly estimated, especially at high total mass and more extreme mass ratios; and (iv) spin cannot be accurately measured for our injection set with nonspinning components. Most importantly, we find that for binaries with nonspinning components at all values of the mass ratio in the considered range and at a network signal-to-noise ratio of 15, analyzed with spin-aligned templates, the presence of an intermediate-mass black hole with mass >100 M⊙ can be confirmed with 95% confidence in any binary that includes a component with a mass of 130 M⊙ or greater.
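The chirp mass mentioned in (i) is the combination of the component masses that the inspiral phase constrains best; in standard notation it is:

```latex
\mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1 + m_2)^{1/5}}
```

For an equal-mass binary this reduces to \(\mathcal{M} = 2^{-1/5}\, m\), so a 100 M⊙ + 100 M⊙ system has a chirp mass of roughly 87 M⊙.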
Cacho-Martínez, Pilar; García-Muñoz, Ángel; Ruiz-Cantero, María Teresa
2013-01-01
Purpose To analyze the diagnostic criteria used in the scientific literature published in the past 25 years for accommodative and nonstrabismic binocular dysfunctions, and to explore whether the epidemiological analysis of diagnostic validity has been used to propose which clinical criteria should be used for diagnostic purposes. Methods We carried out a systematic review of papers on accommodative and nonstrabismic binocular disorders published from 1986 to 2012, searching the MEDLINE, CINAHL, PsycINFO and FRANCIS databases. We included original articles on the diagnosis of these anomalies in any population. We identified 839 articles, and 12 studies were included. The quality of the included articles was assessed using the QUADAS-2 tool. Results The review shows a wide range of clinical signs and cut-off points between authors. Only three studies (regarding accommodative anomalies) assessed the diagnostic accuracy of clinical signs. Their results suggest using the accommodative amplitude and monocular accommodative facility for diagnosing accommodative insufficiency, and a high positive relative accommodation for accommodative excess. The remaining nine articles did not analyze diagnostic accuracy, assigning a diagnosis with whatever criteria the authors considered appropriate. We also found differences between studies in the way patients' symptomatology was considered. Three of the 12 studies analyzed performed a validation of a symptom survey used for convergence insufficiency. Conclusions The scientific literature reveals differences between authors in the diagnostic criteria for accommodative and nonstrabismic binocular dysfunctions. Diagnostic accuracy studies show that there is only limited evidence, and only for accommodative conditions. For binocular anomalies the only evidence concerns a validated questionnaire for convergence insufficiency, with no data on diagnostic accuracy. PMID:24646897
NASA Astrophysics Data System (ADS)
Underwood, Emma C.; Ustin, Susan L.; Ramirez, Carlos M.
2007-01-01
We explored the potential of detecting three target invasive species: iceplant (Carpobrotus edulis), jubata grass (Cortaderia jubata), and blue gum (Eucalyptus globulus) at Vandenberg Air Force Base, California. We compared the accuracy of mapping six communities (intact coastal scrub, iceplant-invaded coastal scrub, iceplant-invaded chaparral, jubata grass-invaded chaparral, blue gum-invaded chaparral, and intact chaparral) using four images with different combinations of spatial and spectral resolution: hyperspectral AVIRIS imagery (174 wavebands, 4 m spatial resolution), spatially degraded AVIRIS (174 bands, 30 m), spectrally degraded AVIRIS (6 bands, 4 m), and both spatially and spectrally degraded AVIRIS (6 bands, 30 m, i.e., simulated Landsat ETM data). The overall success rate for classifying the six classes was 75% (kappa 0.7) using full-resolution AVIRIS, 58% (kappa 0.5) for the spatially degraded AVIRIS, 42% (kappa 0.3) for the spectrally degraded AVIRIS, and 37% (kappa 0.3) for the spatially and spectrally degraded AVIRIS. A true Landsat ETM image was also classified, providing an accuracy of 50% (kappa 0.4) and confirming that the results from the simulated ETM data were representative. Mapping accuracies using different resolution images are evaluated in the context of community heterogeneity (species richness, diversity, and percent species cover). The findings illustrate that higher mapping accuracies are achieved with images possessing high spectral resolution, thus capturing information across the visible and reflected-infrared solar spectrum. Understanding the tradeoffs in spectral and spatial resolution can assist land managers in deciding on the most appropriate imagery with respect to target invasives and community characteristics.
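A short sketch of Cohen's kappa, the chance-corrected agreement statistic quoted alongside each overall accuracy above (the 2x2 confusion matrix is illustrative, not the study's data):

```python
# Cohen's kappa from a confusion matrix: observed agreement corrected for
# the agreement expected by chance from the row/column marginals.
def cohens_kappa(matrix):
    n = sum(sum(row) for row in matrix)
    p_obs = sum(matrix[i][i] for i in range(len(matrix))) / n
    p_exp = sum(
        sum(matrix[i]) * sum(row[i] for row in matrix)
        for i in range(len(matrix))
    ) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Toy map-vs-reference counts: 90% overall accuracy, kappa ≈ 0.69, showing
# how kappa discounts the agreement a no-skill classifier would get.
m = [[75, 5],
     [5, 15]]
print(round(cohens_kappa(m), 2))
```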
NASA Astrophysics Data System (ADS)
Pawłuszek, Kamila; Borkowski, Andrzej
2016-06-01
Since the availability of high-resolution Airborne Laser Scanning (ALS) data, substantial progress has been made in geomorphological research, especially in landslide analysis. First- and second-order derivatives of the Digital Terrain Model (DTM) have become a popular and powerful tool in landslide inventory mapping. Nevertheless, automatic landslide mapping based on sophisticated classifiers, including Support Vector Machines (SVM), Artificial Neural Networks or Random Forests, is often computationally time consuming. The objective of this research is to explore in depth the topographic information provided by ALS data and to overcome this computational limitation. For this reason, an extended set of topographic features and Principal Component Analysis (PCA) were used to reduce redundant information. The proposed novel approach was tested on a susceptible area affected by more than 50 landslides, located at Rożnów Lake in the Carpathian Mountains, Poland. The initial seven PCA components, carrying 90% of the total variability of the original topographic attributes, were used for SVM classification. Comparing the results with the landslide inventory map, the average user's accuracy (UA), producer's accuracy (PA), and overall accuracy (OA) were calculated for the two models. For the PCA-feature-reduced model, UA, PA, and OA were found to be 72%, 76%, and 72%, respectively; in the non-reduced original topographic model they were 74%, 77% and 74%, respectively. Using the initial seven PCA components instead of the twenty original topographic attributes does not significantly change identification accuracy but reduces computational time.
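The component-selection step described above (retaining the leading principal components that carry 90% of the total variance) can be sketched as follows; the eigenvalues are hypothetical, chosen so that seven components suffice, as in the study:

```python
# Sketch of PCA component selection by explained variance: keep the first k
# components whose cumulative eigenvalue share reaches the target fraction.
def components_for_variance(eigenvalues, target=0.90):
    total = sum(eigenvalues)
    cumulative = 0.0
    for k, ev in enumerate(sorted(eigenvalues, reverse=True), start=1):
        cumulative += ev
        if cumulative / total >= target:
            return k
    return len(eigenvalues)

# Twenty hypothetical eigenvalues of a topographic-attribute covariance
# matrix, strongly dominated by the leading few components.
eigs = [8.0, 4.0, 2.5, 1.5, 1.2, 0.8, 0.5] + [0.15] * 13
print(components_for_variance(eigs))  # prints 7
```

The reduced seven-dimensional feature space is then what feeds the SVM classifier.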
Evaluating Methods of Updating Training Data in Long-Term Genomewide Selection
Neyhart, Jeffrey L.; Tiede, Tyler; Lorenz, Aaron J.; Smith, Kevin P.
2017-01-01
Genomewide selection is hailed for its ability to facilitate greater genetic gains per unit time. Over breeding cycles, the requisite linkage disequilibrium (LD) between quantitative trait loci and markers is expected to change as a result of recombination, selection, and drift, leading to a decay in prediction accuracy. Previous research has identified the need to update the training population using data that may capture new LD generated over breeding cycles; however, optimal methods of updating have not been explored. In a barley (Hordeum vulgare L.) breeding simulation experiment, we examined prediction accuracy and response to selection when updating the training population each cycle with the best predicted lines, the worst predicted lines, both the best and worst predicted lines, random lines, criterion-selected lines, or no lines. In the short term, we found that updating with the best predicted lines or the best and worst predicted lines resulted in high prediction accuracy and genetic gain, but in the long term, all methods (besides not updating) performed similarly. We also examined the impact of including all data in the training population or only the most recent data. Though patterns among update methods were similar, using a smaller but more recent training population provided a slight advantage in prediction accuracy and genetic gain. In an actual breeding program, a breeder might desire to gather phenotypic data on lines predicted to be the best, perhaps to evaluate possible cultivars. Therefore, our results suggest that an optimal method of updating the training population is also very practical. PMID:28315831
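In genomewide-selection studies such as this one, "prediction accuracy" is conventionally the Pearson correlation between predicted and observed values; a minimal sketch with toy numbers (not barley data):

```python
from math import sqrt
from statistics import mean

# Prediction accuracy as the Pearson correlation between predicted breeding
# values and observed values; the vectors below are illustrative only.
def accuracy(predicted, observed):
    mp, mo = mean(predicted), mean(observed)
    cov = sum((p - mp) * (o - mo) for p, o in zip(predicted, observed))
    sp = sqrt(sum((p - mp) ** 2 for p in predicted))
    so = sqrt(sum((o - mo) ** 2 for o in observed))
    return cov / (sp * so)

pred = [1.2, 0.8, 1.9, 0.4, 1.1]
obs  = [1.0, 0.9, 2.1, 0.2, 1.3]
print(round(accuracy(pred, obs), 2))
```

The decay of this correlation over breeding cycles, as marker-QTL linkage disequilibrium erodes, is what training-population updating aims to counteract.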
Preserved strategic grain-size regulation in memory reporting in patients with schizophrenia.
Akdogan, Elçin; Izaute, Marie; Bacon, Elisabeth
2014-07-15
Cognitive and introspection disturbances are considered core features of schizophrenia. In real life, people are usually free to choose which aspects of an event they recall, how much detail to volunteer, and what degree of confidence to impart. Their decision will depend on various situational and personal goals. The authors explored whether schizophrenia patients are able to achieve a compromise between accuracy and informativeness when reporting semantic information. Twenty-five patients and 23 healthy matched control subjects answered general-knowledge questions requiring numerical answers (e.g., "How high is the Eiffel Tower?"), freely at first and then through metamemory-based control. In the second phase, they answered with respect to two predefined intervals, one narrow and one broad; attributed a confidence judgment to both answers; and afterward selected one of the two answers. Data were analyzed using analyses of variance with group as the between-subjects factor. Patients reported information at a self-paced level of precision less accurately than healthy participants. However, they benefited remarkably from the framing of the response and from the metamemory processes of monitoring and control, to the point of improving their memory reporting and matching healthy subjects' accuracy. In spite of their memory deficit during free reporting, after accuracy monitoring, patients strategically regulated the grain size of their memory reporting and proved able to manage the competing goals of accuracy and informativeness. These results give some cause for optimism as to the possibility for patients to adapt to everyday life situations.
Solianik, Rima; Satas, Andrius; Mickeviciene, Dalia; Cekanauskaite, Agne; Valanciene, Dovile; Majauskiene, Daiva; Skurvydas, Albertas
2018-06-01
This study aimed to explore the effect of a prolonged speed-accuracy motor task on indicators of psychological, cognitive, psychomotor and motor function. Ten young men aged 21.1 ± 1.0 years performed a fast and accurate reaching-movement task and a control task. Both tasks were performed for 2 h. Despite decreased motivation and increased perception of effort, as well as a subjective feeling of fatigue, speed-accuracy motor task performance improved throughout the period of task execution. After the motor task, increased working-memory function and prefrontal-cortex oxygenation at rest and during conflict detection were observed, along with decreased efficiency of incorrect-response inhibition and visuomotor tracking. The speed-accuracy motor task increased the amplitude of motor-evoked potentials, while grip strength was not affected. These findings demonstrate that, to sustain performance of a 2-h speed-accuracy task under conditions of self-reported fatigue, task-relevant functions are maintained or even improved, whereas less critical functions are impaired.
Advances in Spectral Electrical Impedance Tomography (EIT) for Near-Surface Geophysical Exploration
NASA Astrophysics Data System (ADS)
Huisman, J. A.; Zimmermann, E.; Kelter, M.; Zhao, Y.; Bukhary, T. H.; Vereecken, H.
2016-12-01
Recent advances in spectral Electrical Impedance Tomography (EIT) now make it possible to obtain the complex electrical conductivity distribution in near-surface environments with high accuracy for a broad range of frequencies (mHz - kHz). One of the key advances has been the development of correction methods to account for inductive coupling effects between wires used for current and potential measurements and capacitive coupling between cables and the subsurface environment. In this study, we first review these novel correction methods and then illustrate how the consideration of capacitive and inductive coupling improves spectral EIT results. For this, borehole EIT measurements were made in a shallow aquifer using a custom-made EIT system with two electrode chains, each consisting of eight active electrodes with a separation of 1 m. The EIT measurements were inverted with and without consideration of inductive and capacitive coupling effects. The inversion results showed that spatially and spectrally consistent imaging results can only be obtained when inductive coupling effects are considered (phase accuracy of 1-2 mrad at 1 kHz). Capacitive coupling effects were found to be of secondary importance for the set-up used here, but their importance will increase when longer cables are used. Although these results are promising, the active electrode chains can only be used with our custom-made EIT system. Therefore, we also explored to what extent EIT measurements with passive electrode chains amenable to commercially available EIT measurement systems can be corrected for coupling effects. It was found that EIT measurements with passive unshielded cables could not be corrected above 100 Hz because of the strong but inaccurately known capacitive coupling between the electrical wires. However, it was possible to correct EIT measurements with passive shielded cables, and the final accuracy of the phase measurements was estimated to be 2-4 mrad at 1 kHz.
Miller, Edward B.; Murrett, Colleen S.; Zhu, Kai; Zhao, Suwen; Goldfeld, Dahlia A.; Bylund, Joseph H.; Friesner, Richard A.
2013-01-01
Robust homology modeling to atomic-level accuracy requires in the general case successful prediction of protein loops containing small segments of secondary structure. Further, as loop prediction advances to success with larger loops, the exclusion of loops containing secondary structure becomes awkward. Here, we extend the applicability of the Protein Local Optimization Program (PLOP) to loops up to 17 residues in length that contain either helical or hairpin segments. In general, PLOP hierarchically samples conformational space and ranks candidate loops with a high-quality molecular mechanics force field. For loops identified to possess α-helical segments, we employ an alternative dihedral library composed of (ϕ,ψ) angles commonly found in helices. The alternative library is searched over a user-specified range of residues that define the helical bounds. The source of these helical bounds can be from popular secondary structure prediction software or from analysis of past loop predictions where a propensity to form a helix is observed. Due to the maturity of our energy model, the lowest energy loop across all experiments can be selected with an accuracy of sub-Ångström RMSD in 80% of cases, 1.0 to 1.5 Å RMSD in 14% of cases, and poorer than 1.5 Å RMSD in 6% of cases. The effectiveness of our current methods in predicting hairpin-containing loops is explored with hairpins up to 13 residues in length and again reaching an accuracy of sub-Ångström RMSD in 83% of cases, 1.0 to 1.5 Å RMSD in 10% of cases, and poorer than 1.5 Å RMSD in 7% of cases. Finally, we explore the effect of an imprecise surrounding environment, in which side chains, but not the backbone, are initially in perturbed geometries. In these cases, loops perturbed to 3Å RMSD from the native environment were restored to their native conformation with sub-Ångström RMSD. PMID:23814507
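The accuracy bins reported above (sub-Ångström, 1.0 to 1.5 Å, poorer than 1.5 Å RMSD) rest on superimposing a predicted loop onto the native one. As a minimal sketch, not PLOP's internal machinery, the standard Kabsch superposition gives that RMSD; function names here are illustrative.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate arrays (e.g. loop backbone atoms)
    after optimal rigid superposition of P onto Q."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                                # covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # optimal rotation
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P)))

def accuracy_bin(rmsd):
    """Bin a predicted loop the way the abstract reports accuracy."""
    if rmsd < 1.0:
        return "sub-Angstrom"
    return "1.0-1.5 A" if rmsd <= 1.5 else ">1.5 A"
```

A rotated and translated copy of a loop should superimpose back to essentially zero RMSD, which is a quick sanity check on the rotation solve.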
Agent-based Large-Scale Emergency Evacuation Using Real-Time Open Government Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Wei; Liu, Cheng; Bhaduri, Budhendra L
The open government initiatives have provided tremendous data resources for the transportation system and emergency services in urban areas. This paper proposes a traffic simulation framework using high temporal resolution demographic data and real-time open government data for evacuation planning and operation. A comparison study using real-world data in Seattle, Washington is conducted to evaluate the framework accuracy and evacuation efficiency. The successful simulations of the selected area prove the concept of taking advantage of open government data, open source data, and high-resolution demographic data in the emergency management domain. Two aspects of parameters are considered in this study: user equilibrium (UE) conditions of the traffic assignment model (simple Non-UE vs. iterative UE) and data temporal resolution (Daytime vs. Nighttime). Evacuation arrival rate, average travel time, and computation time are adopted as Measures of Effectiveness (MOE) for evacuation performance analysis. The temporal resolution of demographic data has significant impacts on urban transportation dynamics during evacuation scenarios. Better evacuation performance estimation can be approached by integrating both Non-UE and UE scenarios. The new framework shows flexibility in implementing different evacuation strategies and accuracy in evacuation performance. The use of this framework can be extended to day-to-day traffic assignment to support daily traffic operations.
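The Non-UE vs. iterative-UE distinction can be illustrated on a toy two-route network. The sketch below uses the Method of Successive Averages with BPR link costs; both are conventional choices assumed here, not details taken from the paper.

```python
def bpr_cost(t_free, capacity, flow, alpha=0.15, beta=4):
    """BPR travel-time function: congestion inflates free-flow time."""
    return t_free * (1.0 + alpha * (flow / capacity) ** beta)

def msa_equilibrium(t0, cap, demand, iters=500):
    """Iterative user-equilibrium assignment on two parallel routes via the
    Method of Successive Averages; returns per-route flows and costs."""
    flows = [demand, 0.0]                      # a simple Non-UE starting load
    for n in range(1, iters + 1):
        costs = [bpr_cost(t0[i], cap[i], flows[i]) for i in (0, 1)]
        # All-or-nothing load onto the currently cheaper route...
        aon = [demand, 0.0] if costs[0] <= costs[1] else [0.0, demand]
        # ...averaged in with step size 1/n (MSA).
        flows = [f + (a - f) / n for f, a in zip(flows, aon)]
    return flows, [bpr_cost(t0[i], cap[i], flows[i]) for i in (0, 1)]
```

At equilibrium the two used routes should have nearly equal costs (Wardrop's condition), which is what distinguishes the iterative UE result from the one-shot Non-UE load.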
Goher, K M; Almeshal, A M; Agouri, S A; Nasir, A N K; Tokhi, M O; Alenezi, M R; Al Zanki, T; Fadlallah, S O
2017-01-01
This paper presents the implementation of the hybrid spiral-dynamic bacteria-chemotaxis (HSDBC) approach to control two different configurations of a two-wheeled vehicle. The HSDBC is a combination of the bacterial chemotaxis used in the bacterial foraging algorithm (BFA) and the spiral-dynamic algorithm (SDA). BFA provides a good exploration strategy due to its chemotaxis approach; however, it suffers from oscillation near the end of the search process when using a large step size, while a small step size affords better exploitation and accuracy at the cost of slower convergence. SDA provides better stability when approaching an optimum point and has a faster convergence speed, but it may cause the search agents to become trapped in local optima, yielding low-accuracy solutions. HSDBC exploits the chemotactic strategy of BFA and the fitness accuracy and convergence speed of SDA so as to overcome the problems associated with either algorithm alone. The HSDBC thus developed is evaluated in optimizing the performance and energy consumption of two highly nonlinear platforms, namely single and double inverted-pendulum-like vehicles with an extended rod. Comparative results with BFA and SDA show that the proposed algorithm achieves better performance on these highly nonlinear systems.
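The spiral-dynamic half of the hybrid has a compact core: each search point is rotated about the current best while contracting toward it. The sketch below shows only that plain SDA update in 2-D, under assumed illustrative parameter values; the HSDBC-specific coupling to bacterial chemotaxis is omitted.

```python
import math

def spiral_step(x, best, r=0.95, theta=math.pi / 4):
    """One 2-D spiral-dynamic update: rotate a point about the current best
    by angle theta while contracting toward it by factor r."""
    c, s = math.cos(theta), math.sin(theta)
    dx, dy = x[0] - best[0], x[1] - best[1]
    return (best[0] + r * (c * dx - s * dy),
            best[1] + r * (s * dx + c * dy))

def sda_minimize(f, points, iters=500, r=0.95, theta=math.pi / 4):
    """Plain spiral-dynamic minimization of f over a list of 2-D points."""
    best = min(points, key=f)
    for _ in range(iters):
        points = [spiral_step(p, best, r, theta) for p in points]
        candidate = min(points, key=f)
        if f(candidate) < f(best):   # the spiral centre moves when improved
            best = candidate
    return best
```

On a smooth convex test function the spiral contracts onto the minimum; the stability-vs-premature-convergence trade-off discussed in the abstract is governed by the contraction factor r.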
NASA Astrophysics Data System (ADS)
Li, Kesai; Gao, Jie; Ju, Xiaodong; Zhu, Jun; Xiong, Yanchun; Liu, Shuai
2018-05-01
This paper proposes a new tool design for ultra-deep azimuthal electromagnetic (EM) resistivity logging while drilling (LWD) for deeper geosteering and formation evaluation, which can benefit hydrocarbon exploration and development. First, a forward numerical simulation of azimuthal EM resistivity LWD is created based on the fast Hankel transform (FHT) method, and its accuracy is confirmed under classic formation conditions. Then, a reasonable range of tool parameters is designed by analyzing the logging response. However, modern technological limitations pose challenges to selecting appropriate tool parameters for ultra-deep azimuthal detection under detectable signal conditions. Therefore, this paper uses grey relational analysis (GRA) to quantify the influence of tool parameters on voltage and azimuthal investigation depth. After analyzing thousands of simulation data points under different environmental conditions, a random forest is used to fit the data and identify an optimal combination of tool parameters, owing to its high efficiency and accuracy. Finally, the structure of the ultra-deep azimuthal EM resistivity LWD tool is designed with a theoretical azimuthal investigation depth of 27.42-29.89 m in classic isotropic and anisotropic formations. This design serves as a reliable theoretical foundation for efficient geosteering and formation evaluation in high-angle and horizontal (HA/HZ) wells in the future.
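Grey relational analysis itself is a short computation: each alternative series is scored by its pointwise closeness to a reference series. A minimal sketch, assuming the series are already normalized and using the conventional distinguishing coefficient rho = 0.5 (not necessarily the authors' choice):

```python
def grey_relational_grades(reference, alternatives, rho=0.5):
    """Grey relational grade of each (pre-normalized) alternative series
    against a reference series; a higher grade means stronger influence."""
    diffs = [[abs(r - a) for r, a in zip(reference, alt)] for alt in alternatives]
    d_min = min(min(row) for row in diffs)   # global min deviation
    d_max = max(max(row) for row in diffs)   # global max deviation
    grades = []
    for row in diffs:
        # Grey relational coefficient at each point, then averaged.
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

An alternative identical to the reference receives grade 1, and more dissimilar series receive lower grades, which is how GRA ranks tool parameters by influence.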
Advanced EUV mask and imaging modeling
NASA Astrophysics Data System (ADS)
Evanschitzky, Peter; Erdmann, Andreas
2017-10-01
The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steiner, Andrew W.; Lattimer, James M.; Brown, Edward F.
We investigate constraints on neutron star structure arising from the assumptions that neutron stars have crusts, that recent calculations of pure neutron matter limit the equation of state of neutron star matter near the nuclear saturation density, that the high-density equation of state is limited by causality and the largest high-accuracy neutron star mass measurement, and that general relativity is the correct theory of gravity. We explore the role of prior assumptions by considering two classes of equation of state models. In a first, the intermediate- and high-density behavior of the equation of state is parameterized by piecewise polytropes. In the second class, the high-density behavior of the equation of state is parameterized by piecewise continuous line segments. The smallest density at which high-density matter appears is varied in order to allow for strong phase transitions above the nuclear saturation density. We critically examine correlations among the pressure of matter, radii, maximum masses, the binding energy, the moment of inertia, and the tidal deformability, paying special attention to the sensitivity of these correlations to prior assumptions about the equation of state. It is possible to constrain the radii of 1.4 solar mass neutron stars to be larger than 10 km, even without consideration of additional astrophysical observations, for example, those from photospheric radius expansion bursts or quiescent low-mass X-ray binaries. We are able to improve the accuracy of known correlations between the moment of inertia and compactness as well as the binding energy and compactness. Furthermore, we also demonstrate the existence of a correlation between the neutron star binding energy and the moment of inertia.
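The piecewise-polytrope parameterization mentioned above has a simple closed form: within segment i the pressure is P = K_i * rho**Gamma_i, with the constants K_i chained so that P is continuous at each dividing density. A dimensionless sketch with illustrative values, not the paper's fitted parameters:

```python
def piecewise_polytrope(rho, dividing_rhos, K0, gammas):
    """Pressure P = K_i * rho**Gamma_i on the segment containing rho,
    with each K_i chosen so P is continuous at the dividing densities."""
    Ks = [K0]
    for i, rho_d in enumerate(dividing_rhos):
        # Continuity at rho_d: K_{i+1} * rho_d**G_{i+1} = K_i * rho_d**G_i
        Ks.append(Ks[i] * rho_d ** (gammas[i] - gammas[i + 1]))
    seg = sum(1 for rho_d in dividing_rhos if rho >= rho_d)
    return Ks[seg] * rho ** gammas[seg]
```

Moving a dividing density, as the abstract describes for strong phase transitions, changes where the stiffer segment takes over while the pressure itself stays continuous.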
NASA Astrophysics Data System (ADS)
Apai, Dániel; Kasper, Markus; Skemer, Andrew; Hanson, Jake R.; Lagrange, Anne-Marie; Biller, Beth A.; Bonnefoy, Mickaël; Buenzli, Esther; Vigan, Arthur
2016-03-01
Time-resolved photometry is an important new probe of the physics of condensate clouds in extrasolar planets and brown dwarfs. Extreme adaptive optics systems can directly image planets, but precise brightness measurements are challenging. We present VLT/SPHERE high-contrast, time-resolved broad H-band near-infrared photometry for four exoplanets in the HR 8799 system, sampling changes from night to night over five nights with relatively short integrations. The photospheres of these four planets are often modeled by patchy clouds and may show large-amplitude rotational brightness modulations. Our observations provide high-quality images of the system. We present a detailed performance analysis of different data analysis approaches to accurately measure the relative brightnesses of the four exoplanets. We explore the information in satellite spots and demonstrate their use as a proxy for image quality. While the brightness variations of the satellite spots are strongly correlated, we also identify a second-order anti-correlation pattern between the different spots. Our study finds that KLIP reduction based on principal components analysis with satellite-spot-modulated artificial-planet-injection-based photometry leads to a significant (˜3×) gain in photometric accuracy over standard aperture-based photometry and reaches 0.1 mag per point accuracy for our data set, the signal-to-noise ratio of which is limited by small field rotation. Relative planet-to-planet photometry can be compared between nights, enabling observations spanning multiple nights to probe variability. Recent high-quality relative H-band photometry of the b-c planet pair agrees to about 1%.
The Effects of High- and Low-Anxiety Training on the Anticipation Judgments of Elite Performers.
Alder, David; Ford, Paul R; Causer, Joe; Williams, A Mark
2016-02-01
We examined the effects of high- versus low-anxiety conditions during video-based training of anticipation judgments using international-level badminton players facing serves and the transfer to high-anxiety and field-based conditions. Players were assigned to a high-anxiety training (HA), low-anxiety training (LA) or control group (CON) in a pretraining-posttest design. In the pre- and posttest, players anticipated serves from video and on court under high- and low-anxiety conditions. In the video-based high-anxiety pretest, anticipation response accuracy was lower and final fixations shorter when compared with the low-anxiety pretest. In the low-anxiety posttest, HA and LA demonstrated greater accuracy of judgments and longer final fixations compared with pretest and CON. In the high-anxiety posttest, HA maintained accuracy when compared with the low-anxiety posttest, whereas LA had lower accuracy. In the on-court posttest, the training groups demonstrated greater accuracy of judgments compared with the pretest and CON.
NASA Astrophysics Data System (ADS)
Zhang, Chunwei; Zhao, Hong; Zhu, Qian; Zhou, Changquan; Qiao, Jiacheng; Zhang, Lu
2018-06-01
Phase-shifting fringe projection profilometry (PSFPP) is a three-dimensional (3D) measurement technique widely adopted in industrial measurement. It recovers the 3D profile of measured objects with the aid of the fringe phase. Phase accuracy is among the dominant factors that determine 3D measurement accuracy. Evaluating the phase accuracy helps refine adjustable measurement parameters, contributes to evaluating the 3D measurement accuracy, and facilitates improvement of the measurement accuracy. Although PSFPP has been extensively researched, an effective, easy-to-use phase accuracy evaluation method remains to be explored. In this paper, methods based on the uniform-phase coded image (UCI) are presented to accomplish phase accuracy evaluation for PSFPP. These methods work on the principle that the phase value of a UCI can be manually set to any value, and once the phase value of a UCI pixel is the same as that of a pixel of a corresponding sinusoidal fringe pattern, their phase accuracy values are approximate. The proposed methods provide feasible approaches to evaluating the phase accuracy for PSFPP. Furthermore, they can be used to experimentally study the properties of the random and gamma phase errors in PSFPP without the aid of a mathematical model to express random phase error or a large-step phase-shifting algorithm. In this paper, some novel and interesting phenomena are experimentally uncovered with the aid of the proposed methods.
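The fringe phase that PSFPP relies on is recovered per pixel from the N phase-shifted intensities by the standard least-squares N-step estimator. This is the textbook formula, not the paper's UCI-based evaluation method:

```python
import math

def n_step_phase(intensities):
    """Wrapped phase (in (-pi, pi]) from N intensities
    I_k = A + B * cos(phi + 2*pi*k/N), k = 0..N-1."""
    n = len(intensities)
    num = sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    den = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-num, den)
```

For noise-free sinusoidal fringes the estimator is exact; the random and gamma phase errors studied in the paper describe how real captured intensities perturb this value.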
SLS Block 1-B and Exploration Upper Stage Navigation System Design
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Park, Thomas B.; Smith, Austin; Anzalone, Evan; Bernard, Bill; Strickland, Dennis; Geohagan, Kevin; Green, Melissa; Leggett, Jarred
2018-01-01
The SLS Block 1B vehicle is planned to extend NASA's heavy lift capability beyond the initial SLS Block 1 vehicle. The most noticeable change for this vehicle from SLS Block 1 is the swapping of the upper stage from the Interim Cryogenic Propulsion Stage (ICPS), a modified Delta IV upper stage, to the more capable Exploration Upper Stage (EUS). As the vehicle evolves to provide greater lift capability and execute more demanding missions, so must the SLS Integrated Navigation System evolve to support those missions. The SLS Block 1 vehicle carries two independent navigation systems. The responsibility of the two systems is delineated between ascent and upper stage flight. The Block 1 navigation system is responsible for the phase of flight between the launch pad and insertion into Low-Earth Orbit (LEO). The upper stage system assumes the mission from LEO to payload separation. For the Block 1B vehicle, the two functions are combined into a single system intended to navigate from ground to payload insertion. Both are responsible for self-disposal once payload delivery is achieved. The evolution of the navigation hardware and algorithms from an inertial-only navigation system for Block 1 ascent flight to a tightly coupled GPS-aided inertial navigation system for Block 1B is described. The Block 1 GN&C system has been designed to meet a LEO insertion target with a specified accuracy. The Block 1B vehicle navigation system is designed to support the Block 1 LEO target accuracy as well as trans-lunar or trans-planetary injection accuracy. This is measured in terms of payload impact and stage disposal requirements. Additionally, the Block 1B vehicle is designed to support human exploration and thus is designed to minimize the probability of Loss of Crew (LOC) through high-quality inertial instruments and Fault Detection, Isolation, and Recovery (FDIR) logic.
The preliminary Block 1B integrated navigation system design is presented along with the challenges associated with meeting the design objectives. This paper also addresses the design considerations associated with the use of Block 1 and Commercial Off-the-Shelf (COTS) avionics for Block 1-B/EUS as part of an integrated vehicle suite for orbital operations.
The construction of high-accuracy schemes for acoustic equations
NASA Technical Reports Server (NTRS)
Tang, Lei; Baeder, James D.
1995-01-01
An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Thus, some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate a uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.
Gravity model improvement using GEOS 3 (GEM 9 and 10) and Seasat altimetry data
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Wagner, C. A.; Klosko, S. M.; Laubscher, R. E.
1979-01-01
Although errors in previous gravity models have produced large uncertainties in the orbital position of GEOS 3, significant improvement has been obtained with new geopotential solutions, Goddard Earth Model (GEM) 9 and 10. The GEM 9 and 10 solutions for the potential coefficients and station coordinates are presented along with a discussion of the new techniques employed. Also presented and discussed are solutions for three fundamental geodetic reference parameters, viz. the mean radius of the earth, the gravitational constant, and mean equatorial gravity. Evaluation of the gravity field is examined together with evaluation of GEM 9 and 10 for orbit determination accuracy. The major objectives of GEM 9 and 10 are achieved. GEOS 3 orbital accuracies from these models are about 1 m in their radial components for 5-day arc lengths. Both models yield significantly improved results over GEM solutions when compared to surface gravimetry, Skylab and GEOS 3 altimetry, and highly accurate BE-C (Beacon Explorer-C) laser ranges. The new values of the parameters discussed are given.
Li, Yuanyao; Huang, Jinsong; Jiang, Shui-Hua; Huang, Faming; Chang, Zhilu
2017-12-07
It is important to monitor the displacement time series and to explore the failure mechanism of reservoir landslides for early warning. Traditionally, it is a challenge to monitor landslide displacements in real time and automatically. The Global Positioning System (GPS) is considered the best real-time monitoring technology; however, the accuracies of landslide displacements monitored by GPS have not been assessed effectively. In this study, a web-based GPS system is developed to monitor landslide displacements in real time and automatically, and the discrete wavelet transform (DWT) is proposed to assess the accuracy of the GPS-monitored displacements. The Wangmiao landslide in the Three Gorges Reservoir area in China is used as a case study. The results show that the web-based GPS system has the advantages of high precision, real-time operation, remote control and automation for landslide monitoring; the Root Mean Square Errors of the monitored landslide displacements are less than 5 mm. Meanwhile, the results also show that a rapidly falling reservoir water level can trigger the reactivation of the Wangmiao landslide. Heavy rainfall is also an important factor, but not a crucial one.
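A wavelet-based accuracy assessment of the kind described can be sketched with a one-level Haar transform: for a slowly varying displacement series, the detail coefficients carry mostly measurement noise, so their RMS estimates the monitoring error. This is a simplified stand-in for the authors' DWT procedure, with synthetic numbers.

```python
import math
import random

def haar_dwt(signal):
    """One-level Haar DWT: approximation (trend) and detail (noise) parts."""
    approx = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2)
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
              for i in range(len(signal) // 2)]
    return approx, detail

def noise_rmse_estimate(signal):
    """RMS of the Haar detail coefficients, which approximates the
    white-noise sigma of a slowly varying series."""
    _, detail = haar_dwt(signal)
    return math.sqrt(sum(d * d for d in detail) / len(detail))
```

With a synthetic slow slide plus 5 mm Gaussian GPS noise, the estimate recovers a value close to 5 mm, the same order as the sub-5 mm RMSE reported above.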
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
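The pattern described above, an implicit step paired with a nonlinear solve, can be sketched for a scalar stiff ODE using the standard second-order trapezoidal rule with Newton's method; the paper's higher-order implicit Runge-Kutta variants follow the same structure with more stages.

```python
import math

def trapezoidal_newton(f, dfdy, y0, t0, t1, nsteps, tol=1e-12):
    """Integrate y' = f(t, y) with the trapezoidal rule, solving each
    implicit step g(z) = z - y - h/2*(f(t,y) + f(t+h,z)) = 0 by Newton."""
    h = (t1 - t0) / nsteps
    t, y = t0, y0
    for _ in range(nsteps):
        z = y + h * f(t, y)                    # explicit-Euler predictor
        for _ in range(50):
            g = z - y - 0.5 * h * (f(t, y) + f(t + h, z))
            dz = g / (1.0 - 0.5 * h * dfdy(t + h, z))   # scalar Newton step
            z -= dz
            if abs(dz) < tol:
                break
        t, y = t + h, z
    return y
```

On a stiff decay problem the A-stable implicit step stays bounded at step sizes that would make an explicit method blow up, which is why implicit integrators permit the larger time steps the abstract targets.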
Experimental study on an attitude coupling control method for flexible spacecraft
NASA Astrophysics Data System (ADS)
Wang, Jie; Li, Dongxu
2018-06-01
High pointing accuracy and stabilization are essential for spacecraft carrying out Earth observation, laser communication and space exploration missions. However, when a spacecraft undergoes a large-angle maneuver, the excited elastic oscillation of flexible appendages, for instance the solar wing and onboard antenna, degrades the performance of the spacecraft platform. This paper proposes a coupling control method, which synthesizes an adaptive sliding mode controller and a positive position feedback (PPF) controller, to control the attitude and suppress the elastic vibration simultaneously. Because of its prominent performance in attitude tracking and stabilization, the proposed method is capable of slewing the flexible spacecraft through a large angle. The method is also robust to parametric uncertainties of the spacecraft model. Numerical simulations are carried out with a hub-plate system which undergoes a single-axis attitude maneuver. An attitude control testbed for the flexible spacecraft is established and experiments are conducted to validate the coupling control method. Both numerical and experimental results demonstrate that the method can effectively decrease the stabilization time and improve the attitude accuracy of the flexible spacecraft.
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although this accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification.
Therefore, the accuracy of the presented method may be much higher in reality.
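The leave-one-out protocol behind the reported accuracies is generic and can be sketched independently of the three predictors the paper combines; the 1-nearest-neighbour predictor below is only a placeholder, not the paper's method.

```python
def leave_one_out_accuracy(samples, labels, predict):
    """LOOCV: train on all-but-one sample, predict the held-out one,
    and report the fraction predicted correctly."""
    correct = 0
    for i in range(len(samples)):
        train_x = samples[:i] + samples[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        if predict(train_x, train_y, samples[i]) == labels[i]:
            correct += 1
    return correct / len(samples)

def nn_predict(train_x, train_y, query):
    """Placeholder 1-nearest-neighbour predictor on scalar features."""
    j = min(range(len(train_x)), key=lambda i: abs(train_x[i] - query))
    return train_y[j]
```

Any train-then-predict function with the same signature can be dropped in, which is how a similarity-based, interaction-based and composition-based predictor can be scored on an equal footing.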
Drach-Zahavy, Anat; Broyer, Chaya; Dagan, Efrat
2017-09-01
Shared mental models are crucial for constructing mutual understanding of the patient's condition during a clinical handover. Yet scant research, if any, has empirically explored the mental models of the parties involved in a clinical handover. This study aimed to examine the similarities among the mental models of incoming and outgoing nurses, and to test their accuracy by comparing them with the mental models of expert nurses. A cross-sectional study explored nurses' mental models via the concept mapping technique across 40 clinical handovers. Data were collected via concept mapping of the incoming, outgoing, and expert nurses' mental models (a total of 120 concept maps). Similarity and accuracy indexes for concepts and associations were calculated to compare the different maps. About one fifth of the concepts emerged in both outgoing and incoming nurses' concept maps (concept similarity = 23% ± 10.6%). Concept accuracy indexes were 35% ± 18.8% for incoming and 62% ± 19.6% for outgoing nurses' maps. Although incoming nurses absorbed a smaller number of concepts and associations (23% and 12%, respectively), they partially closed the gap (35% and 22%, respectively) relative to the expert nurses' maps. The correlations between concept similarities and incoming as well as outgoing nurses' concept accuracy were significant (r=0.43, p<0.01; r=0.68, p<0.01, respectively). Finally, in 90% of the maps, outgoing nurses added information concerning the processes enacted during the shift, beyond the expert nurses' gold standard. Two seemingly contradictory processes in the handover were identified: "information loss", captured by the low similarity indexes among the mental models of incoming and outgoing nurses, and "information restoration", based on the accuracy indexes of the mental models of the incoming nurses. Based on mental model theory, we propose possible explanations for these processes and derive implications for how to improve a clinical handover.
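The abstract does not spell out its index formulas; a plausible set-overlap reading of the similarity and accuracy indexes can nonetheless be sketched. Both functions below are hypothetical illustrations, not the study's actual definitions.

```python
def similarity_index(map_a, map_b):
    """Percent of concepts shared by two maps, relative to their union
    (a Jaccard-style similarity, given as a percentage)."""
    a, b = set(map_a), set(map_b)
    return 100.0 * len(a & b) / len(a | b)

def accuracy_index(nurse_map, expert_map):
    """Percent of the expert's (gold standard) concepts that the
    nurse's map reproduces."""
    n, e = set(nurse_map), set(expert_map)
    return 100.0 * len(n & e) / len(e)
```

Under this reading, an incoming nurse can have low similarity to the outgoing nurse yet still recover a sizeable share of the expert map, which mirrors the "information loss" versus "information restoration" pattern described above.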
Exploring Mouse Protein Function via Multiple Approaches
Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when there are no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. 
Therefore, the accuracy of the presented method may be much higher in reality. PMID:27846315
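As an illustration (not taken from the paper), the sequential-combination idea described above can be sketched as a fallback chain: use the similarity-based prediction when a homologue exists, otherwise fall back to the composition-based predictor. All function and database names here are hypothetical.

```python
# Sketch of a sequential combination of predictors: prefer the
# similarity-based prediction when homologues exist, otherwise fall
# back to the composition-based predictor.  Names are illustrative.

def predict_function(protein, similarity_predictor, composition_predictor):
    """Return (prediction, method_used) for one protein."""
    hit = similarity_predictor(protein)      # None when no homologue is found
    if hit is not None:
        return hit, "similarity"
    return composition_predictor(protein), "composition"

# Toy predictors: the similarity predictor only covers known homologues.
homologue_db = {"P1": "kinase", "P2": "transporter"}
sim = lambda p: homologue_db.get(p)
comp = lambda p: "enzyme"                    # always returns something

print(predict_function("P1", sim, comp))     # ('kinase', 'similarity')
print(predict_function("P9", sim, comp))     # ('enzyme', 'composition')
```

The balance the abstract describes falls out of this structure: the accurate predictor handles the proteins it can, and the broader but less accurate predictor guarantees coverage.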
NASA Astrophysics Data System (ADS)
Lin, S.; Luo, D.; Yanlin, F.; Li, Y.
2016-12-01
Shallow seismic reflection (SSR) is a major geophysical method in urban active-fault exploration owing to its suitable exploration depth range and high resolution. In this paper, we carried out SSR and high-resolution refraction (HRR) tests in the Liangyun Basin to explore a buried fault. We used an NZ distributed 64-channel seismic instrument, 60 Hz high-sensitivity detectors, a Geode multi-channel portable acquisition system and a hammer source. We adopted single-side hammer shots with multiple stacking, 48-channel recording and 12-fold coverage. Because some SSR and HRR survey lines coincide, we chose a combined chase-and-encounter observation system. Based on satellite positioning, we arranged 11 survey lines in our study area with a total length of 8132 meters. GEOGIGA seismic reflection data processing software was used to process the SSR data. After repeated tests of single-shot record compilation, interference-wave suppression, static correction, velocity-parameter extraction and dynamic correction, we eventually obtained the shallow seismic reflection profile images. Meanwhile, we processed the HRR seismic data with refraction and tomographic imaging software from a Canadian technology company, which is based on nonlinear first-arrival travel-time tomography. Combined with drilling geological profiles, we interpreted the 11 measured seismic profiles. The results show 18 clear fault-feature breakpoints, including 4 southwest-dipping normal faults, 7 southwest-dipping reverse faults, 1 northeast-dipping normal fault and 6 northeast-dipping reverse faults. Breakpoint burial depth is 15-18 meters, and the inferred fault displacement is 3-12 meters. Comprehensive analysis shows that the fault is a reverse fault with a northeast-dipping section, together with a few branch normal faults dipping southwest. 
Given the good correspondence among the seismic interpretation results, drilling data and SEM results on the property, occurrence and rupture length of the fault, we consider the Liangyun fault to be an active fault with strong activity during the Neogene Pliocene and the early-to-middle Pleistocene. The combined application of SSR and HRR can provide more parameters for interpreting the seismic results and improve the accuracy of the interpretation.
Golf in the Wind: Exploring the Effect of Wind on the Accuracy of Golf Shots
NASA Astrophysics Data System (ADS)
Yaghoobian, Neda; Mittal, Rajat
2015-11-01
Golf play is highly dependent on the weather conditions, with wind being the most significant factor in the unpredictability of the ball landing position. The direction and strength of the wind alter the aerodynamic forces on a ball in flight, and consequently its speed, distance and direction of travel. The fact that local wind conditions on any particular hole change over time-scales ranging all the way from a few seconds to minutes, hours and days introduces an element of variability in the ball trajectory that is not well understood. Any such analysis is complicated by the effect of the local terrestrial and vegetation topology, as well as the inherent complexity of golf-ball aerodynamics. In the current study, we use computational modeling to examine the unpredictability of the shots under different wind conditions over Hole-12 at the Augusta National Golf Club, where the Masters Golf Tournament takes place every year. Despite this being the shortest hole on the course, the presence of a complex vegetation canopy around this hole introduces a spatial and temporal variability in wind conditions that evokes uncertainty and even fear among professional golfers. We use our model to examine the effect of wind direction and wind speed on the accuracy of the golf shots at this hole and use the simulations to determine the key aerodynamic factors that affect the accuracy of the shot.
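The basic mechanism, a steady wind changing the air-relative velocity and hence the drag on a ball in flight, can be shown with a much simpler model than the paper's: a point mass with quadratic drag, no lift (Magnus effect omitted), and illustrative parameter values.

```python
import numpy as np

# Minimal point-mass sketch of a ball in flight with quadratic drag and a
# steady horizontal wind.  Parameters are illustrative; lift is omitted.
def carry_distance(v0=70.0, angle_deg=12.0, wind_x=0.0,
                   m=0.0459, Cd=0.25, rho=1.2, A=np.pi * 0.02135**2, dt=1e-3):
    g = np.array([0.0, -9.81])
    vel = v0 * np.array([np.cos(np.radians(angle_deg)),
                         np.sin(np.radians(angle_deg))])
    pos = np.zeros(2)
    while pos[1] >= 0.0:                         # integrate until touchdown
        v_rel = vel - np.array([wind_x, 0.0])    # air-relative velocity
        drag = -0.5 * rho * Cd * A * np.linalg.norm(v_rel) * v_rel / m
        vel += (g + drag) * dt
        pos += vel * dt
    return pos[0]

# A tailwind (positive wind_x) carries the ball farther than a headwind.
print(carry_distance(wind_x=5.0) > carry_distance(wind_x=-5.0))   # True
```

Even this stripped-down model makes the sensitivity visible; the study's point is that the real canopy-driven wind field varies in space and time, so the wind term above is itself uncertain.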
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from randomly set initial positions toward their original positions, which correspond to the node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the forces exerted by the neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optima, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite increases in the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
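The spring dynamics the abstract describes amount to moving a blind node under forces proportional to the range errors to its neighbors. A toy sketch (one blind node, three anchor neighbors, ideal noise-free ranges, illustrative step size):

```python
import numpy as np

# Toy sketch of the spring-model idea: a blind node moves under virtual
# spring forces until its distances to anchor neighbors match the
# measured ranges.  Anchor layout and gains are illustrative.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([4.0, 3.0])
measured = np.linalg.norm(anchors - true_pos, axis=1)   # ideal ranges

pos = np.array([8.0, 8.0])            # randomly set initial position
for _ in range(5000):
    force = np.zeros(2)
    for a, d in zip(anchors, measured):
        diff = pos - a
        dist = np.linalg.norm(diff)
        # spring force proportional to the range error, along the link
        force += -(dist - d) * diff / dist
    pos += 0.01 * force               # small step ~ damped spring dynamics

print(np.allclose(pos, true_pos, atol=0.01))   # True
```

Because each node only sums forces over its own (bounded) neighbor set, the per-node cost stays constant as the network grows, which is the O(1) claim above.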
NASA Astrophysics Data System (ADS)
Fernandez-Borda, R.; Waluschka, E.; Pellicori, S.; Martins, J. V.; Ramos-Izquierdo, L.; Cieslak, J. D.; Thompson, P.
2009-08-01
The design and construction of wide FOV imaging polarimeters for use in atmospheric remote sensing requires significant attention to the prevention of artificial polarization induced by the optical elements. Surfaces, coatings, and angles of incidence throughout the system must be carefully designed in order to minimize these artifacts, because the remaining instrumental bias polarization is the main factor that drives the final polarimetric accuracy of the system. In this work, we present a detailed evaluation and analysis to explore the possibility of retrieving the initial polarization state of the light traveling through a generic system that has inherent instrumental polarization. Our case is a wide FOV lens and a splitter device. In particular, we chose as the splitter device a Philips-type prism, because it is able to divide the signal into 3 independent channels that can be simultaneously analyzed to retrieve the first three elements of the Stokes vector (in atmospheric applications the elliptical polarization can be neglected [1]). The Philips-type configuration is a versatile, compact and robust prism device that is typically used in three-color camera systems. It has been used in some commercial polarimetric cameras which do not claim high accuracy polarization measurements [2]. With this work, we address the accuracy of our polarization inversion and measurements made with the Philips-type beam divider.
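The three-channel retrieval reduces to linear algebra: each detector intensity is a linear combination of (I, Q, U), so a non-singular 3x3 instrument matrix from calibration can be inverted. The sketch below is not the paper's instrument model; it uses an illustrative matrix for three ideal linear analyzers at 0, 60 and 120 degrees.

```python
import numpy as np

# Hedged sketch of Stokes retrieval from three analyzed channels.  Each
# channel intensity is a row of the instrument matrix A times (I, Q, U);
# inverting A recovers the incoming Stokes vector.  A here is an
# illustrative matrix for ideal linear polarizers at 0, 60, 120 degrees.
def analyzer_row(theta_deg):
    t = np.radians(theta_deg)
    return 0.5 * np.array([1.0, np.cos(2 * t), np.sin(2 * t)])

A = np.array([analyzer_row(a) for a in (0.0, 60.0, 120.0)])

stokes_in = np.array([1.0, 0.3, -0.1])        # partially polarized input
intensities = A @ stokes_in                   # what the 3 channels record
stokes_out = np.linalg.solve(A, intensities)  # retrieval by inversion

print(np.allclose(stokes_out, stokes_in))     # True
```

In a real system A also encodes the bias polarization of the fore-optics and prism coatings, which is why characterizing that bias drives the final accuracy.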
Zhu, Lianzhang; Chen, Leiming; Zhao, Dehai
2017-01-01
Accurate emotion recognition from speech is important for applications like smart health care, smart entertainment, and other smart services. High accuracy emotion recognition from Chinese speech is challenging due to the complexities of the Chinese language. In this paper, we explore how to improve the accuracy of speech emotion recognition, including speech signal feature extraction and emotion classification methods. Five types of features are extracted from a speech sample: mel frequency cepstrum coefficient (MFCC), pitch, formant, short-term zero-crossing rate and short-term energy. By comparing statistical features with deep features extracted by a Deep Belief Network (DBN), we attempt to find the best features to identify the emotion status for speech. We propose a novel classification method that combines DBN and SVM (support vector machine) instead of using only one of them. In addition, a conjugate gradient method is applied to train DBN in order to speed up the training process. Gender-dependent experiments are conducted using an emotional speech database created by the Chinese Academy of Sciences. The results show that DBN features can reflect emotion status better than artificial features, and our new classification approach achieves an accuracy of 95.8%, which is higher than using either DBN or SVM separately. Results also show that DBN can work very well for small training databases if it is properly designed. PMID:28737705
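Two of the artificial features listed above, short-term energy and short-term zero-crossing rate, are simple enough to state exactly. A plain NumPy sketch on a toy frame (the paper's exact framing and normalization choices are not specified here):

```python
import numpy as np

# Short-term energy and zero-crossing rate for one speech frame,
# two of the classical features compared against DBN features above.
def short_term_energy(frame):
    return float(np.sum(frame.astype(float) ** 2))

def zero_crossing_rate(frame):
    signs = np.sign(frame)
    return float(np.mean(np.abs(np.diff(signs)) > 0))

frame = np.array([0.1, -0.2, 0.3, -0.4, 0.5])   # toy 5-sample frame
print(zero_crossing_rate(frame))                 # 1.0 (sign flips every sample)
print(short_term_energy(frame))                  # ~0.55
```

The study's finding is that features learned by a DBN from such raw material outperform these hand-crafted statistics for emotion classification.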
Hourihan, Kathleen L.; Benjamin, Aaron S.; Liu, Xiping
2012-01-01
The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness’s claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness. PMID:23162788
NASA Astrophysics Data System (ADS)
Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil
2015-01-01
Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to thermal hyperspectral data and eight pixel-based classifiers, i.e., constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter, are tested. The long-wave infrared (LWIR) has not yet been exploited for classification purposes. The LWIR data contain emissivity and temperature information about an object. A highest overall accuracy of 90.99% was obtained using the SAM algorithm for the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to the thermal data: pixels are grouped into meaningful objects based on properties such as geometry and length using a watershed algorithm, and a supervised classification algorithm, i.e., a support vector machine (SVM), is then applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing a high accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.
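The best-performing pixel-based classifier here, SAM, has a one-line core: assign each pixel spectrum to the reference spectrum with the smallest spectral angle. A minimal sketch with toy spectra:

```python
import numpy as np

# Minimal spectral angle mapper (SAM) decision rule: the spectral angle
# between a pixel spectrum and each reference spectrum is computed, and
# the pixel is assigned to the class with the smallest angle.
def spectral_angle(x, r):
    cos = np.dot(x, r) / (np.linalg.norm(x) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixel, references):
    angles = [spectral_angle(pixel, r) for r in references]
    return int(np.argmin(angles))

# Toy 3-band reference spectra for two classes.
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0])]
print(sam_classify(np.array([0.9, 0.1, 0.0]), refs))   # 0
print(sam_classify(np.array([0.1, 0.8, 0.9]), refs))   # 1
```

Because the angle ignores vector magnitude, SAM is insensitive to overall illumination or gain differences, which helps when combining thermal bands with a photograph.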
Rapid race perception despite individuation and accuracy goals.
Kubota, Jennifer T; Ito, Tiffany
2017-08-01
Perceivers rapidly process social category information and form stereotypic impressions of unfamiliar others. However, a goal to individuate a target or to accurately predict their behavior can result in individuated impressions. It is unknown how the combination of both accuracy and individuation goals affects perceptual category processing. To explore this, participants were given both the goal to individuate targets and accurately predict behavior. We then recorded event-related brain potentials while participants viewed photos of black and white males along with four pieces of individuating information in the form of descriptions of past behavior. Even with explicit individuation and accuracy task goals, participants rapidly differentiated targets by race within 200 ms. Importantly, this rapid categorical processing did not influence behavioral outcomes as participants made individuated predictions. These findings indicate that individuals engage in category processing even when provided with individuation and accuracy goals, but that this processing does not necessarily result in category-based judgments.
An Exploration of Software-Based GNSS Signal Processing at Multiple Frequencies
NASA Astrophysics Data System (ADS)
Pasqual Paul, Manuel; Elosegui, Pedro; Lind, Frank; Vazquez, Antonio; Pankratius, Victor
2017-01-01
The Global Navigation Satellite System (GNSS; i.e., GPS, GLONASS, Galileo, and other constellations) has recently grown into numerous areas that go far beyond the traditional scope in navigation. In the geosciences, for example, high-precision GPS has become a powerful tool for a myriad of geophysical applications such as in geodynamics, seismology, paleoclimate, cryosphere, and remote sensing of the atmosphere. Positioning with millimeter-level accuracy can be achieved through carrier-phase-based, multi-frequency signal processing, which mitigates various biases and error sources such as those arising from ionospheric effects. Today, however, most receivers with multi-frequency capabilities are highly specialized hardware receiving systems with proprietary and closed designs, limited interfaces, and significant acquisition costs. This work explores alternatives that are entirely software-based, using Software-Defined Radio (SDR) receivers as a way to digitize the entire spectrum of interest. It presents an overview of existing open-source frameworks and outlines the next steps towards converting GPS software receivers from single-frequency to dual-frequency, geodetic-quality systems. In the future, this development will lead to a more flexible multi-constellation GNSS processing architecture that can be easily reused in different contexts, as well as to further miniaturization of receivers.
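One concrete reason the dual-frequency upgrade matters, sketched under standard GNSS conventions rather than anything specific to this work: the first-order ionospheric delay scales as 1/f^2, so the classic ionosphere-free combination of L1/L2 pseudoranges cancels it. All numeric values below are synthetic.

```python
# Standard GNSS ionosphere-free combination: the first-order ionospheric
# error scales as 1/f^2, so combining pseudoranges on two frequencies
# cancels it exactly.  Range and delay values are synthetic.
F1, F2 = 1575.42e6, 1227.60e6          # GPS L1 and L2 frequencies (Hz)

def ionosphere_free(p1, p2, f1=F1, f2=F2):
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

true_range = 20_200_000.0              # metres, synthetic
iono_l1 = 5.0                          # first-order delay on L1 (m), synthetic
iono_l2 = iono_l1 * (F1 / F2) ** 2     # same TEC, scaled by 1/f^2
p1 = true_range + iono_l1
p2 = true_range + iono_l2

print(abs(ionosphere_free(p1, p2) - true_range) < 1e-3)   # True
```

A single-frequency software receiver cannot form this combination, which is the main accuracy gap the proposed dual-frequency SDR work aims to close.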
Static Time-of-Flight Secondary Ion Mass Spectrometry (SIMS)
[Figure: image illustrating the high mass resolution and mass accuracy provided by TOF-SIMS.] We used the high mass resolution and mass accuracy of TOF-SIMS to study surface cleanliness; an acidic wash resulted in contamination by Fe and other metals. Without high mass accuracy, the CaO signal could not be distinguished from Fe.
A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers
NASA Astrophysics Data System (ADS)
Yun-jiang, Rao; Shang-lian, Huang; Ping, Li; Yu-mei, Wen; Jun, Tang
1990-02-01
In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed, which includes a new-type three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a special single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm with a movable stroke of ±20 μm has been obtained. This new system has more advantages than those previously reported.
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.
2002-06-01
Sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography based on fiber optics and high-spatial and high-digital resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision depend on both the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs for the study and development of emerging technologies.
Zhou, Yangbo; Fox, Daniel S; Maguire, Pierce; O’Connell, Robert; Masters, Robert; Rodenburg, Cornelia; Wu, Hanchun; Dapor, Maurizio; Chen, Ying; Zhang, Hongzhou
2016-01-01
Two-dimensional (2D) materials usually have a layer-dependent work function, which requires fast and accurate detection for the evaluation of their device performance. A detection technique with high throughput and high spatial resolution has not yet been explored. Using a scanning electron microscope, we have developed and implemented a quantitative analytical technique which allows effective extraction of the work function of graphene. This technique uses the secondary electron contrast and has nanometre-resolved layer information. The measurement of few-layer graphene flakes shows the variation of work function between graphene layers with a precision of less than 10 meV. It is expected that this technique will prove extremely useful for researchers in a broad range of fields due to its revolutionary throughput and accuracy. PMID:26878907
Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.
Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang
2016-06-22
An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over five days of inertial navigation can be improved by about 8% with the proposed calibration method. The improvement is at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and higher-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
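The flavor of the calibration model, though not its 51-state form, can be shown with the simplest piece: correcting a sensor triad whose output is corrupted by a bias plus a matrix combining scale-factor (diagonal) and cross-coupling (off-diagonal) errors. All numbers below are synthetic.

```python
import numpy as np

# Illustrative fragment of an IMU calibration model: recover the true
# specific force from a triad measurement corrupted by bias, scale-factor
# and cross-coupling errors.  Values are synthetic, not from the paper.
bias = np.array([0.02, -0.01, 0.005])
M = np.array([[1.001, 0.002, -0.001],      # diagonal: scale factors
              [0.003, 0.999,  0.002],      # off-diagonal: cross-coupling
              [-0.002, 0.001, 1.002]])

def correct(raw):
    """Invert the error model: raw = M @ true + bias."""
    return np.linalg.solve(M, raw - bias)

true_f = np.array([0.0, 0.0, 9.81])        # stationary, sensing gravity
raw = M @ true_f + bias                    # simulated corrupted output
print(np.allclose(correct(raw), true_f))   # True
```

The systematic calibration in the paper estimates entries like `M` and `bias` (plus g-sensitivity and lever-arm terms) from turntable maneuvers via a Kalman filter and smoother, rather than assuming them known as here.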
Cued Speech Transliteration: Effects of Speaking Rate and Lag Time on Production Accuracy.
Krause, Jean C; Tessler, Morgan P
2016-10-01
Many deaf and hard-of-hearing children rely on interpreters to access classroom communication. Although the exact level of access provided by interpreters in these settings is unknown, it is likely to depend heavily on interpreter accuracy (portion of message correctly produced by the interpreter) and the factors that govern interpreter accuracy. In this study, the accuracy of 12 Cued Speech (CS) transliterators with varying degrees of experience was examined at three different speaking rates (slow, normal, fast). Accuracy was measured with a high-resolution, objective metric in order to facilitate quantitative analyses of the effect of each factor on accuracy. Results showed that speaking rate had a large negative effect on accuracy, caused primarily by an increase in omitted cues, whereas the effect of lag time on accuracy, also negative, was quite small and explained just 3% of the variance. Increased experience level was generally associated with increased accuracy; however, high levels of experience did not guarantee high levels of accuracy. Finally, the overall accuracy of the 12 transliterators, 54% on average across all three factors, was low enough to raise serious concerns about the quality of CS transliteration services that (at least some) children receive in educational settings. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Machine learning approaches to diagnosis and laterality effects in semantic dementia discourse.
Garrard, Peter; Rentoumi, Vassiliki; Gesierich, Benno; Miller, Bruce; Gorno-Tempini, Maria Luisa
2014-06-01
Advances in automatic text classification have been necessitated by the rapid increase in the availability of digital documents. Machine learning (ML) algorithms can 'learn' from data: for instance a ML system can be trained on a set of features derived from written texts belonging to known categories, and learn to distinguish between them. Such a trained system can then be used to classify unseen texts. In this paper, we explore the potential of the technique to classify transcribed speech samples along clinical dimensions, using vocabulary data alone. We report the accuracy with which two related ML algorithms [naive Bayes Gaussian (NBG) and naive Bayes multinomial (NBM)] categorized picture descriptions produced by: 32 semantic dementia (SD) patients versus 10 healthy, age-matched controls; and SD patients with left- (n = 21) versus right-predominant (n = 11) patterns of temporal lobe atrophy. We used information gain (IG) to identify the vocabulary features that were most informative to each of these two distinctions. In the SD versus control classification task, both algorithms achieved accuracies of greater than 90%. In the right- versus left-temporal lobe predominant classification, NBM achieved a high level of accuracy (88%), but this was achieved by both NBM and NBG when the features used in the training set were restricted to those with high values of IG. The most informative features for the patient versus control task were low frequency content words, generic terms and components of metanarrative statements. For the right versus left task the number of informative lexical features was too small to support any specific inferences. An enriched feature set, including values derived from Quantitative Production Analysis (QPA) may shed further light on this little understood distinction. Copyright © 2013 Elsevier Ltd. All rights reserved.
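The feature-ranking step used above, information gain, is compact enough to sketch: IG of a vocabulary feature is the class entropy minus the class entropy conditioned on the feature's value. The toy data below are illustrative, not from the study.

```python
import numpy as np

# Information gain (IG) for ranking vocabulary features, as used above:
# IG = H(class) - H(class | feature value).
def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(feature, labels):
    feature, labels = np.asarray(feature), np.asarray(labels)
    h = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        h -= mask.mean() * entropy(labels[mask])
    return h

# Toy data: the word perfectly separates the groups, so IG = H = 1 bit.
word_present = [1, 1, 1, 0, 0, 0]
group        = ["SD", "SD", "SD", "ctl", "ctl", "ctl"]
print(information_gain(word_present, group))   # 1.0
```

Restricting the training features to high-IG words is what lifted the naive Bayes Gaussian classifier to match the multinomial one on the left- versus right-predominant task.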
Benbassat, Jochanan; Baumal, Reuben
2010-08-01
To review the reported reliability (reproducibility, inter-examiner agreement) and validity (sensitivity, specificity and likelihood ratios) of respiratory physical examination (PE) signs, and suggest an approach to teaching these signs to medical students. Review of the literature. We searched Paper Chase between 1966 and June 2009 to identify and evaluate published studies on the diagnostic accuracy of respiratory PE signs. Most studies have reported low to fair reliability and sensitivity values. However, some studies have found high specificities for selected PE signs. None of the studies that we reviewed adhered to all of the STARD criteria for reporting diagnostic accuracy. Possible flaws in study designs may have led to underestimates of the observed diagnostic accuracy of respiratory PE signs. The reported poor reliabilities may have been due to differences in the PE skills of the participating examiners, while the sensitivities may have been confounded by variations in the severity of the diseases of the participating patients. IMPLICATIONS FOR PRACTICE AND MEDICAL EDUCATION: Pending the results of properly controlled studies, the reported poor reliability and sensitivity of most respiratory PE signs do not necessarily detract from their clinical utility. Therefore, we believe that a meticulously performed respiratory PE, which aims to explore a diagnostic hypothesis, as opposed to a PE that aims to detect a disease in an asymptomatic person, remains a cornerstone of clinical practice. We propose teaching the respiratory PE signs according to their importance, beginning with signs of life-threatening conditions and those that have been reported to have a high specificity, and ending with signs that are "nice to know," but are no longer employed because of the availability of more easily performed tests.
CRISPR-Cas9 therapeutics in cancer: promising strategies and present challenges.
Yi, Lang; Li, Jinming
2016-12-01
Cancer is characterized by multiple genetic and epigenetic alterations that drive malignant cell proliferation and confer chemoresistance. The ability to correct or ablate such mutations holds immense promise for combating cancer. Recently, because of its high efficiency and accuracy, the CRISPR-Cas9 genome editing technique has been widely used in cancer therapeutic explorations. Several studies used CRISPR-Cas9 to directly target cancer cell genomic DNA in cellular and animal cancer models which have shown therapeutic potential in expanding our anticancer protocols. Moreover, CRISPR-Cas9 can also be employed to fight oncogenic infections, explore anticancer drugs, and engineer immune cells and oncolytic viruses for cancer immunotherapeutic applications. Here, we summarize these preclinical CRISPR-Cas9-based therapeutic strategies against cancer, and discuss the challenges and improvements in translating therapeutic CRISPR-Cas9 into clinical use, which will facilitate better application of this technique in cancer research. Further, we propose potential directions of the CRISPR-Cas9 system in cancer therapy. Copyright © 2016 Elsevier B.V. All rights reserved.
The Research of Correlation of Water Surface Spectral and Sediment Parameters
NASA Astrophysics Data System (ADS)
Li, J.; Gong, G.; Fang, W.; Sun, W.
2018-04-01
In methods that survey underwater topography using remote sensing, the inversion results are closely related to the water-surface spectral reflectance R, which is affected by the water, the underwater sediment and other factors; especially in shallow nearshore coastal waters, different sediment types significantly affect the reflectance. Therefore, exploring the relation between sediment and water-surface spectral reflectance is of great significance for improving retrieval accuracy. In this study, to explore this relationship, we used intertidal sediment sand samples from the Sheyang estuary and, in the laboratory, measured and calculated the chroma indicators and the water-surface spectral reflectance. We found that the water-surface spectral reflectance has a high correlation with the chroma indicators; the results indicate that the color of the sediment has a very important impact on the water-surface spectrum, especially the red-green chroma a*. The research also determined the spectral bands sensitive to the red-green chroma a*, which were 636-617 nm, 716-747 nm and 770-792 nm.
Precision Parameter Estimation and Machine Learning
NASA Astrophysics Data System (ADS)
Wandelt, Benjamin D.
2008-12-01
I discuss the strategy of ``Acceleration by Parallel Precomputation and Learning'' (APPLe) that can vastly accelerate parameter estimation in high-dimensional parameter spaces with costly likelihood functions, using trivially parallel computing to speed up sequential exploration of parameter space. This strategy combines the power of distributed computing with machine learning and Markov-Chain Monte Carlo techniques to explore a likelihood function, posterior distribution or χ²-surface efficiently. This strategy is particularly successful in cases where computing the likelihood is costly and the number of parameters is moderate or large. We apply this technique to two central problems in cosmology: the solution of the cosmological parameter estimation problem with sufficient accuracy for the Planck data using PICo; and the detailed calculation of cosmological helium and hydrogen recombination with RICO. Since the APPLe approach is designed to be able to use massively parallel resources to speed up problems that are inherently serial, we can bring the power of distributed computing to bear on parameter estimation problems. We have demonstrated this with the Cosmology@Home project.
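The core trick, replacing a costly likelihood inside a sequential MCMC with a cheap learned surrogate built from parallel precomputations, can be sketched in one dimension. Everything here is a toy stand-in: a quadratic "expensive" log-likelihood, a grid table as the precomputation, and linear interpolation as the learned surrogate.

```python
import numpy as np

# Toy sketch of the APPLe idea: precompute an expensive log-likelihood on
# a grid (a trivially parallel step), build a cheap surrogate, then drive
# a Metropolis sampler with the surrogate instead of the costly function.
rng = np.random.default_rng(0)

def expensive_loglike(x):                  # stand-in for a costly code
    return -0.5 * (x - 1.0) ** 2           # Gaussian posterior, mean 1

grid = np.linspace(-4.0, 6.0, 201)         # "parallel" precomputation
table = expensive_loglike(grid)
surrogate = lambda x: np.interp(x, grid, table)

x, chain = 0.0, []
for _ in range(20000):                     # sequential Metropolis steps
    prop = x + rng.normal(0.0, 0.5)
    if np.log(rng.uniform()) < surrogate(prop) - surrogate(x):
        x = prop
    chain.append(x)

print(float(np.mean(chain[2000:])))        # close to the true mean of 1.0
```

The sequential chain never calls the expensive function; all costly evaluations happened in the embarrassingly parallel table-building step, which is what lets distributed resources accelerate an inherently serial sampler.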
Using robotics in kinematics classes: exploring braking and stopping distances
NASA Astrophysics Data System (ADS)
Brockington, Guilherme; Schivani, Milton; Barscevicius, Cesar; Raquel, Talita; Pietrocola, Maurício
2018-03-01
Research in the field of physics teaching has revealed high school students’ difficulties in establishing relations between kinematic equations and real movements. Moreover, there are well-known and significant challenges in their comprehension of graphic language content. Thus, this article explores a didactic activity which utilized robotics in order to investigate significant aspects of kinematics, gathering data and performing analyses and descriptions via graphs and mathematical equations which were indispensable for the analysis of the phenomena in question. Traffic safety appears as a main theme, with particular emphasis on the distinction between braking and stopping distances in harsh conditions, as observed in the robot vehicle’s tires and track. This active-learning investigation allows students to identify significant differences between the average value of the initial empirical braking position and that of the vehicle’s programmed braking position, enabling them to more deeply comprehend the relations between mathematical and graphic representations of this real phenomenon and the phenomenon itself, thereby providing a sense of accuracy to this study.
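The braking versus stopping distinction at the heart of the activity reduces to a standard kinematics formula: stopping distance is reaction (thinking) distance plus braking distance, where constant deceleration a = μg gives a braking distance of v²/(2μg). The friction coefficient and reaction time below are illustrative classroom values, not the robot's measured ones.

```python
# Stopping distance = reaction distance + braking distance, assuming
# constant deceleration a = mu * g.  Parameter values are illustrative.
MU_DRY, G = 0.7, 9.81                      # friction coefficient, gravity

def braking_distance(v, mu=MU_DRY):
    return v ** 2 / (2 * mu * G)

def stopping_distance(v, reaction_time=1.0, mu=MU_DRY):
    return v * reaction_time + braking_distance(v, mu)

v = 20.0                                   # m/s (72 km/h)
print(round(braking_distance(v), 1))       # 29.1
print(round(stopping_distance(v), 1))      # 49.1
```

Comparing the empirically measured braking onset with the programmed one, as the students do with the robot, is exactly a comparison of these two distances.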
Assessing the Two-Plasmon Decay Threat Through Simulations and Experiments on the NIKE Laser System
NASA Astrophysics Data System (ADS)
Phillips, Lee; Weaver, J. L.; Oh, J.; Schmitt, A. J.; Obenschain, S.
2010-11-01
NIKE is a KrF laser system at the Naval Research Laboratory used to explore hydrodynamic stability, equation of state, and other physics problems arising in IFE research. The comparatively short KrF wavelength is expected to raise the threshold of most parametric instabilities. We report on simulations performed using the FAST3D radiation hydrocode to design TPD experiments that have allowed us to explore the validity of simple threshold formulas and to help establish the accuracy of our simulations. We have also studied proposed high-gain shock-ignition designs and devised experiments that can approach the relevant scalelength-temperature regime, giving us a potential experimental method to study the LPI threat to these designs by direct observation. Through FAST3D studies of shock-ignited and conventional direct-drive designs with KrF (248 nm) and third-harmonic (351 nm) drivers, we examine the benefits of the shorter-wavelength KrF light in reducing the LPI threat.
"If It Feels Right, Do It": Intuitive Decision Making in a Sample of High-Level Sport Coaches.
Collins, Dave; Collins, Loel; Carson, Howie J
2016-01-01
Comprehensive understanding and application of decision making is important for the professional practice and status of sports coaches. Accordingly, building on a strong base of work exploring the use of professional judgment and decision making (PJDM) in sport, we report a preliminary investigation into the use of intuition by high-level coaches. Two contrasting groups of high-level coaches, from adventure sports (n = 10) and rugby union (n = 8), were interviewed about their experiences of using intuitive and deliberative decision-making styles, the sources of these skills, and the interaction between the two. Participants reported levels of usage similar to the high levels found in other professions. Interaction between the two styles was apparent to varying degrees, while experience was seen as an important precursor to greater intuitive practice and employment. An initially intuitive, then deliberate, decision-making sequence was a particular feature, offering participants an immediate check on the accuracy and validity of the decision. Integration of these data with the extant literature and implications for practice are discussed.
Impact analysis of the transponder time delay on radio-tracking observables
NASA Astrophysics Data System (ADS)
Bertone, Stefano; Le Poncin-Lafitte, Christophe; Rosenblatt, Pascal; Lainey, Valéry; Marty, Jean-Charles; Angonin, Marie-Christine
2018-01-01
Accurate tracking of probes is one of the key points of space exploration. Range and Doppler techniques are the most commonly used. In this paper we analyze the impact of the transponder delay, i.e., the processing time between reception and re-emission of a two-way tracking link at the satellite, on tracking observables and on spacecraft orbits. We show that this term, only partially accounted for in the standard formulation of computed space observables, can actually be relevant for future missions with high nominal tracking accuracies or for the re-processing of old missions. We present several applications of our formulation to Earth flybys, the NASA GRAIL mission, and the ESA BepiColombo mission.
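As a rough illustration of why an uncompensated transponder delay matters: in a two-way link the processing delay τ is split over the up- and down-legs, so it maps into an apparent one-way range bias of cτ/2. The delay value below is an assumption for illustration, not a figure from the paper:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_bias(tau):
    """Apparent one-way range bias (m) if a two-way transponder
    processing delay tau (s) is left uncorrected: delta_rho = c * tau / 2."""
    return C * tau / 2.0

# An assumed 1-microsecond on-board processing delay:
print(range_bias(1e-6))  # ~149.9 m of apparent range error
```

Even nanosecond-level delays thus translate to decimetre-level range biases, which is why the term matters for high-accuracy tracking.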
NASA Technical Reports Server (NTRS)
Mckeown, W. L.
1984-01-01
A simulation experiment exploring the use of an augmented pictorial display to approach and land a helicopter in zero-visibility conditions was conducted in a fixed-base simulator. A literature search was also conducted to identify related work. A display was developed, and pilot-in-the-loop evaluations were conducted. The pictorial display was a simulated, high-resolution radar image, augmented with various parameters to improve distance and motion cues. Approaches and landings were accomplished, but with higher workloads and less accuracy than necessary for a practical system. Recommendations are provided for display improvements and for a follow-on study in a moving-base simulator.
Characterization and Prediction of the SPI Background
NASA Technical Reports Server (NTRS)
Teegarden, B. J.; Jean, P.; Knodlseder, J.; Skinner, G. K.; Weidenspointer, G.
2003-01-01
The INTEGRAL Spectrometer, like most gamma-ray instruments, is background dominated. Signal-to-background ratios of a few percent are typical. The background is primarily due to interactions of cosmic rays in the instrument and spacecraft. It characteristically varies by +/- 5% on time scales of days. This variation is caused mainly by fluctuations in the interplanetary magnetic field that modulates the cosmic ray intensity. To achieve the maximum performance from SPI it is essential to have a high quality model of this background that can predict its value to a fraction of a percent. In this poster we characterize the background and its variability, explore various models, and evaluate the accuracy of their predictions.
Magnetorheological finishing: a perfect solution to nanofinishing requirements
NASA Astrophysics Data System (ADS)
Sidpara, Ajay
2014-09-01
Finishing of optics for different applications is the most important, as well as the most difficult, step in meeting the specifications of the optics. Conventional grinding and other polishing processes cannot reduce surface roughness beyond a certain limit, owing to the high forces acting on the workpiece, embedded abrasive particles, limited control over the process, etc. The magnetorheological finishing (MRF) process provides a new, efficient, and innovative way to finish optical materials, as well as many metals, to the desired level of accuracy. This paper provides an overview of the MRF process for different applications, its important process parameters, the requirements of the magnetorheological fluid with respect to the workpiece material, and some areas that need to be explored to extend the application of the MRF process.
Multiple independent identification decisions: a method of calibrating eyewitness identifications.
Pryke, Sean; Lindsay, R C L; Dysart, Jennifer E; Dupuis, Paul
2004-02-01
Two experiments (N = 147 and N = 90) explored the use of multiple independent lineups to identify a target seen live. In Experiment 1, simultaneous face, body, and sequential voice lineups were used. In Experiment 2, sequential face, body, voice, and clothing lineups were used. Both studies demonstrated that multiple identifications (by the same witness) from independent lineups of different features are highly diagnostic of suspect guilt (G. L. Wells & R. C. L. Lindsay, 1980). The number of suspect and foil selections from multiple independent lineups provides a powerful method of calibrating the accuracy of eyewitness identification. Implications for use of current methods are discussed.
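The diagnosticity measure cited (Wells & Lindsay, 1980) is the ratio of suspect-identification rates when the suspect is guilty versus innocent; with independent lineups the ratios multiply. A minimal sketch; all rates below are made-up illustrations, not data from these experiments:

```python
def diagnosticity(suspect_id_rate_guilty, suspect_id_rate_innocent):
    """Diagnosticity ratio: how many times more likely a suspect
    identification is when the suspect is guilty than when innocent."""
    return suspect_id_rate_guilty / suspect_id_rate_innocent

# Assumed (illustrative) identification rates for two independent lineups:
face = diagnosticity(0.60, 0.15)   # face lineup, ratio ~ 4
voice = diagnosticity(0.45, 0.15)  # voice lineup, ratio ~ 3
# If the lineups are truly independent, the combined evidence multiplies:
print(face * voice)                # ~ 12
```

This multiplication is what makes agreement across several independent feature lineups so much more diagnostic than any single identification.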
NASA Astrophysics Data System (ADS)
Ikhsanti, Mila Izzatul; Bouzida, Rana; Wijaya, Sastra Kusuma; Rohmadi; Muttakin, Imamul; Taruno, Warsito P.
2017-02-01
This research explores the feasibility of a capacitance-to-digital converter and an impedance converter as the measurement module in an electrical capacitance tomography (ECT) system. The ECT sensor used was a cylindrical sensor with 8 electrodes. An absolute capacitance measurement system based on the Sigma-Delta capacitance-to-digital converter AD7746 has been shown to produce high-resolution measurements, whereas capacitance measurement over a wide range of frequencies is possible using the impedance converter AD5933. The measurement accuracy of both the AD7746 and the AD5933 was evaluated against an LCR meter as reference. Biological matter, represented by water and oil, was treated as the object and reconstructed into an image using the linear back projection (LBP) algorithm.
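Linear back projection, the reconstruction algorithm named above, smears each normalized capacitance measurement back over the image pixels through a sensitivity map. A minimal pure-Python sketch; the sensitivity matrix and measurements are toy values, not data from the 8-electrode sensor:

```python
def lbp(S, lam):
    """Linear back projection: image g[p] = sum_m S[m][p]*lam[m] / sum_m S[m][p].
    S[m][p] is the sensitivity of electrode-pair measurement m to pixel p;
    lam[m] is the normalized capacitance (0 = low-permittivity calibration,
    1 = high-permittivity calibration)."""
    n_meas, n_pix = len(S), len(S[0])
    img = []
    for p in range(n_pix):
        num = sum(S[m][p] * lam[m] for m in range(n_meas))
        den = sum(S[m][p] for m in range(n_meas))
        img.append(num / den if den else 0.0)
    return img

# Toy example: 2 electrode-pair measurements, 3 pixels.
S = [[1.0, 0.5, 0.0],
     [0.0, 0.5, 1.0]]
lam = [1.0, 0.0]     # first pair sees the object fully, second not at all
print(lbp(S, lam))   # [1.0, 0.5, 0.0]
```

LBP is fast but blurry; it is typically used, as here, for qualitative imaging rather than quantitative permittivity recovery.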
Blower, Sally; Go, Myong-Hyun
2011-07-19
Mathematical models are useful tools for understanding and predicting epidemics. A recent innovative modeling study by Stehle and colleagues addressed the issue of how complex models need to be to ensure accuracy. The authors collected data on face-to-face contacts during a two-day conference. They then constructed a series of dynamic social contact networks, each of which was used to model an epidemic generated by a fast-spreading airborne pathogen. Intriguingly, Stehle and colleagues found that increasing model complexity did not always increase accuracy. Specifically, the most detailed contact network and a simplified version of this network generated very similar results. These results are extremely interesting and require further exploration to determine their generalizability.
Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.
1995-01-01
The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical-time subiterations are subject to time-step limitations in practice that are removed by pseudo-time subiterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.
Incorporating spatial context into statistical classification of multidimensional image data
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.
1981-01-01
Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. With these estimation methods, the contextual classifier is shown to produce substantial improvements in classification accuracy compared with a non-contextual, uniform-priors maximum-likelihood classifier. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context-function estimation.
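One hedged sketch of the idea (the paper's compound-decision model and context function are richer than this): weight the per-pixel spectral likelihood by a context function giving the relative frequency with which each class occurs next to a given neighbour class, then pick the class maximizing the product:

```python
def contextual_classify(likelihoods, context_freq, neighbor_label):
    """likelihoods: {class: p(x | class)} from the pixel's spectral data alone.
    context_freq: {neighbor_class: {class: relative frequency}} - a simple
    stand-in for the paper's context function. Returns the class maximizing
    spectral likelihood * context weight."""
    weights = context_freq[neighbor_label]
    return max(likelihoods, key=lambda c: likelihoods[c] * weights.get(c, 0.0))

# Toy values: spectral evidence alone slightly favours "soy"...
likelihoods = {"corn": 0.4, "soy": 0.6}
# ...but corn fields tend to neighbour corn fields:
context_freq = {"corn": {"corn": 0.8, "soy": 0.2}}
print(contextual_classify(likelihoods, context_freq, "corn"))  # corn
```

The example shows the mechanism by which context overrides weak spectral evidence, which is exactly where the accuracy gains over a uniform-priors classifier come from.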
Dahal, Govinda; Qayyum, Adnan; Ferreyra, Mariella; Kassim, Hussein; Pottie, Kevin
2014-10-01
This paper explores immigrant community leaders' perspectives on culturally appropriate diabetes education and care. We conducted exploratory workshops followed by focus groups with Punjabi, Nepali, Somali, and Latin American immigrant communities in Ottawa, Ontario. We used the constant comparative method of grounded theory to explore issues of trust and its impact on access and effectiveness of care. Detailed inquiry revealed the cross-cutting theme of trust at the "entry" level and in relation to the "accuracy" of diabetes information, as well as the influence of trust on personal "privacy" and on the "uptake" of recommendations. These four dimensions of trust stood out among immigrant community leaders: entry level, accuracy level, privacy level, and intervention level, and were considered important attributes of culturally appropriate diabetes education and care. These dimensions of trust may promote trust at the patient-practitioner level and may also help build trust in the health care system.
High Accuracy Attitude Control of a Spacecraft Using Feedback Linearization
1992-05-01
High Accuracy Attitude Control of a Spacecraft Using Feedback Linearization. A thesis presented by Louis Joseph Poehlman, Captain, USAF (B.S., U.S. Air...), submitted to the Department of... Front-matter contents include: Attitude Determination and Control System Architecture; Exact Linearization Using Nonlinear Feedback.
NASA Astrophysics Data System (ADS)
Sierk, B.; Caron, J.; Bézy, J.-L.; Löscher, A.; Meijer, Y.; Jurado, P.
2017-11-01
CarbonSat is a candidate mission for ESA's Earth Explorer program, currently undergoing industrial feasibility studies. The primary mission objective is the identification and quantification of regional and local sources and sinks of carbon dioxide (CO2) and methane (CH4). The mission also aims at discriminating natural and anthropogenic fluxes. The space-borne instrument will quantify the spatial distribution of CO2 and CH4 by measuring dry-air column-averaged mixing ratios with high precision and accuracy (0.5 ppm for CO2 and 5 ppb for CH4). These products are inferred from spectrally resolved measurements of Earth reflectance in three spectral bands in the Near Infrared (747-773 nm) and Short Wave Infrared (1590-1675 nm and 1925-2095 nm), at high and medium spectral resolution (0.1 nm, 0.3 nm, and 0.55 nm). Three spatially co-aligned push-broom imaging spectrometers with a swath width <180 km will acquire observations at a spatial resolution of 2 × 3 km², reaching global coverage every 12 days above 40 degrees latitude (30 days at the equator). The targeted product accuracy translates into stringent radiometric, spectral and geometric requirements for the instrument. Because of the high sensitivity of the product retrieval to spurious spectral features of the instrument, special emphasis is placed on constraining relative spectral radiometric errors from polarisation sensitivity, diffuser speckles and stray light. A new requirement formulation aims to constrain simultaneously both the amplitude and the correlation of spectral features with the absorption structures of the targeted gases. The requirement performance analysis of the so-called effective spectral radiometric accuracy (ESRA) establishes a traceable link between instrumental artifacts and their impact on the level-2 products (column-averaged mixing ratios).
This paper presents the derivation of system requirements from the demanding mission objectives and reports preliminary results of the feasibility studies.
Adjusted Clinical Groups: Predictive Accuracy for Medicaid Enrollees in Three States
Adams, E. Kathleen; Bronstein, Janet M.; Raskind-Hood, Cheryl
2002-01-01
Actuarial split-sample methods were used to assess predictive accuracy of adjusted clinical groups (ACGs) for Medicaid enrollees in Georgia, Mississippi (lagging in managed care penetration), and California. Accuracy for two non-random groups—high-cost and located in urban poor areas—was assessed. Measures for random groups were derived with and without short-term enrollees to assess the effect of turnover on predictive accuracy. ACGs improved predictive accuracy for high-cost conditions in all States, but did so only for those in Georgia's poorest urban areas. Higher and more unpredictable expenses of short-term enrollees moderated the predictive power of ACGs. This limitation was significant in Mississippi due in part, to that State's very high proportion of short-term enrollees. PMID:12545598
Gigahertz single-electron pumping in silicon with an accuracy better than 9.2 parts in 10⁷
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamahata, Gento, E-mail: yamahata.gento@lab.ntt.co.jp; Karasawa, Takeshi; Fujiwara, Akira
2016-07-04
High-speed and high-accuracy pumping of a single electron is crucial for realizing an accurate current source, which is a promising candidate for a quantum current standard. Here, using a high-accuracy measurement system traceable to primary standards, we evaluate the accuracy of a Si tunable-barrier single-electron pump driven by a single sinusoidal signal. The pump operates at frequencies up to 6.5 GHz, producing a current of more than 1 nA. At 1 GHz, the current plateau with a level of about 160 pA is found to be accurate to better than 0.92 ppm (parts per million), which is a record value for 1-GHz operation. At 2 GHz, a current plateau offset from 1ef (∼320 pA) by 20 ppm is observed. The current quantization accuracy is improved by applying a magnetic field of 14 T, and we observe a current level of 1ef with an accuracy of a few ppm. The presented gigahertz single-electron pumping with high accuracy is an important step towards a metrological current standard.
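The plateau currents quoted follow directly from the quantization relation I = nef (n electrons pumped per cycle at drive frequency f):

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact SI value)

def pump_current(f, n=1):
    """Quantized current I = n * e * f from pumping n electrons per cycle
    of a drive signal at frequency f (Hz)."""
    return n * E_CHARGE * f

print(pump_current(1e9))    # ~1.602e-10 A, i.e. about 160 pA at 1 GHz
print(pump_current(6.5e9))  # ~1.04e-9 A, i.e. more than 1 nA at 6.5 GHz
```

The metrological claim in the abstract is that the measured plateau sits within 0.92 ppm of this computed 1ef level at 1 GHz.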
Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis
Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.
2015-01-01
Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon-based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of the data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon-based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between media. The high-viscosity media tested show very low accuracy and precision, and most other compounds show either the same pattern, low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid-viscosity President Jet Regular Body and low-viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505
High-field fMRI unveils orientation columns in humans.
Yacoub, Essa; Harel, Noam; Ugurbil, Kâmil
2008-07-29
Functional (f)MRI has revolutionized the field of human brain research. fMRI can noninvasively map the spatial architecture of brain function via localized increases in blood flow after sensory or cognitive stimulation. Recent advances in fMRI have led to enhanced sensitivity and spatial accuracy of the measured signals, indicating the possibility of detecting small neuronal ensembles that constitute fundamental computational units in the brain, such as cortical columns. Orientation columns in visual cortex are perhaps the best known example of such a functional organization in the brain. They cannot be discerned via anatomical characteristics, as ocular dominance columns can. Instead, the elucidation of their organization requires functional imaging methods. However, because of the insufficient sensitivity, spatial accuracy, and image resolution of the available mapping techniques, they have not previously been detected in humans. Here, we demonstrate, using high-field (7-T) fMRI, the existence and spatial features of orientation-selective columns in humans. Striking similarities were found with the known spatial features of these columns in monkeys. In addition, we found that a larger number of orientation columns are devoted to processing orientations around 90 degrees (vertical stimuli with horizontal motion), whereas relatively similar fMRI signal changes were observed across any given active column. With the current proliferation of high-field MRI systems and constant evolution of fMRI techniques, this study heralds the exciting prospect of exploring unmapped and/or unknown columnar-level functional organizations in the human brain.
Alam, Sarfaraz; Khan, Feroz
2014-01-01
Due to the high cancer mortality rate in India, the identification of novel molecules is important in the development of novel and potent anticancer drugs. Xanthones are natural constituents of plants in the families Bonnetiaceae and Clusiaceae, and comprise oxygenated heterocycles with a variety of biological activities, including an anticancer effect. To explore anticancer compounds among xanthone derivatives, a quantitative structure-activity relationship (QSAR) model was developed by the multiple linear regression method. The QSAR model yielded a high activity-descriptor relationship accuracy of 84%, indicated by the regression coefficient (r² = 0.84), and a high activity prediction accuracy of 82%. Five molecular descriptors – dielectric energy, group count (hydroxyl), LogP (the logarithm of the partition coefficient between n-octanol and water), shape index basic (order 3), and the solvent-accessible surface area – were significantly correlated with anticancer activity. Using this QSAR model, a set of virtually designed xanthone derivatives was screened. A molecular docking study was also carried out to predict the molecular interactions between the proposed compounds and deoxyribonucleic acid (DNA) topoisomerase IIα. Pharmacokinetic parameters, such as absorption, distribution, metabolism, excretion, and toxicity, were also calculated, followed by an appraisal of the synthetic accessibility of the organic compounds. The strategy used in this study may provide insight into designing novel DNA topoisomerase IIα inhibitors, as well as inhibitors of other cancer targets. PMID:24516330
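The fit-and-r² machinery behind such a multiple-linear-regression QSAR model can be sketched in a few lines. The descriptor matrix and activities below are synthetic placeholders (not the xanthone data set), and NumPy is assumed to be available:

```python
import numpy as np

# Synthetic stand-in for a QSAR table: 20 compounds x 5 descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_beta = np.array([1.0, -0.5, 0.3, 0.0, 0.8])   # made-up "true" coefficients
y = X @ true_beta + 0.1 * rng.normal(size=20)       # activities with small noise

# Ordinary least squares with an intercept column, as in MLR-based QSAR.
Xb = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Coefficient of determination r^2 = 1 - SS_res / SS_tot.
y_hat = Xb @ beta
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))  # close to 1 for this low-noise synthetic data
```

An r² of 0.84, as reported in the abstract, would mean the five descriptors jointly explain 84% of the variance in the measured activities.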
Differentially Private Frequent Sequence Mining via Sampling-based Candidate Pruning
Xu, Shengzhi; Cheng, Xiang; Li, Zhengyi; Xiong, Li
2016-01-01
In this paper, we study the problem of mining frequent sequences under the rigorous differential privacy model. We explore the possibility of designing a differentially private frequent sequence mining (FSM) algorithm that can achieve both high data utility and a high degree of privacy. We found that, in differentially private FSM, the amount of noise required is proportional to the number of candidate sequences. If we could effectively reduce the number of unpromising candidate sequences, the utility-privacy tradeoff could be significantly improved. To this end, by leveraging a sampling-based candidate pruning technique, we propose a novel differentially private FSM algorithm, referred to as PFS2. The core of our algorithm is to utilize sample databases to further prune the candidate sequences generated based on the downward closure property. In particular, we use the noisy local support of candidate sequences in the sample databases to estimate which sequences are potentially frequent. To improve the accuracy of these private estimations, a sequence shrinking method is proposed to enforce the length constraint on the sample databases. Moreover, to decrease the probability of misestimating frequent sequences as infrequent, a threshold relaxation method is proposed to relax the user-specified threshold for the sample databases. Through formal privacy analysis, we show that our PFS2 algorithm is ε-differentially private. Extensive experiments on real datasets illustrate that PFS2 can privately find frequent sequences with high accuracy. PMID:26973430
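The core privacy step, perturbing candidate support counts with Laplace noise before threshold pruning, can be sketched as follows. This is a generic noisy-pruning illustration, not the PFS2 algorithm itself, and the sequences and counts are made up:

```python
import math
import random

random.seed(42)

def laplace_noise(scale):
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_frequent(candidates, supports, threshold, epsilon):
    """Keep candidates whose Laplace-noised support clears the threshold.
    With sensitivity 1 (each record affects each count by at most 1),
    noise scale 1/epsilon gives epsilon-DP for a single released count."""
    scale = 1.0 / epsilon
    return [c for c in candidates
            if supports[c] + laplace_noise(scale) >= threshold]

# Made-up candidate sequences and true supports:
supports = {"ab": 120, "abc": 95, "abcd": 8}
print(private_frequent(list(supports), supports, threshold=50, epsilon=1.0))
```

Because every candidate consumes privacy budget, the fewer unpromising candidates survive to this step, the less noise each count needs, which is exactly the tradeoff the paper's sampling-based pruning targets.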
Ling, Julia; Templeton, Jeremy Alan
2015-08-04
Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
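One of the three assumption breakdowns used to label the training data, the non-negativity of the eddy viscosity, can be sketched directly. The marker form below is a common choice assumed for illustration (the paper's feature set is richer), and the sample values are synthetic:

```python
def eddy_viscosity(reynolds_stress_uv, mean_shear):
    """'Exact' eddy viscosity implied by the Boussinesq relation
    -<u'v'> = nu_t * dU/dy, computed from high-fidelity data."""
    return -reynolds_stress_uv / mean_shear

def high_uncertainty(points):
    """Label each (<u'v'>, dU/dy) sample: True where nu_t < 0, i.e. where
    the non-negative eddy-viscosity assumption of RANS models breaks down."""
    return [eddy_viscosity(uv, shear) < 0.0 for uv, shear in points]

points = [(-0.02, 1.0),  # typical shear flow: <u'v'> < 0, dU/dy > 0 -> nu_t > 0
          (0.01, 1.0)]   # counter-gradient transport -> nu_t < 0, flagged
print(high_uncertainty(points))  # [False, True]
```

In the paper's setting, such labels (derived from DNS/LES data) become the classification targets, and the machine-learning models learn to predict them from RANS-computable features alone.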
Hans-Erik Andersen; Tobey Clarkin; Ken Winterberger; Jacob Strunk
2009-01-01
The accuracy of recreational- and survey-grade global positioning system (GPS) receivers was evaluated across a range of forest conditions in the Tanana Valley of interior Alaska. High-accuracy check points, established using high-order instruments and closed-traverse surveying methods, were then used to evaluate the accuracy of positions acquired in different forest...
A quality assurance phantom for the performance evaluation of volumetric micro-CT systems
NASA Astrophysics Data System (ADS)
Du, Louise Y.; Umoh, Joseph; Nikolov, Hristo N.; Pollmann, Steven I.; Lee, Ting-Yim; Holdsworth, David W.
2007-12-01
Small-animal imaging has recently become an area of increased interest because more human diseases can be modeled in transgenic and knockout rodents. As a result, micro-computed tomography (micro-CT) systems are becoming more common in research laboratories, due to their ability to achieve spatial resolution as high as 10 µm, giving highly detailed anatomical information. Most recently, a volumetric cone-beam micro-CT system using a flat-panel detector (eXplore Ultra, GE Healthcare, London, ON) has been developed that combines the high resolution of micro-CT and the fast scanning speed of clinical CT, so that dynamic perfusion imaging can be performed in mice and rats, providing functional physiological information in addition to anatomical information. This and other commercially available micro-CT systems all promise to deliver precise and accurate high-resolution measurements in small animals. However, no comprehensive quality assurance phantom has been developed to evaluate the performance of these micro-CT systems on a routine basis. We have designed and fabricated a single comprehensive device for the purpose of performance evaluation of micro-CT systems. This quality assurance phantom was applied to assess multiple image-quality parameters of a current flat-panel cone-beam micro-CT system accurately and quantitatively, in terms of spatial resolution, geometric accuracy, CT number accuracy, linearity, noise and image uniformity. Our investigations show that 3D images can be obtained with a limiting spatial resolution of 2.5 mm⁻¹ and noise of ±35 HU, using an acquisition interval of 8 s at an entrance dose of 6.4 cGy.
Reliable and valid assessment of point-of-care ultrasonography.
Todsen, Tobias; Tolsgaard, Martin Grønnebæk; Olsen, Beth Härstedt; Henriksen, Birthe Merete; Hillingsø, Jens Georg; Konge, Lars; Jensen, Morten Lind; Ringsted, Charlotte
2015-02-01
To explore the reliability and validity of the Objective Structured Assessment of Ultrasound Skills (OSAUS) scale for point-of-care ultrasonography (POC US) performance. POC US is increasingly used by clinicians and is an essential part of the management of acute surgical conditions. However, the quality of performance is highly operator-dependent. Therefore, reliable and valid assessment of trainees' ultrasonography competence is needed to ensure patient safety. Twenty-four physicians, representing novices, intermediates, and experts in POC US, scanned 4 different surgical patient cases in a controlled set-up. All ultrasound examinations were video-recorded and assessed by 2 blinded radiologists using OSAUS. Reliability was examined using generalizability theory. Construct validity was examined by comparing performance scores between the groups and by correlating physicians' OSAUS scores with diagnostic accuracy. The generalizability coefficient was high (0.81) and a D-study demonstrated that 1 assessor and 5 cases would result in similar reliability. The construct validity of the OSAUS scale was supported by a significant difference in the mean scores between the novice group (17.0; SD 8.4) and the intermediate group (30.0; SD 10.1), P = 0.007, as well as between the intermediate group and the expert group (72.9; SD 4.4), P = 0.04, and by a high correlation between OSAUS scores and diagnostic accuracy (Spearman ρ correlation coefficient = 0.76; P < 0.001). This study demonstrates high reliability as well as evidence of construct validity of the OSAUS scale for assessment of POC US competence. Hence, the OSAUS scale may be suitable for both in-training as well as end-of-training assessment.
The Value of Photographic Observations in Improving the Accuracy of Satellite Orbits.
1982-02-01
Large numbers of observations from the Russian AFU-75 cameras in the years 1971-1973 have recently become available, particularly of the balloon satellite Explorer 19, from the observing stations at Riga...
Ikeda, Hidetoshi; Abe, Takehiko; Watanabe, Kazuo
2010-04-01
Fifty to eighty percent of Cushing disease cases are diagnosed from typical endocrine responses. Recently, the number of diagnoses of Cushing disease without typical Cushing syndrome has been increasing; it is therefore important to improve localization of the adenoma and to make an early diagnosis. This study was undertaken to determine the present diagnostic accuracy for Cushing microadenoma and to compare the diagnostic accuracy of MR imaging and PET/MR imaging. During the past 3 years the authors analyzed the diagnostic accuracy in a series of 35 patients with Cushing adenoma verified by surgical pituitary exploration. All 35 cases of Cushing disease, including 20 cases of "overt" and 15 cases of "preclinical" Cushing disease, were studied. Superconductive MR images (1.5 or 3.0 T) and composite images from FDG-PET or methionine (MET)-PET and 3.0-T MR imaging were compared against the localization of adenomas verified at surgery. The diagnostic accuracy of superconductive MR imaging for detecting the localization of Cushing microadenoma was only 40%. The causes of the unsatisfactory results for superconductive MR imaging were false-negative results (10 cases), false-positive results (6 cases), and instances of double pituitary adenomas (3 cases). In contrast, the accuracy of microadenoma localization using MET-PET/3.0-T MR imaging was 100%, and that of FDG-PET/3.0-T MR imaging was 73%. Moreover, the adenoma location was better delineated on MET-PET/MR images than on FDG-PET/MR images. There was no significant difference in the maximum standard uptake value of adenomas evaluated by MET-PET between preclinical Cushing disease and overt Cushing disease. Composite MET-PET/3.0-T MR imaging improves the delineation of Cushing microadenoma and offers high-quality detectability for early-stage Cushing adenoma.
NASA Astrophysics Data System (ADS)
Zheng, Wei; Hsu, Hou-Tse; Zhong, Min; Yun, Mei-Juan
2012-10-01
The accuracy of the Earth's gravitational field measured from the gravity field and steady-state ocean circulation explorer (GOCE), up to degree 250, as influenced by the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij from satellite gravity gradiometry (SGG), is compared based on an analytical error model and on numerical simulation, respectively. Firstly, new analytical error models of the cumulative geoid height, influenced by the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij, are established. At degree 250, the GOCE cumulative geoid height error measured by the radial gravity gradient Vzz is about 2½ times higher than that measured by the three-dimensional gravity gradient Vij. Secondly, the Earth's gravitational field from GOCE, complete up to degree 250, is recovered using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij by numerical simulation, respectively. The results show that when the measurement error of the gravity gradient is 3 × 10^-12 s^-2, the cumulative geoid height errors using the radial gravity gradient Vzz and the three-dimensional gravity gradient Vij are 12.319 cm and 9.295 cm at degree 250, respectively. The accuracy of the cumulative geoid height using the three-dimensional gravity gradient Vij is improved by 30%-40% on average compared with that using the radial gravity gradient Vzz up to degree 250. Finally, by mutual verification of the analytical error model and the numerical simulation, the orders of magnitude of the accuracies of the Earth's gravitational field recovery show no substantial differences between the radial and three-dimensional gravity gradients. Therefore, it is feasible to develop in advance a radial cold-atom interferometric gradiometer with a measurement accuracy of 10^-13 to 10^-15 s^-2 for precisely producing the next-generation GOCE Follow-On Earth gravity field model with high spatial resolution.
Development and Performance of an Atomic Interferometer Gravity Gradiometer for Earth Science
NASA Astrophysics Data System (ADS)
Luthcke, S. B.; Saif, B.; Sugarbaker, A.; Rowlands, D. D.; Loomis, B.
2016-12-01
The wealth of multi-disciplinary science achieved from the GRACE mission, the commitment to GRACE Follow On (GRACE-FO), and Resolution 2 from the International Union of Geodesy and Geophysics (IUGG, 2015) highlight the importance of implementing a long-term satellite gravity observational constellation. Such a constellation would measure time variable gravity (TVG) with accuracies 50 times better than the first generation missions, at spatial and temporal resolutions that support regional and sub-basin scale multi-disciplinary science. Improved TVG measurements would achieve significant societal benefits including: forecasting of floods and droughts, improved estimates of climate impacts on the water cycle and ice sheets, coastal vulnerability, land management, risk assessment of natural hazards, and water management. To meet the accuracy and resolution challenge of the next generation gravity observational system, NASA GSFC and AOSense are currently developing an Atomic Interferometer Gravity Gradiometer (AIGG). This technology is capable of achieving the desired accuracy and resolution with a single instrument, exploiting the advantages of the microgravity environment. The AIGG development is funded under NASA's Earth Science Technology Office (ESTO) Instrument Incubator Program (IIP), and includes the design, build, and testing of a high-performance, single-tensor-component gravity gradiometer for TVG recovery from a satellite in low Earth orbit. The sensitivity per shot is 10^-5 Eötvös (E) with a flat spectral bandwidth from 0.3 mHz - 0.03 Hz. Numerical simulations show that a single space-based AIGG in a 326 km altitude polar orbit is capable of exceeding the IUGG target requirement for monthly TVG accuracy of 1 cm equivalent water height at 200 km resolution. We discuss the current status of the AIGG IIP development and estimated instrument performance, and we present results of simulated Earth TVG recovery for the space-based AIGG.
We explore the accuracy, and spatial and temporal resolution of surface mass change observations from several space-based implementations of the AIGG instrument, including various orbit configurations and multi-satellite/multi-orbit configurations.
Hao, Xiaohu; Zhang, Guijun; Zhou, Xiaogen
2018-04-01
Computing conformations, which is essential for associating structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. Consequently, the dimension of the protein conformational space should be reduced to a proper level, and an effective exploration algorithm is needed. In this paper, a plug-in method for guiding exploration in conformational feature space with Lipschitz underestimation (LUE) for ab-initio protein structure prediction is proposed. The conformational space is first converted into an ultrafast shape recognition (USR) feature space. Based on the USR feature space, the conformational space can be further converted into an underestimation space according to Lipschitz estimation theory to guide exploration. As a consequence of using the underestimation model, tight lower-bound estimates can guide exploration, invalid sampling areas can be eliminated in advance, and the number of energy function evaluations can be reduced. The proposed method provides a novel technique for solving the exploration problem of protein conformational space. LUE is applied to the differential evolution (DE) algorithm and to the Metropolis Monte Carlo (MMC) algorithm available in Rosetta; when LUE is applied to DE or MMC, candidate conformations are screened by the underestimation method prior to energy calculation and selection. Further, LUE is compared with DE and MMC by testing on 15 small-to-medium structurally diverse proteins. Test results show that near-native protein structures with higher accuracy can be obtained more rapidly and efficiently with the use of LUE. Copyright © 2018 Elsevier Ltd. All rights reserved.
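The screening idea described above can be sketched with a generic Lipschitz lower bound: given already-evaluated points, any candidate whose best possible energy cannot beat the current minimum is discarded before the expensive energy call. This is a minimal illustration of the principle, not the authors' LUE implementation; the point representation and the Lipschitz constant L are assumptions.

```python
import math

def lipschitz_lower_bound(x, samples, L):
    """Tightest Lipschitz lower bound on f at point x, given evaluated
    samples [(xi, f(xi)), ...] and Lipschitz constant L:
    f(x) >= max_i [ f(xi) - L * ||x - xi|| ]."""
    return max(fx - L * math.dist(x, xi) for xi, fx in samples)

def screen(candidates, samples, L, best_energy):
    """Keep only candidates whose underestimate beats the current best,
    so the expensive energy function is never called on hopeless points."""
    return [x for x in candidates
            if lipschitz_lower_bound(x, samples, L) < best_energy]
```

In a DE or MMC loop, `screen` would run on the trial population each generation; only survivors reach the energy function.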
High accuracy autonomous navigation using the global positioning system (GPS)
NASA Technical Reports Server (NTRS)
Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul
1997-01-01
The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be improved to 2 m if corrections are provided by the GPS wide area augmentation system.
Development of a Near Ground Remote Sensing System
Zhang, Yanchao; Xiao, Yuzhao; Zhuang, Zaichun; Zhou, Liping; Liu, Fei; He, Yong
2016-01-01
Unmanned Aerial Vehicles (UAVs) have shown great potential in agriculture and are increasingly being developed for agricultural use. Many experiments still need to be done to improve their performance and explore new uses, but experiments with UAVs are limited by conditions such as weather and location and by the time it takes to prepare for a flight. To promote UAV remote sensing, a near ground remote sensing platform was developed. This platform consists of three major parts: (1) mechanical structures such as a horizontal rail, vertical cylinder, and three-axis gimbal; (2) power supply and control parts; (3) onboard application components. The platform covers five degrees of freedom (DOFs): horizontal, vertical, pitch, roll, and yaw. An STM32 ARM microcontroller was used as the controller of the whole platform, and another STM32 MCU was used to stabilize the gimbal; the gimbal stabilizer communicates with the main controller via a CAN bus. A multispectral camera was mounted on the gimbal. Software written in C++ was developed as the graphical user interface; operating parameters were set and working status displayed via this software. To test how well the system works, a laser distance meter was used to measure the slide rail's repeat accuracy, and a 3-axis vibration analyzer was used to test system stability. Test results show that horizontal repeat accuracy was less than 2 mm, vertical repeat accuracy was less than 1 mm, and vibration was less than 2 g and remained at an acceptable level. The system has high accuracy and stability and can therefore be used for various near ground remote sensing studies. PMID:27164111
Cacho-Martínez, Pilar; García-Muñoz, Ángel; Ruiz-Cantero, María Teresa
2014-01-01
To analyze the diagnostic criteria used in the scientific literature published in the past 25 years for accommodative and nonstrabismic binocular dysfunctions, and to explore whether epidemiological analysis of diagnostic validity has been used to propose which clinical criteria should be used for diagnostic purposes. We carried out a systematic review of papers on accommodative and nonstrabismic binocular disorders published from 1986 to 2012, searching the MEDLINE, CINAHL, PsycINFO and FRANCIS databases. We included original articles on the diagnosis of these anomalies in any population. We identified 839 articles, of which 12 studies were included. The quality of the included articles was assessed using the QUADAS-2 tool. The review shows a wide range of clinical signs and cut-off points between authors. Only 3 studies (regarding accommodative anomalies) assessed the diagnostic accuracy of clinical signs. Their results suggest using accommodative amplitude and monocular accommodative facility for diagnosing accommodative insufficiency, and a high positive relative accommodation for accommodative excess. The remaining 9 articles did not analyze diagnostic accuracy, instead assigning diagnoses using whatever criteria the authors considered appropriate. We also found differences between studies in how patients' symptomatology was considered: 3 of the 12 studies analyzed performed a validation of a symptom survey used for convergence insufficiency. The scientific literature thus reveals differences between authors in the diagnostic criteria for accommodative and nonstrabismic binocular dysfunctions. Diagnostic accuracy studies provide some evidence only for accommodative conditions; for binocular anomalies the only evidence is a validated questionnaire for convergence insufficiency, with no data on diagnostic accuracy. Copyright © 2012 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.
Computer-aided Assessment of Regional Abdominal Fat with Food Residue Removal in CT
Makrogiannis, Sokratis; Caturegli, Giorgio; Davatzikos, Christos; Ferrucci, Luigi
2014-01-01
Rationale and Objectives: Separate quantification of abdominal subcutaneous and visceral fat regions is essential to understand the role of regional adiposity as a risk factor in epidemiological studies. Fat quantification is often based on computed tomography (CT) because fat density is distinct from other tissue densities in the abdomen. However, the presence of intestinal food residues with densities similar to fat may reduce fat quantification accuracy. We introduce an abdominal fat quantification method in CT with interest in food residue removal. Materials and Methods: Total fat was identified in the feature space of Hounsfield units and divided into subcutaneous and visceral components using model-based segmentation. Regions of food residues were identified and removed from visceral fat using a machine learning method integrating intensity, texture, and spatial information. Cost-weighting and bagging techniques were investigated to address class imbalance. Results: We validated our automated food residue removal technique against semimanual quantifications. Our feature selection experiments indicated that joint intensity and texture features produce the highest classification accuracy at 95%. We explored generalization capability using k-fold cross-validation and receiver operating characteristic (ROC) analysis with variable k. Losses in accuracy and area under the ROC curve between maximum and minimum k were limited to 0.1% and 0.3%. We validated tissue segmentation against reference semimanual delineations. The Dice similarity scores were as high as 93.1 for subcutaneous fat and 85.6 for visceral fat. Conclusions: Computer-aided regional abdominal fat quantification is a reliable computational tool for large-scale epidemiological studies. Our proposed intestinal food residue reduction scheme is an original contribution of this work. Validation experiments indicate very good accuracy and generalization capability. PMID:24119354
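The k-fold generalization check used above can be sketched generically: partition the sample indices into k disjoint folds, train on k-1 folds, and score on the held-out fold. This is an illustrative sketch, not the study's classifier; the classifier is supplied by the caller and the function names are hypothetical.

```python
import random

def kfold_indices(n, k, seed=0):
    """Partition sample indices 0..n-1 into k disjoint, shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(xs, ys, train_fn, predict_fn, k=5):
    """Mean held-out accuracy over k folds; train_fn builds a model from
    training data, predict_fn applies it to one sample."""
    folds = kfold_indices(len(xs), k)
    accs = []
    for fold in folds:
        held_out = set(fold)
        tr_x = [x for i, x in enumerate(xs) if i not in held_out]
        tr_y = [y for i, y in enumerate(ys) if i not in held_out]
        model = train_fn(tr_x, tr_y)
        hits = sum(predict_fn(model, xs[i]) == ys[i] for i in fold)
        accs.append(hits / len(fold))
    return sum(accs) / k
```

Varying `k` and comparing the resulting scores is one way to probe the stability reported above (0.1% accuracy loss between maximum and minimum k).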
Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph
2016-01-01
Hybrids are broadly used in plant breeding, and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and to test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy, and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance, and null estimates were obtained for AA and AD effects. This study highlights the advantages of progeny models using genome-wide information. PMID:26328760
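AIC comparisons like the one above are often made concrete with Akaike weights, which convert AIC differences into relative model support. A minimal sketch of the standard formula; the AIC values in the test are hypothetical, not the paper's.

```python
import math

def akaike_weights(aics):
    """Relative support for each model from its AIC value:
    delta_i = AIC_i - min(AIC); w_i proportional to exp(-delta_i / 2)."""
    amin = min(aics)
    rel = [math.exp(-(a - amin) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]
```

A model with an AIC 10 or more above the best receives essentially no weight, which is why "lower AIC" is a meaningful ranking criterion.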
The diagnostic accuracy of the MyDiagnostick to detect atrial fibrillation in primary care
2014-01-01
Background Atrial fibrillation is very common in people aged 65 or older. This condition increases the risk of death, congestive heart failure and thromboembolic conditions. Many patients with atrial fibrillation are asymptomatic and a cerebrovascular accident (CVA) is often the first clinical presentation. Guidelines concerning the prevention of CVA recommend monitoring the heart rate in patients aged 65 or older. Recently, the MyDiagnostick (Applied Biomedical Systems BV, Maastricht, The Netherlands) was introduced as a new screening tool which might serve as an alternative for the less accurate pulse palpation. This study was designed to explore the diagnostic accuracy of the MyDiagnostick for the detection of atrial fibrillation. Methods A phase II diagnostic accuracy study in a convenience sample of 191 subjects recruited in primary care. The majority of participants were patients with a known history of atrial fibrillation (n = 161). Readings of the MyDiagnostick were compared with electrocardiographic recordings. Sensitivity and specificity and their 95% confidence interval were calculated using 2x2 tables. Results A prevalence of 54% for an atrial fibrillation rhythm was found in the study population at the moment of the study. A combination of three measurements with the MyDiagnostick for each patient showed a sensitivity of 94% (95% CI 87 – 98) and a specificity of 93% (95% CI 85 – 97). Conclusion The MyDiagnostick is an easy-to-use device that showed a good diagnostic accuracy with a high sensitivity and specificity for atrial fibrillation in a convenience sample in primary care. Future research is needed to determine the place of the MyDiagnostick in possible screening or case-finding strategies for atrial fibrillation. PMID:24913608
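Sensitivity, specificity, and their 95% confidence intervals from a 2x2 table, as reported above, can be computed as follows. This is a sketch using the Wilson score interval; the study's exact cell counts are not given in the abstract, so the counts in the example are hypothetical (chosen to reproduce 94%/93%).

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% interval for a proportion; better behaved than the
    normal approximation when the proportion is near 0 or 1."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 diagnostic table."""
    return tp / (tp + fn), tn / (tn + fp)
```

With hypothetical counts of 94 true positives out of 100 AF rhythms, the Wilson interval is close to the 87-98% range quoted above.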
[Automated Assessment for Bone Age of Left Wrist Joint in Uyghur Teenagers by Deep Learning].
Hu, T H; Huo, Z; Liu, T A; Wang, F; Wan, L; Wang, M W; Chen, T; Wang, Y H
2018-02-01
To realize automated bone age assessment by applying deep learning to digital radiography (DR) image recognition of the left wrist joint in Uyghur teenagers, and to explore its practical value in forensic bone age assessment. Pre-processed X-ray films of the left wrist joint, taken from 245 male and 227 female Uyghur teenagers in the Uygur Autonomous Region aged 13.0 to 19.0 years, were chosen as subjects, and AlexNet was used as the regression model for image recognition. From the total samples above, 60% of the male and female DR images of the left wrist joint were selected as the training set, and 10% of the samples were selected as the validation set. The remaining 30% were used as the test set to obtain the image recognition accuracy within error ranges of ±1.0 and ±0.7 years of the true age. The modelling results of the deep learning algorithm showed that for error ranges of ±1.0 and ±0.7 years, the accuracy on the training set was 81.4% and 75.6% in males, and 80.5% and 74.8% in females, respectively. For error ranges of ±1.0 and ±0.7 years, the accuracy on the test set was 79.5% and 71.2% in males, and 79.4% and 66.2% in females, respectively. The combination of bone age research on teenagers' left wrist joints and deep learning, which has high accuracy and good feasibility, can form the basis of an automated bone age assessment system for the remaining joints of the body. Copyright© by the Editorial Department of Journal of Forensic Medicine.
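The accuracy metric used above (fraction of predictions within ±1.0 or ±0.7 years of the true age) is simple to state in code; a minimal sketch with illustrative ages, not the study's data.

```python
def accuracy_within(pred_ages, true_ages, tol):
    """Fraction of predictions whose absolute error is at most tol years."""
    hits = sum(abs(p - t) <= tol for p, t in zip(pred_ages, true_ages))
    return hits / len(true_ages)
```

For example, predictions [14.2, 15.9, 17.5] against true ages [15.0, 16.0, 16.0] give 2/3 accuracy at a ±1.0-year tolerance.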
Xu, Y.; Xia, J.; Miller, R.D.
2007-01-01
The need for incorporating the traction-free condition at the air-earth boundary for finite-difference modeling of seismic wave propagation has been discussed widely. A new implementation has been developed for simulating elastic wave propagation in which the free-surface condition is replaced by an explicit acoustic-elastic boundary. Detailed comparisons of seismograms with different implementations for the air-earth boundary were undertaken using the (2,2) (the finite-difference operators are second order in time and space) and the (2,6) (second order in time and sixth order in space) standard staggered-grid (SSG) schemes. Methods used in these comparisons to define the air-earth boundary included the stress image method (SIM), the heterogeneous approach, the scheme of modifying material properties based on a transversely isotropic medium approach, the acoustic-elastic boundary approach, and an analytical approach. The method proposed achieves the same or higher accuracy of modeled body waves relative to the SIM. Rayleigh waves calculated using the explicit acoustic-elastic boundary approach differ slightly from those calculated using the SIM. Numerical results indicate that when using the (2,2) SSG scheme for SIM and our new method, a spatial step of 16 points per minimum wavelength is sufficient to achieve 90% accuracy; 32 points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. When using the (2,6) SSG scheme for the two methods, a spatial step of eight points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. Our proposed method is physically reasonable and, based on dispersive analysis of simulated seismograms from a layered half-space model, is highly accurate. As a bonus, our proposed method is easy to program and slightly faster than the SIM. © 2007 Society of Exploration Geophysicists.
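The points-per-minimum-wavelength rules quoted above translate directly into a maximum grid spacing: dx <= lambda_min / n with lambda_min = v_min / f_max. A minimal sketch; the velocity and frequency values in the example are illustrative, not from the paper.

```python
def max_grid_spacing(v_min, f_max, points_per_wavelength):
    """Largest spatial step dx honoring the sampling rule
    dx <= lambda_min / n, where lambda_min = v_min / f_max."""
    return v_min / (f_max * points_per_wavelength)
```

For instance, with a minimum (Rayleigh-dominated) velocity of 200 m/s and a 25 Hz maximum frequency, the 16-point rule of the (2,2) scheme caps dx at 0.5 m, while the 8-point rule of the (2,6) scheme allows 1 m.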
Salimi, Nima; Loh, Kar Hoe; Kaur Dhillon, Sarinder; Chong, Ving Ching
2016-01-01
Background. Fish species may be identified based on their unique otolith shape or contour. Several pattern recognition methods have been proposed to classify fish species through morphological features of the otolith contours. However, there has been no fully-automated species identification model with an accuracy higher than 80%. The purpose of the current study is to develop a fully-automated model, based on the otolith contours, to identify fish species with high classification accuracy. Methods. Images of the right sagittal otoliths of 14 fish species from three families, namely Sciaenidae, Ariidae, and Engraulidae, were used to develop the proposed identification model. Short-time Fourier transform (STFT) was used, for the first time in the area of otolith shape analysis, to extract important features of the otolith contours. Discriminant Analysis (DA), as a classification technique, was used to train and test the model based on the extracted features. Results. Performance of the model was demonstrated using species from the three families separately, as well as all species combined. Overall classification accuracy of the model was greater than 90% for all cases. In addition, effects of STFT variables on the performance of the identification model were explored in this study. Conclusions. Short-time Fourier transform could determine important features of the otolith outlines. The fully-automated model proposed in this study (STFT-DA) could predict the species of an unknown specimen with acceptable identification accuracy. The model codes can be accessed at http://mybiodiversityontologies.um.edu.my/Otolith/ and https://peerj.com/preprints/1517/. The current model has the flexibility to be used for more species and families in future studies.
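The STFT feature extraction described above can be sketched on a 1-D contour signature (e.g. otolith radius versus angle): slide a window along the signature and keep low-order Fourier magnitudes per frame. This is a generic illustration, not the STFT-DA code; the window length, hop, and coefficient count are assumptions.

```python
import cmath
import math

def stft_features(signal, win=16, hop=8, n_coeffs=4):
    """Short-time DFT magnitude features of a 1-D contour signature.
    Returns low-order spectral magnitudes per frame."""
    feats = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win]
        # Hann window to reduce spectral leakage at frame edges
        frame = [x * (0.5 - 0.5 * math.cos(2 * math.pi * i / (win - 1)))
                 for i, x in enumerate(frame)]
        for k in range(1, n_coeffs + 1):  # skip DC; keep low harmonics
            c = sum(x * cmath.exp(-2j * math.pi * k * i / win)
                    for i, x in enumerate(frame))
            feats.append(abs(c) / win)
    return feats
```

The resulting feature vector would then feed a discriminant analysis classifier, as in the study.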
Pires, Gabriel; Nunes, Urbano; Castelo-Branco, Miguel
2012-06-01
Non-invasive brain-computer interface (BCI) based on electroencephalography (EEG) offers a new communication channel for people suffering from severe motor disorders. This paper presents a novel P300-based speller called lateral single-character (LSC). The LSC performance is compared to that of the standard row-column (RC) speller. We developed LSC, a single-character paradigm comprising all letters of the alphabet following an event strategy that significantly reduces the time for symbol selection, and explores the intrinsic hemispheric asymmetries in visual perception to improve the performance of the BCI. RC and LSC paradigms were tested by 10 able-bodied participants, seven participants with amyotrophic lateral sclerosis (ALS), five participants with cerebral palsy (CP), one participant with Duchenne muscular dystrophy (DMD), and one participant with spinal cord injury (SCI). The averaged results, taking into account all participants who were able to control the BCI online, were significantly higher for LSC, 26.11 bit/min and 89.90% accuracy, than for RC, 21.91 bit/min and 88.36% accuracy. The two paradigms produced different waveforms and the signal-to-noise ratio was significantly higher for LSC. Finally, the novel LSC also showed new discriminative features. The results suggest that LSC is an effective alternative to RC, and that LSC still has a margin for potential improvement in bit rate and accuracy. The high bit rates and accuracy of LSC are a step forward for the effective use of BCI in clinical applications. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
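Bit rates like those reported above are conventionally derived from the symbol count and classification accuracy via the Wolpaw formula; multiplying bits per selection by selections per minute gives bit/min. A minimal sketch; the symbol count N in the test is an assumption (the abstract does not state the LSC matrix size).

```python
import math

def wolpaw_bits_per_selection(n_symbols, accuracy):
    """Information transferred per selection for an N-symbol speller with
    classification accuracy P (Wolpaw formula):
    B = log2 N + P log2 P + (1 - P) log2((1 - P) / (N - 1))."""
    n, p = n_symbols, accuracy
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))
```

With a hypothetical 26-symbol alphabet and the 89.90% accuracy quoted for LSC, each selection carries roughly 3.76 bits; the reported 26.11 bit/min then implies the selection rate.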
Evaluating the accuracy of orthophotos and 3D models from UAV photogrammetry
NASA Astrophysics Data System (ADS)
Julge, Kalev; Ellmann, Artu
2015-04-01
Rapid development of unmanned aerial vehicles (UAV) in recent years has made their use for various applications more feasible. This contribution evaluates the accuracy and quality of different UAV remote sensing products (i.e. orthorectified images, point clouds and 3D models). Two different autonomous fixed-wing UAV systems were used to collect the aerial photographs: one is a mass-produced commercial UAV system; the other is a similar state-of-the-art system. Three different study areas with varying sizes and characteristics (including urban areas, forests, fields, etc.) were surveyed. The UAV point clouds, 3D models and orthophotos were generated with three different commercial and freeware software packages, and the performance of each was evaluated. The effect of flying height on the accuracy of the results was explored, as well as the optimum number and placement of ground control points. The results achieved when the only georeferencing data originate from the UAV system's on-board GNSS and inertial measurement unit are also investigated. Problems regarding the alignment of certain types of aerial photos (e.g. captured over forested areas) are discussed. The quality and accuracy of UAV photogrammetry products are evaluated by comparing them with control measurements made with GNSS on the ground, as well as with high-resolution airborne laser scanning data and other available orthophotos (e.g. those acquired for large-scale national mapping). Vertical comparisons are made on surfaces that have remained unchanged in all campaigns, e.g. paved roads. Planar comparisons are performed by control surveys of objects that are clearly identifiable on orthophotos. The statistics of these differences are used to evaluate the accuracy of UAV remote sensing. Some recommendations are given on how to conduct UAV mapping campaigns cost-effectively and with minimal time consumption while still ensuring the quality and accuracy of the UAV data products.
The benefits and drawbacks of UAV remote sensing compared with more traditional methods (e.g. national mapping from airplanes, or direct measurements on the ground with GNSS devices or total stations) are also outlined.
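Vertical and planar comparisons of the kind described above are typically summarized by the root-mean-square error over control points; a minimal sketch (the error values in the example are illustrative, not from the study).

```python
import math

def rmse(errors):
    """Root-mean-square of control-point differences, e.g. UAV-derived
    surface height minus GNSS height on unchanged paved roads."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

The same statistic applies to planar offsets by feeding it the horizontal distances between surveyed and orthophoto-identified positions.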
Bai, Ou; Lin, Peter; Vorbach, Sherry; Li, Jiang; Furlani, Steve; Hallett, Mark
2007-12-01
To explore effective combinations of computational methods for the prediction of movement intention preceding the production of self-paced right and left hand movements from single trial scalp electroencephalogram (EEG). Twelve naïve subjects performed self-paced movements consisting of three key strokes with either hand. EEG was recorded from 128 channels. The exploration was performed offline on single trial EEG data. We proposed that a successful computational procedure for classification would consist of spatial filtering, temporal filtering, feature selection, and pattern classification. A systematic investigation was performed with combinations of spatial filtering using principal component analysis (PCA), independent component analysis (ICA), common spatial patterns analysis (CSP), and surface Laplacian derivation (SLD); temporal filtering using power spectral density estimation (PSD) and discrete wavelet transform (DWT); pattern classification using linear Mahalanobis distance classifier (LMD), quadratic Mahalanobis distance classifier (QMD), Bayesian classifier (BSC), multi-layer perceptron neural network (MLP), probabilistic neural network (PNN), and support vector machine (SVM). A robust multivariate feature selection strategy using a genetic algorithm was employed. The combinations of spatial filtering using ICA and SLD, temporal filtering using PSD and DWT, and classification methods using LMD, QMD, BSC and SVM provided higher performance than those of other combinations. Utilizing one of the better combinations of ICA, PSD and SVM, the discrimination accuracy was as high as 75%. Further feature analysis showed that beta band EEG activity of the channels over right sensorimotor cortex was most appropriate for discrimination of right and left hand movement intention. Effective combinations of computational methods provide possible classification of human movement intention from single trial EEG. 
Such a method could be the basis for a potential brain-computer interface based on human natural movement, which might reduce the requirement of long-term training. Effective combinations of computational methods can classify human movement intention from single trial EEG with reasonable accuracy.
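One of the better-performing classifiers above, the linear Mahalanobis distance classifier (LMD), assigns each trial to the class whose mean is nearest under the pooled-covariance metric. A toy two-feature sketch of that idea, not the study's 128-channel pipeline; the explicit 2x2 inverse keeps it dependency-free.

```python
def mean2(xs):
    """Mean of a list of 2-feature samples."""
    n = len(xs)
    return [sum(x[0] for x in xs) / n, sum(x[1] for x in xs) / n]

def pooled_cov2(a, b):
    """Pooled 2x2 covariance of two classes of 2-feature samples."""
    def scatter(xs, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x in xs:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s
    sa = scatter(a, mean2(a))
    sb = scatter(b, mean2(b))
    n = len(a) + len(b) - 2
    return [[(sa[i][j] + sb[i][j]) / n for j in range(2)] for i in range(2)]

def mahalanobis2(x, m, cov):
    """Squared Mahalanobis distance using the explicit 2x2 inverse."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    d = [x[0] - m[0], x[1] - m[1]]
    return sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))
```

A trial is labeled "right hand" or "left hand" by comparing its distance to the two class means; in practice the features would be, e.g., beta-band power over sensorimotor channels.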
Adverse Effects in Dual-Star Interferometry
NASA Technical Reports Server (NTRS)
Colavita, M. Mark
2008-01-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, exploiting the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews: the key aspects of the dual-star approach and implementation; the main contributors to the
Remote sensing of atmospheric aerosols with the SPEX spectropolarimeter
NASA Astrophysics Data System (ADS)
van Harten, G.; Rietjens, J.; Smit, M.; Snik, F.; Keller, C. U.; di Noia, A.; Hasekamp, O.; Vonk, J.; Volten, H.
2013-12-01
Characterizing atmospheric aerosols is key to understanding their influence on climate through their direct and indirect radiative forcing. This requires long-term global coverage, at high spatial (~km) and temporal (~days) resolution, which can only be provided by satellite remote sensing. Aerosol load and properties such as particle size, shape and chemical composition can be derived from multi-wavelength radiance and polarization measurements of sunlight that is scattered by the Earth's atmosphere at different angles. The required polarimetric accuracy of ~10^(-3) is very challenging, particularly since the instrument is located on a rapidly moving platform. Our Spectropolarimeter for Planetary EXploration (SPEX) is based on a novel, snapshot spectral modulator, with the intrinsic ability to measure polarization at high accuracy. It exhibits minimal instrumental polarization and is completely solid-state and passive. An athermal set of birefringent crystals in front of an analyzer encodes the incoming linear polarization into a sinusoidal modulation in the intensity spectrum. Moreover, a dual beam implementation yields redundancy that allows for a mutual correction in both the spectrally and spatially modulated data to increase the measurement accuracy. A partially polarized calibration stimulus has been developed, consisting of a carefully depolarized source followed by tilted glass plates to induce polarization in a controlled way. Preliminary calibration measurements show an accuracy of SPEX of well below 10^(-3), with a sensitivity limit of 2*10^(-4). We demonstrate the potential of the SPEX concept by presenting retrievals of aerosol properties based on clear sky measurements using a prototype satellite instrument and a dedicated ground-based SPEX. The retrieval algorithm, originally designed for POLDER data, performs iterative fitting of aerosol properties and surface albedo, where the initial guess is provided by a look-up table. 
The retrieved aerosol properties, including aerosol optical thickness, single scattering albedo, size distribution and complex refractive index, will be compared with the on-site AERONET sun-photometer, lidar, particle counter and sizer, and PM10 and PM2.5 monitoring instruments. Retrievals of the aerosol layer height based on polarization measurements in the O2A absorption band will be compared with lidar profiles. Furthermore, the possibility of enhancing the retrieval accuracy by replacing the look-up table with a neural network based initial guess will be discussed, using retrievals from simulated ground-based data.
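The spectral modulation principle described in this record can be written compactly. The following is a sketch of the standard dual-beam modulated-intensity expression for this type of spectropolarimeter; the symbols (δ for the crystal retardance, P_L and φ_L for the degree and angle of linear polarization) are assumptions based on the description above, not taken verbatim from this record:

```latex
I_{\pm}(\lambda) = \tfrac{1}{2} I_0(\lambda)
\left[ 1 \pm P_L(\lambda)\cos\!\left(\frac{2\pi\,\delta}{\lambda} + 2\phi_L(\lambda)\right) \right]
```

The ± sign corresponds to the two beams of the dual-beam implementation: their sum recovers the intensity spectrum I₀(λ), while their difference isolates the polarization-carrying modulation, which is the redundancy the abstract refers to.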
Jiang, Jie; Yu, Wenbo; Zhang, Guangjun
2017-01-01
Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for an accuracy assessment of an INS in a real work environment are exceedingly urgent because of enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real complex dynamic environment. However, the coupled systematic coordinate errors of an INS and the IHDST severely decrease the attitude assessment accuracy of an INS. Given that, a high-accuracy decoupling estimation method of the above systematic coordinate errors based on the constrained least squares (CLS) method is proposed in this paper. The reference frame of the IHDST is firstly converted to be consistent with that of the INS because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established and the CLS-based optimization method is utilized to estimate errors accurately. After compensating for error, the attitude accuracy of an INS can be assessed based on IHDST accurately. Both simulated experiments and real flight experiments of aircraft are conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
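The paper's decoupling estimator is specific to its coordinate-error model, but its core tool, constrained least squares, has a generic form: minimize ||Ax - b||² subject to Cx = d, solvable via the KKT system. A minimal illustrative sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def constrained_least_squares(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d via the KKT system
    [[2 A^T A, C^T], [C, 0]] [x; lam] = [2 A^T b; d]."""
    n = A.shape[1]
    m = C.shape[0]
    K = np.block([[2.0 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]          # discard the Lagrange multipliers
```

For example, projecting b = (1, 2, 3) onto the constraint "components sum to zero" with A = I yields (-1, 0, 1), i.e., b minus its mean.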
Wallace, Jonathan; Wang, Martha O; Thompson, Paul; Busso, Mallory; Belle, Vaijayantee; Mammoser, Nicole; Kim, Kyobum; Fisher, John P; Siblani, Ali; Xu, Yueshuo; Welter, Jean F; Lennon, Donald P; Sun, Jiayang; Caplan, Arnold I; Dean, David
2014-03-01
This study tested the accuracy of tissue engineering scaffold rendering via the continuous digital light processing (cDLP) light-based additive manufacturing technology. High accuracy (i.e., <50 µm) allows the designed performance of features relevant to three scale spaces: cell-scaffold, scaffold-tissue, and tissue-organ interactions. The biodegradable polymer poly(propylene fumarate) was used to render highly accurate scaffolds through the use of a dye-initiator package, TiO2 and bis(2,4,6-trimethylbenzoyl)phenylphosphine oxide. This dye-initiator package facilitates high accuracy in the Z dimension. Linear, round, and right-angle features were measured to gauge accuracy. Most features deviated from the design by only 5.4-15%. However, one feature, an 800 µm diameter circular pore, exhibited a 35.7% average reduction of patency. Light scattered in the x, y directions by the dye may have reduced this feature's accuracy. Our new fine-grained understanding of accuracy could be used to make further improvements by including corrections in the scaffold design software. Successful cell attachment occurred with both canine and human mesenchymal stem cells (MSCs). Highly accurate cDLP scaffold rendering is critical to the design of scaffolds that both guide bone regeneration and fully resorb. Scaffold resorption must occur for regenerated bone to be remodeled and, thereby, achieve optimal strength.
Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P
2014-04-16
Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R(2) = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R(2) = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. 
Conversely, using a very small reference panel of haplotypes to impute both training animals and selection candidates results in lower accuracy of genomic evaluation.
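The evaluation scheme described above, ridge regression on genotypes with accuracy measured as the correlation between observed and predicted values in 10-fold cross validation, can be sketched minimally as follows. This is an illustration of the generic technique, not the authors' full animal-centric model with de-regressed breeding values:

```python
import numpy as np

def ridge_predict(X_train, y_train, X_test, lam=1.0):
    """Closed-form ridge regression: beta = (X'X + lam I)^-1 X'y."""
    p = X_train.shape[1]
    beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p),
                           X_train.T @ y_train)
    return X_test @ beta

def cv_accuracy(X, y, k=10, lam=1.0, seed=0):
    """Accuracy = correlation of observed vs predicted over k CV folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    preds = np.empty(len(y), dtype=float)
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        preds[fold] = ridge_predict(X[train], y[train], X[fold], lam)
    return np.corrcoef(y, preds)[0, 1]
```

With a strong simulated signal, the cross-validated correlation approaches 1; with imputed (noisier) genotypes, X effectively carries measurement error and the correlation drops, which is the effect the study quantifies.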
Neutron star radii, universal relations, and the role of prior distributions
Steiner, Andrew W.; Lattimer, James M.; Brown, Edward F.
2016-02-02
We investigate constraints on neutron star structure arising from the assumptions that neutron stars have crusts, that recent calculations of pure neutron matter limit the equation of state of neutron star matter near the nuclear saturation density, that the high-density equation of state is limited by causality and the largest high-accuracy neutron star mass measurement, and that general relativity is the correct theory of gravity. We explore the role of prior assumptions by considering two classes of equation of state models. In the first class, the intermediate- and high-density behavior of the equation of state is parameterized by piecewise polytropes. In the second class, the high-density behavior of the equation of state is parameterized by piecewise continuous line segments. The smallest density at which high-density matter appears is varied in order to allow for strong phase transitions above the nuclear saturation density. We critically examine correlations among the pressure of matter, radii, maximum masses, the binding energy, the moment of inertia, and the tidal deformability, paying special attention to the sensitivity of these correlations to prior assumptions about the equation of state. It is possible to constrain the radii of 1.4 solar mass neutron stars to be larger than 10 km, even without consideration of additional astrophysical observations, for example, those from photospheric radius expansion bursts or quiescent low-mass X-ray binaries. We are able to improve the accuracy of known correlations between the moment of inertia and compactness as well as the binding energy and compactness. Furthermore, we also demonstrate the existence of a correlation between the neutron star binding energy and the moment of inertia.
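A piecewise-polytropic parameterization of the kind mentioned above is commonly written as follows; this is a sketch of the standard form, not necessarily the exact parameterization used in this work:

```latex
P(\rho) = K_i\,\rho^{\Gamma_i}, \qquad \rho_{i-1} \le \rho < \rho_i,
\qquad K_{i+1} = \frac{P(\rho_i)}{\rho_i^{\,\Gamma_{i+1}}}
```

Continuity of the pressure at each dividing density ρ_i fixes each subsequent constant K_{i+1} once K_1 and the adiabatic indices Γ_i are chosen, so the model is specified by the dividing densities and the indices alone.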
NASA Astrophysics Data System (ADS)
Hynek, Bernhard; Binder, Daniel; Boffi, Geo; Schöner, Wolfgang; Verhoeven, Geert
2014-05-01
Terrestrial photogrammetry was the standard method for mapping high mountain terrain in the early days of mountain cartography, until it was replaced by aerial photogrammetry and airborne laser scanning. Modern low-price digital single-lens reflex (DSLR) cameras and highly automatic and cheap digital computer vision software with automatic image matching and multiview-stereo routines suggest the rebirth of terrestrial photogrammetry, especially in remote regions, where airborne surveying methods are expensive due to high flight costs. Terrestrial photogrammetry and modern automated image matching is widely used in geodesy, however, its application in glaciology is still rare, especially for surveying ice bodies at the scale of some km², which is typical for valley glaciers. In August 2013 a terrestrial photogrammetric survey was carried out on Freya Glacier, a 6km² valley glacier next to Zackenberg Research Station in NE-Greenland, where a detailed glacier mass balance monitoring was initiated during the last IPY. Photos with a consumer grade digital camera (Nikon D7100) were taken from the ridges surrounding the glacier. To create a digital elevation model, the photos were processed with the software photoscan. A set of ~100 dGPS surveyed ground control points on the glacier surface was used to georeference and validate the final DEM. Aim of this study was to produce a high resolution and high accuracy DEM of the actual surface topography of the Freya glacier catchment with a novel approach and to explore the potential of modern low-cost terrestrial photogrammetry combined with state-of-the-art automated image matching and multiview-stereo routines for glacier monitoring and to communicate this powerful and cheap method within the environmental research and glacier monitoring community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apai, Dániel; Skemer, Andrew; Hanson, Jake R.
Time-resolved photometry is an important new probe of the physics of condensate clouds in extrasolar planets and brown dwarfs. Extreme adaptive optics systems can directly image planets, but precise brightness measurements are challenging. We present VLT/SPHERE high-contrast, time-resolved broad H-band near-infrared photometry for four exoplanets in the HR 8799 system, sampling changes from night to night over five nights with relatively short integrations. The photospheres of these four planets are often modeled by patchy clouds and may show large-amplitude rotational brightness modulations. Our observations provide high-quality images of the system. We present a detailed performance analysis of different data analysis approaches to accurately measure the relative brightnesses of the four exoplanets. We explore the information in satellite spots and demonstrate their use as a proxy for image quality. While the brightness variations of the satellite spots are strongly correlated, we also identify a second-order anti-correlation pattern between the different spots. Our study finds that KLIP reduction based on principal components analysis with satellite-spot-modulated artificial-planet-injection-based photometry leads to a significant (∼3×) gain in photometric accuracy over standard aperture-based photometry and reaches 0.1 mag per point accuracy for our data set, the signal-to-noise ratio of which is limited by small field rotation. Relative planet-to-planet photometry can be compared between nights, enabling observations spanning multiple nights to probe variability. Recent high-quality relative H-band photometry of the b–c planet pair agrees to about 1%.
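The core of a KLIP-style reduction is projecting each science frame onto the leading principal components of a reference library and subtracting that projection, leaving a stellar-PSF-suppressed residual. A minimal sketch of that step (illustrative only, not the authors' pipeline, which adds satellite-spot modulation and artificial planet injection):

```python
import numpy as np

def klip_subtract(science, refs, k=5):
    """Subtract the projection of a science frame onto the top-k principal
    components of a mean-subtracted reference library.
    science: flattened frame (npix,); refs: library (nref, npix)."""
    mean = refs.mean(axis=0)
    R = refs - mean
    # principal components of the library via SVD (rows of Vt are orthonormal)
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    Z = Vt[:k]                     # (k, npix) orthonormal PC basis
    s = science - mean
    model = Z.T @ (Z @ s)          # orthogonal projection onto the PC subspace
    return s - model               # residual with the common PSF removed
```

Because the subtraction is an orthogonal projection, the residual can never have a larger norm than the mean-subtracted input; choosing k trades PSF suppression against self-subtraction of the planet signal.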
Shokri, Abbas; Eskandarloo, Amir; Norouzi, Marouf; Poorolajal, Jalal; Majidi, Gelareh; Aliyaly, Alireza
2018-03-01
This study compared the diagnostic accuracy of cone-beam computed tomography (CBCT) scans obtained with 2 CBCT systems with high- and low-resolution modes for the detection of root perforations in endodontically treated mandibular molars. The root canals of 72 mandibular molars were cleaned and shaped. Perforations measuring 0.2, 0.3, and 0.4 mm in diameter were created at the furcation area of 48 roots, simulating strip perforations, or on the external surfaces of 48 roots, simulating root perforations. Forty-eight roots remained intact (control group). The roots were filled using gutta-percha (Gapadent, Tianjin, China) and AH26 sealer (Dentsply Maillefer, Ballaigues, Switzerland). The CBCT scans were obtained using the NewTom 3G (QR srl, Verona, Italy) and Cranex 3D (Soredex, Helsinki, Finland) CBCT systems in high- and low-resolution modes, and were evaluated by 2 observers. The chi-square test was used to assess the nominal variables. In strip perforations, the accuracies of low- and high-resolution modes were 75% and 83% for NewTom 3G and 67% and 69% for Cranex 3D. In root perforations, the accuracies of low- and high-resolution modes were 79% and 83% for NewTom 3G and 56% and 73% for Cranex 3D. The accuracy of the 2 CBCT systems was different for the detection of strip and root perforations. The NewTom 3G had non-significantly higher accuracy than the Cranex 3D. In both scanners, the high-resolution mode yielded significantly higher accuracy than the low-resolution mode. The diagnostic accuracy of CBCT scans was not affected by the perforation diameter.
Investigating the Accuracy of Teachers' Word Frequency Intuitions
ERIC Educational Resources Information Center
McCrostie, James
2007-01-01
Previous research has found that native English speakers can judge, with a relatively high degree of accuracy, the frequency of words in the English language. However, there has been little investigation of the ability to judge the frequency of high and middle frequency words. Similarly, the accuracy of EFL teachers' frequency judgements remains…
Estimation of post-test probabilities by residents: Bayesian reasoning versus heuristics?
Hall, Stacey; Phang, Sen Han; Schaefer, Jeffrey P; Ghali, William; Wright, Bruce; McLaughlin, Kevin
2014-08-01
Although the process of diagnosing invariably begins with a heuristic, we encourage our learners to support their diagnoses by analytical cognitive processes, such as Bayesian reasoning, in an attempt to mitigate the effects of heuristics on diagnosing. There are, however, limited data on the use, and impact, of Bayesian reasoning on the accuracy of disease probability estimates. In this study our objective was to explore whether Internal Medicine residents use a Bayesian process to estimate disease probabilities by comparing their disease probability estimates to literature-derived Bayesian post-test probabilities. We gave 35 Internal Medicine residents four clinical vignettes in the form of a referral letter and asked them to estimate the post-test probability of the target condition in each case. We then compared these to literature-derived probabilities. For each vignette the estimated probability was significantly different from the literature-derived probability. For the two cases with low literature-derived probability our participants significantly overestimated the probability of these target conditions being the correct diagnosis, whereas for the two cases with high literature-derived probability the estimated probability was significantly lower than the calculated value. Our results suggest that residents generate inaccurate post-test probability estimates. Possible explanations for this include ineffective application of Bayesian reasoning, attribute substitution whereby a complex cognitive task is replaced by an easier one (e.g., a heuristic), or systematic rater bias, such as central tendency bias. Further studies are needed to identify the reasons for inaccuracy of disease probability estimates and to explore ways of improving accuracy.
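The literature-derived comparison rests on Bayes' rule in odds form: post-test odds = pre-test odds × likelihood ratio. A worked sketch of the arithmetic (the numbers are illustrative, not from the study's vignettes):

```python
def post_test_probability(pretest_p, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    prior_odds = pretest_p / (1.0 - pretest_p)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# e.g. a 20% pre-test probability and a positive test with LR+ = 4:
# odds 0.25 * 4 = 1.0, i.e. a 50% post-test probability
```

The asymmetry the study observed, overestimation at low probabilities and underestimation at high ones, is exactly what this calculation makes visible: an LR must be large before a low prior yields a high posterior.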
Crew-Aided Autonomous Navigation
NASA Technical Reports Server (NTRS)
Holt, Greg N.
2015-01-01
A sextant provides manual capability to perform star/planet-limb sightings and offers a cheap, simple, robust backup navigation source for exploration missions independent from the ground. Sextant sightings from spacecraft were first exercised in Gemini and flew as the lost-communication backup for all Apollo missions. This study characterized error sources of navigation-grade sextants for feasibility of taking star and planetary limb sightings from inside a spacecraft. A series of similar studies was performed in the early/mid-1960s in preparation for Apollo missions. This study modernized and updated those findings in addition to showing feasibility using Linear Covariance analysis techniques. The human eyeball is a remarkable piece of optical equipment and provides many advantages over camera-based systems, including dynamic range and detail resolution. This technique utilizes those advantages and provides important autonomy to the crew in the event of lost communication with the ground. It can also provide confidence and verification of low-TRL automated onboard systems. The technique is extremely flexible and is not dependent on any particular vehicle type. The investigation involved procuring navigation-grade sextants and characterizing their performance under a variety of conditions encountered in exploration missions. The JSC optical sensor lab and Orion mockup were the primary testing locations. For the accuracy assessment, a group of test subjects took sextant readings on calibrated targets while instrument/operator precision was measured. The study demonstrated repeatability of star/planet-limb sightings with bias and standard deviation around 10 arcseconds, then used high-fidelity simulations to verify those accuracy levels met the needs for targeting mid-course maneuvers in preparation for Earth reentry.
Accommodation in Astigmatic Children During Visual Task Performance
Harvey, Erin M.; Miller, Joseph M.; Apple, Howard P.; Parashar, Pavan; Twelker, J. Daniel; Crescioni, Mabel; Davis, Amy L.; Leonard-Green, Tina K.; Campus, Irene; Sherrill, Duane L.
2014-01-01
Purpose. To determine the accuracy and stability of accommodation in uncorrected children during visual task performance. Methods. Subjects were second- to seventh-grade children from a highly astigmatic population. Measurements of noncycloplegic right eye spherical equivalent (Mnc) were obtained while uncorrected subjects performed three visual tasks at near (40 cm) and distance (2 m). Tasks included reading sentences with stimulus letter size near acuity threshold and an age-appropriate letter size (high task demands) and viewing a video (low task demand). Repeated measures ANOVA assessed the influence of astigmatism, task demand, and accommodative demand on accuracy (mean Mnc) and variability (mean SD of Mnc) of accommodation. Results. For near and distance analyses, respectively, sample size was 321 and 247, mean age was 10.37 (SD 1.77) and 10.30 (SD 1.74) years, mean cycloplegic M was 0.48 (SD 1.10) and 0.79 diopters (D) (SD 1.00), and mean astigmatism was 0.99 (SD 1.15) and 0.75 D (SD 0.96). Poor accommodative accuracy was associated with high astigmatism, low task demand (video viewing), and high accommodative demand. The negative effect of accommodative demand on accuracy increased with increasing astigmatism, with the poorest accommodative accuracy observed in high astigmats (≥3.00 D) with high accommodative demand/high hyperopia (1.53 D and 2.05 D of underaccommodation for near and distant stimuli, respectively). Accommodative variability was greatest in high astigmats and was uniformly high across task condition. No/low and moderate astigmats showed higher variability for the video task than the reading tasks. Conclusions. Accuracy of accommodation is reduced in uncorrected children with high astigmatism and high accommodative demand/high hyperopia, but improves with increased visual task demand (reading). High astigmats showed the greatest variability in accommodation. PMID:25103265
NASA Astrophysics Data System (ADS)
Koontz, Craig
Breast cancer is the most prevalent cancer for women with more than 225,000 new cases diagnosed in the United States in 2012 (ACS, 2012). With this high prevalence comes an increased emphasis on researching new techniques to treat this disease. Accelerated partial breast irradiation (APBI) has been used as an alternative to whole breast irradiation (WBI) in order to treat occult disease after lumpectomy. Similar recurrence rates have been found using APBI after lumpectomy as with mastectomy alone, but with the added benefit of improved cosmetic and psychological results. Intracavitary brachytherapy devices have been used to deliver the APBI prescription. However, inability to produce asymmetric dose distributions in order to avoid overdosing skin and chest wall has been an issue with these devices. Multi-lumen devices were introduced to overcome this problem. Of these, the Strut-Adjusted Volume Implant (SAVI) has demonstrated the greatest ability to produce an asymmetric dose distribution, which would have greater ability to avoid skin and chest wall dose, and thus allow more women to receive this type of treatment. However, SAVI treatments come with inherent heterogeneities, including variable backscatter due to the proximity to the tissue-air and tissue-lung interfaces and variable contents within the cavity created by the SAVI. The dose calculation protocol based on TG-43 does not account for heterogeneities and thus will not produce accurate dosimetry; however Acuros, a model-based dose calculation algorithm manufactured by Varian Medical Systems, claims to accurately account for heterogeneities. Monte Carlo simulation can calculate the dosimetry with high accuracy. In this thesis, a model of the SAVI will be created for Monte Carlo, specifically using MCNP code, in order to explore the effects of heterogeneities on the dose distribution. These data will be compared to TG-43- and Acuros-calculated dosimetry to explore their accuracy.
NASA Astrophysics Data System (ADS)
Jokar Arsanjani, Jamal; Vaz, Eric
2015-03-01
Until recently, land surveys and digital interpretation of remotely sensed imagery have been used to generate land use inventories. These techniques, however, are often cumbersome and costly, demanding large technical and temporal investments. The technological advances of web 2.0 have brought a wide array of technological achievements, stimulating the participatory role in collaborative and crowd sourced mapping products. This has been fostered by GPS-enabled devices and accessible tools that enable visual interpretation of high resolution satellite images/air photos provided in collaborative mapping projects. Such technologies offer an integrative approach to geography by means of promoting public participation and allowing accurate assessment and classification of land use as well as geographical features. OpenStreetMap (OSM) has supported the evolution of such techniques, contributing to the existence of a large inventory of spatial land use information. This paper explores the introduction of this novel participatory phenomenon for land use classification in Europe's metropolitan regions. We adopt a positivistic approach to assess comparatively the accuracy of these contributions of OSM for land use classifications in seven large European metropolitan regions. Thematic accuracy and degree of completeness of OSM data were compared to available Global Monitoring for Environment and Security Urban Atlas (GMESUA) datasets for the chosen metropolises. We further extend our findings of land use within a novel framework for geography, justifying that volunteered geographic information (VGI) sources are of great benefit for land use mapping depending on location and degree of VGI dynamism, and offer a great alternative to traditional mapping techniques for metropolitan regions throughout Europe.
Evaluation of several land use types at the local level suggests that a number of OSM classes (such as anthropogenic land use, agricultural and some natural environment classes) are viable alternatives for land use classification. These classes are highly accurate and can be integrated into planning decisions for stakeholders and policymakers.
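Thematic accuracy and completeness of a crowd-sourced layer against a reference dataset are conventionally summarized from a confusion matrix of paired class labels. A minimal sketch (the function and class labels are illustrative, not the paper's actual protocol):

```python
import numpy as np

def thematic_accuracy(pred, ref, classes):
    """Overall accuracy and per-class completeness (producer's accuracy)
    from paired class labels (e.g. OSM polygons vs Urban Atlas reference)."""
    pred, ref = np.asarray(pred), np.asarray(ref)
    overall = np.mean(pred == ref)
    completeness = {c: np.mean(pred[ref == c] == c) for c in classes}
    return overall, completeness
```

Per-class completeness is what separates the abstract's conclusion, that some OSM classes (anthropogenic, agricultural, some natural-environment classes) are usable while others are not, from a single overall figure.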
Disclosure of Individual Surgeon's Performance Rates During Informed Consent
Burger, Ingrid; Schill, Kathryn; Goodman, Steven
2007-01-01
Objective: The purpose of the paper is to examine the ethical arguments for and against disclosing surgeon-specific performance rates to patients during informed consent, and to examine the challenges that generating and using performance rates entail. Methods: Ethical, legal, and statistical theory is explored to approach the question of whether, when, and how surgeons should disclose their personal performance rates to patients. The main ethical question addressed is what type of information surgeons owe their patients during informed consent. This question comprises 3 related, ethically relevant considerations that are explored in detail: 1) Does surgeon-specific performance information enhance patient decision-making? 2) Do patients want this type of information? 3) How do the potential benefits of disclosure balance against the risks? Results: Calculating individual performance measures requires tradeoffs and involves inherent uncertainty. There is a lack of evidence regarding whether patients want this information, whether it facilitates their decision-making for surgery, and how it is best communicated to them. Disclosure of personal performance rates during informed consent has the potential benefits of enhancing patient autonomy, improving patient decision-making, and improving quality of care. The major risks of disclosure include inaccurate and misleading performance rates, avoidance of high-risk cases, unjust damage to surgeons' reputations, and jeopardized patient trust. Conclusion: At this time, we think that, for most conditions, surgical procedures, and outcomes, the accuracy of surgeon- and patient-specific performance rates is illusory, obviating the ethical obligation to communicate them as part of the informed consent process. Nonetheless, the surgical profession has the duty to develop information systems that allow for performance to be evaluated to a high degree of accuracy.
In the meantime, patients should be informed of the quantity of procedures their surgeons have performed, providing an idea of the surgeon's experience and qualitative idea of potential risk. PMID:17414595
Sutton, Jennifer E.; Buset, Melanie; Keller, Mikayla
2014-01-01
A number of careers involve tasks that place demands on spatial cognition, but it is still unclear how and whether skills acquired in such applied experiences transfer to other spatial tasks. The current study investigated the association between pilot training and the ability to form a mental survey representation, or cognitive map, of a novel, ground-based, virtual environment. Undergraduate students who were engaged in general aviation pilot training and controls matched to the pilots on gender and video game usage freely explored a virtual town. Subsequently, participants performed a direction estimation task that tested the accuracy of their cognitive map representation of the town. In addition, participants completed the Object Perspective Test and rated their spatial abilities. Pilots were significantly more accurate than controls at estimating directions but did not differ from controls on the Object Perspective Test. Locations in the town were visited at a similar rate by the two groups, indicating that controls' relatively lower accuracy was not due to failure to fully explore the town. Pilots' superior performance is likely due to better online cognitive processing during exploration, suggesting the spatial updating they engage in during flight transfers to a non-aviation context. PMID:24603608
Prediction of Multiple-Trait and Multiple-Environment Genomic Data Using Recommender Systems.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José C; Mota-Sanchez, David; Estrada-González, Fermín; Gillberg, Jussi; Singh, Ravi; Mondal, Suchismita; Juliana, Philomin
2018-01-04
In genomic-enabled prediction, the task of improving the accuracy of the prediction of lines in environments is difficult because the available information is generally sparse and usually has low correlations between traits. In current genomic selection, although researchers have a large amount of information and appropriate statistical models to process it, there is still limited computing efficiency to do so. Although some statistical models are usually mathematically elegant, many of them are also computationally inefficient, and they are impractical for many traits, lines, environments, and years because they need to sample from huge normal multivariate distributions. For these reasons, this study explores two recommender systems: item-based collaborative filtering (IBCF) and the matrix factorization algorithm (MF) in the context of multiple traits and multiple environments. The IBCF and MF methods were compared with two conventional methods on simulated and real data. Results of the simulated and real data sets show that the IBCF technique was slightly better in terms of prediction accuracy than the two conventional methods and the MF method when the correlation was moderately high. The IBCF technique is very attractive because it produces good predictions when there is high correlation between items (environment-trait combinations) and its implementation is computationally feasible, which can be useful for plant breeders who deal with very large data sets. Copyright © 2018 Montesinos-Lopez et al.
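Item-based collaborative filtering of the kind described above predicts a missing cell as a similarity-weighted average of the line's observed scores in correlated environment–trait combinations ("items"). A minimal illustrative sketch using cosine similarity (names and details are ours, not the authors' exact IBCF implementation):

```python
import numpy as np

def ibcf_predict(R, line, item):
    """Predict the missing cell R[line, item] as a cosine-similarity-weighted
    average of the line's observed scores on the other items (NaN = missing)."""
    num = den = 0.0
    target = R[:, item]
    for j in range(R.shape[1]):
        if j == item or np.isnan(R[line, j]):
            continue
        # similarity computed over lines where both items are observed
        mask = ~np.isnan(target) & ~np.isnan(R[:, j])
        if not mask.any():
            continue
        a, b = target[mask], R[mask, j]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        num += sim * R[line, j]
        den += abs(sim)
    return num / den if den else np.nan
```

The abstract's observation that IBCF works best when correlation between items is high is visible here: the prediction is only as informative as the similarity weights that pool information across environment–trait combinations.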
NASA Astrophysics Data System (ADS)
Song, Young-Joo; Kim, Bang-Yeop
2015-09-01
In this work, an efficient method with which to evaluate the high-degree-and-order gravitational harmonics of the nonsphericity of a central body is described and applied to state predictions of a lunar orbiter. Unlike the work of Song et al. (2010), which used a conventional computation method to process gravitational harmonic coefficients, the current work adapted a well-known recursion formula that directly uses fully normalized associated Legendre functions to compute the acceleration due to the non-sphericity of the moon. With the formulated algorithms, the states of a lunar orbiting satellite are predicted and its performance is validated in comparisons with solutions obtained from STK/Astrogator. The predicted differences in the orbital states between STK/Astrogator and the current work all remain below 1 m in position and below 1 mm/s in velocity, even with different orbital inclinations. The effectiveness of the current algorithm, in terms of both the computation time and the degree of accuracy degradation, is also shown in comparisons with results obtained from earlier work. It is expected that the proposed algorithm can be used as a foundation for the development of an operational flight dynamics subsystem for future lunar exploration missions by Korea. It can also be used to analyze missions which require very close operations to the moon.
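The "well-known recursion formula" for fully normalized associated Legendre functions usually refers to the standard diagonal plus forward-column recursions (Holmes-Featherstone-style formulations). A sketch of that building block, not the paper's exact algorithm, follows; the geodesy normalization convention is assumed:

```python
import numpy as np

def fully_normalized_alf(nmax, x):
    """Fully normalized associated Legendre functions P[n, m] at argument x
    (geodesy normalization), via diagonal and forward-column recursions."""
    u = np.sqrt(1.0 - x * x)            # cos(latitude) if x = sin(latitude)
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    for m in range(1, nmax + 1):        # diagonal terms P[m, m]
        f = 3.0 if m == 1 else (2 * m + 1) / (2 * m)
        P[m, m] = u * np.sqrt(f) * P[m - 1, m - 1]
    for m in range(nmax):               # first off-diagonal P[m+1, m]
        P[m + 1, m] = x * np.sqrt(2 * m + 3) * P[m, m]
    for m in range(nmax + 1):           # forward-column recursion in n
        for n in range(m + 2, nmax + 1):
            a = np.sqrt((2 * n + 1) * (2 * n - 1) / ((n - m) * (n + m)))
            b = np.sqrt((2 * n + 1) * (n + m - 1) * (n - m - 1)
                        / ((2 * n - 3) * (n + m) * (n - m)))
            P[n, m] = a * x * P[n - 1, m] - b * P[n - 2, m]
    return P
```

Because the recursion works directly with the normalized functions, it stays numerically stable to high degree and order, which is what makes it attractive for high-degree lunar gravity field evaluation.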
Superconducting Meissner effect bearings for cryogenic turbomachines, phase 2
NASA Astrophysics Data System (ADS)
Valenzuela, Javier A.; Martin, Jerry L.
1994-02-01
This is the final report of a Phase 2 SBIR project to develop Meissner effect bearings for miniature cryogenic turbomachines. The bearing system was designed for use in miniature cryogenic turboexpanders in reverse-Brayton-cycle cryocoolers. The cryocoolers are designed to cool sensors on satellites. Existing gas bearings for this application run in a relatively warm state. The heat loss from the bearings into the shaft and into the cold process gas imposes a penalty on the cycle efficiency. By using cold Meissner effect bearings, this heat loss could be minimized, and the input power per unit of cooling for these cryocoolers could be reduced. Two bearing concepts were explored in this project. The first used an all-magnetic passive radial suspension to position the shaft over a range of temperatures from room temperature to 77 K. This bearing concept was proven to be feasible, but impractical for the miniature high-speed turbine application since it lacked the required shaft positioning accuracy. A second bearing concept was then developed. In this concept, the Meissner effect bearings are combined with self-acting gas bearings. The Meissner effect bearing provides the additional stiffness and damping required to stabilize the shaft at low temperature, while the gas bearing provides the necessary accuracy to allow very small turbine tip clearances (5mm) and high speeds (greater than 500,000 rpm).
Prediction of Multiple-Trait and Multiple-Environment Genomic Data Using Recommender Systems
Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José C.; Mota-Sanchez, David; Estrada-González, Fermín; Gillberg, Jussi; Singh, Ravi; Mondal, Suchismita; Juliana, Philomin
2018-01-01
In genomic-enabled prediction, the task of improving the accuracy of the prediction of lines in environments is difficult because the available information is generally sparse and usually has low correlations between traits. In current genomic selection, although researchers have a large amount of information and appropriate statistical models to process it, there is still limited computing efficiency to do so. Although some statistical models are usually mathematically elegant, many of them are also computationally inefficient, and they are impractical for many traits, lines, environments, and years because they need to sample from huge normal multivariate distributions. For these reasons, this study explores two recommender systems: item-based collaborative filtering (IBCF) and the matrix factorization algorithm (MF) in the context of multiple traits and multiple environments. The IBCF and MF methods were compared with two conventional methods on simulated and real data. Results of the simulated and real data sets show that the IBCF technique was slightly better in terms of prediction accuracy than the two conventional methods and the MF method when the correlation was moderately high. The IBCF technique is very attractive because it produces good predictions when there is high correlation between items (environment–trait combinations) and its implementation is computationally feasible, which can be useful for plant breeders who deal with very large data sets. PMID:29097376
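As an illustration of the IBCF idea described in the abstract above (cosine similarity between item columns, i.e. environment-trait combinations, followed by a similarity-weighted average of a line's observed values), a minimal sketch with a hypothetical toy matrix:

```python
import numpy as np

def ibcf_predict(R, line, item, k=2):
    """Item-based collaborative filtering sketch: predict the missing cell
    R[line, item] from the k most similar items (environment-trait columns).
    R is a lines x items matrix with np.nan marking unobserved cells."""
    sims = []
    for j in range(R.shape[1]):
        if j == item or np.isnan(R[line, j]):
            continue
        # cosine similarity over the lines where both columns are observed
        mask = ~np.isnan(R[:, j]) & ~np.isnan(R[:, item])
        if mask.sum() < 2:
            continue
        a, b = R[mask, j], R[mask, item]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0:
            sims.append((float(a @ b) / denom, j))
    top = sorted(sims, reverse=True)[:k]
    num = sum(s * R[line, j] for s, j in top)
    den = sum(abs(s) for s, _ in top)
    return num / den if den else float("nan")
```

This captures why IBCF works best when correlations between items are high: the prediction is only as good as the similarity-weighted neighbors it borrows from.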
Evaluating the effectiveness of low cost UAV generated topography for geomorphic change detection
NASA Astrophysics Data System (ADS)
Cook, K. L.
2014-12-01
With the recent explosion in the use and availability of unmanned aerial vehicle platforms and development of easy to use structure from motion software, UAV based photogrammetry is increasingly being adopted to produce high resolution topography for the study of surface processes. UAV systems can vary substantially in price and complexity, but the tradeoffs between these and the quality of the resulting data are not well constrained. We look at one end of this spectrum and evaluate the effectiveness of a simple low cost UAV setup for obtaining high resolution topography in a challenging field setting. Our study site is the Daan River gorge in western Taiwan, a rapidly eroding bedrock gorge that we have monitored with terrestrial Lidar since 2009. The site presents challenges for the generation and analysis of high resolution topography, including vertical gorge walls, vegetation, wide variation in surface roughness, and a complicated 3D morphology. In order to evaluate the accuracy of the UAV-derived topography, we compare it with terrestrial Lidar data collected during the same survey period. Our UAV setup combines a DJI Phantom 2 quadcopter with a 16 megapixel Canon Powershot camera for a total platform cost of less than $850. The quadcopter is flown manually, and the camera is programmed to take a photograph every 5 seconds, yielding 200-250 pictures per flight. We measured ground control points and targets for both the Lidar scans and the aerial surveys using a Leica RTK GPS with 1-2 cm accuracy. UAV derived point clouds were obtained using Agisoft Photoscan software. We conducted both Lidar and UAV surveys before and after a summer typhoon season, allowing us to evaluate the reliability of the UAV survey to detect geomorphic changes in the range of one to several meters. We find that this simple UAV setup can yield point clouds with an average accuracy on the order of 10 cm compared to the Lidar point clouds. 
Well-distributed and accurately located ground control points are critical, but we achieve good accuracy even with relatively few ground control points (25) over a 150,000 sq m area. The large number of photographs taken during each flight also allows us to explore the reproducibility of the UAV-derived topography by generating point clouds from different subsets of photographs taken of the same area during a single survey.
External validation of urinary PCA3-based nomograms to individually predict prostate biopsy outcome.
Auprich, Marco; Haese, Alexander; Walz, Jochen; Pummer, Karl; de la Taille, Alexandre; Graefen, Markus; de Reijke, Theo; Fisch, Margit; Kil, Paul; Gontero, Paolo; Irani, Jacques; Chun, Felix K-H
2010-11-01
Prior to safely adopting risk stratification tools, their performance must be tested in an external patient cohort. To assess accuracy and generalizability of previously reported, internally validated, prebiopsy prostate cancer antigen 3 (PCA3) gene-based nomograms when applied to a large, external, European cohort of men at risk of prostate cancer (PCa). Biopsy data, including urinary PCA3 score, were available for 621 men at risk of PCa who were participating in a European multi-institutional study. All patients underwent a ≥10-core prostate biopsy. Biopsy indication was based on suspicious digital rectal examination, persistently elevated prostate-specific antigen level (2.5-10 ng/ml) and/or suspicious histology (atypical small acinar proliferation of the prostate, ≥2 cores affected by high-grade prostatic intraepithelial neoplasia in first set of biopsies). PCA3 scores were assessed using the Progensa assay (Gen-Probe Inc, San Diego, CA, USA). According to the previously reported nomograms, different PCA3 score codings were used. The probability of a positive biopsy was calculated using previously published logistic regression coefficients. Predicted outcomes were compared to the actual biopsy results. Accuracy was calculated using the area under the curve as a measure of discrimination; calibration was explored graphically. Biopsy-confirmed PCa was detected in 255 (41.1%) men. Median PCA3 score of biopsy-negative versus biopsy-positive men was 20 versus 48 in the total cohort, 17 versus 47 at initial biopsy, and 37 versus 53 at repeat biopsy (all p≤0.002). External validation of all four previously reported PCA3-based nomograms demonstrated equally high accuracy (0.73-0.75) and excellent calibration. The main limitations of the study reside in its early detection setting, referral scenario, and participation of only tertiary-care centers.
In accordance with the original publication, previously developed PCA3-based nomograms achieved high accuracy and sufficient calibration. These novel nomograms represent robust tools and are thus generalizable to European men at risk of harboring PCa. Consequently, in presence of a PCA3 score, these nomograms may be safely used to assist clinicians when prostate biopsy is contemplated. Copyright © 2010 European Association of Urology. Published by Elsevier B.V. All rights reserved.
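The prediction and validation steps described above (published logistic regression coefficients mapped to a probability of a positive biopsy, then discrimination assessed via the area under the ROC curve) can be sketched as follows; the coefficient values in any usage are placeholders, not the published nomogram coefficients:

```python
import math

def nomogram_probability(coefs, intercept, x):
    """Predicted probability of a positive biopsy from logistic regression
    coefficients (placeholder values; the real coefficients are in the
    original nomogram publications)."""
    z = intercept + sum(c * v for c, v in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

def auc(probs, outcomes):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case receives a higher predicted probability than a randomly
    chosen negative case (ties count half)."""
    pos = [p for p, y in zip(probs, outcomes) if y == 1]
    neg = [p for p, y in zip(probs, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.73-0.75, as reported here, means a positive-biopsy patient outranks a negative-biopsy patient roughly three times out of four.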
Exploring cognitive integration of basic science and its effect on diagnostic reasoning in novices.
Lisk, Kristina; Agur, Anne M R; Woods, Nicole N
2016-06-01
Integration of basic and clinical science knowledge is increasingly being recognized as important for practice in the health professions. The concept of 'cognitive integration' places emphasis on the value of basic science in providing critical connections to clinical signs and symptoms while accounting for the fact that clinicians may not spontaneously articulate their use of basic science knowledge in clinical reasoning. In this study we used a diagnostic justification test to explore the impact of integrated basic science instruction on novices' diagnostic reasoning process. Participants were allocated to an integrated basic science or clinical science training group. The integrated basic science group was taught the clinical features along with the underlying causal mechanisms of four musculoskeletal pathologies while the clinical science group was taught only the clinical features. Participants completed a diagnostic accuracy test immediately after initial learning, and one week later a diagnostic accuracy and justification test. The results showed that novices who learned the integrated causal mechanisms had superior diagnostic accuracy and better understanding of the relative importance of key clinical features. These findings further our understanding of cognitive integration by providing evidence of the specific changes in clinical reasoning when basic and clinical sciences are integrated during learning.
Flux Renormalization in Constant Power Burnup Calculations
Isotalo, Aarno E.; Aalto Univ., Otaniemi; Davidson, Gregory G.; ...
2016-06-15
To more accurately represent the desired power in a constant power burnup calculation, the depletion steps of the calculation can be divided into substeps and the neutron flux renormalized on each substep to match the desired power. This paper explores how such renormalization should be performed, how large a difference it makes, and whether using renormalization affects results regarding the relative performance of different neutronics-depletion coupling schemes. When used with older coupling schemes, renormalization can provide a considerable improvement in overall accuracy. With previously published higher-order coupling schemes, which are more accurate to begin with, renormalization has a much smaller effect. Finally, while renormalization narrows the differences in the accuracies of different coupling schemes, their order of accuracy is not affected.
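A one-nuclide toy model illustrates the substep renormalization idea: within each substep the flux is rescaled so the instantaneous power matches the target, and increasing the number of substeps drives the result toward the exact constant-power solution. All symbols and values here are illustrative, not from the paper:

```python
import math

def deplete_step(n0, sigma, energy_per_fission, target_power, dt, substeps):
    """One-nuclide constant-power depletion sketch. On each substep the flux
    phi is renormalized so the instantaneous power sigma*phi*n*E matches the
    target; the nuclide density then depletes exponentially over the substep."""
    n = n0
    h = dt / substeps
    for _ in range(substeps):
        phi = target_power / (sigma * n * energy_per_fission)  # renormalize
        n *= math.exp(-sigma * phi * h)                        # deplete
    return n
```

With continuous renormalization the exact constant-power solution is linear, n(t) = n0 − (P/E)t, so a run with many substeps lands close to that line while a single-substep run retains the familiar exponential-depletion error.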
Balconi, M; Cobelli, C
2015-02-26
The present research explored the cortical correlates of emotional memories in response to words and pictures. Subjects' performance (accuracy index, AI; response times, RTs; RTs/AI) was considered when repetitive transcranial magnetic stimulation (rTMS) was applied over the left dorsolateral prefrontal cortex (LDLPFC). Specifically, the role of the LDLPFC was tested with a memory task in which old (previously encoded targets) and new (previously not encoded distractors) emotional pictures/words had to be recognized. The valence (positive vs. negative) and arousing power (high vs. low) of the stimuli were also modulated. Moreover, subjective evaluation of the emotional stimuli in terms of valence/arousal was explored. We found a significant performance improvement (higher AI, reduced RTs, improved general performance) in response to rTMS. This "better recognition effect" was related only to specific emotional features, namely positive high-arousal pictures or words. Moreover, no significant differences were found between stimulus categories. A direct relationship was also observed between the subjective evaluation of emotional cues and memory performance when rTMS was applied to the LDLPFC. Supported by the valence and approach model of emotions, we suggest that a left-lateralized prefrontal system may induce better recognition of positive high-arousal words, and that the evaluation of emotional cues is related to prefrontal activation, affecting recognition memory for emotions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rietjens, J.; Smit, M.; Hasekamp, O. P.; Grim, M.; Eggens, M.; Eigenraam, A.; Keizer, G.; van Loon, D.; Talsma, J.; van der Vlugt, J.; Wolfs, R.; van Harten, G.; Rheingans, B. E.; Snik, F.; Keller, C. U.; Smit, H.
2016-12-01
A multi-angle spectropolarimeter payload, "SPEX-airborne" has been developed for observing and characterizing aerosols from NASA's high-altitude research aircraft ER-2. SPEX-airborne provides autonomously multi-angle snapshot measurements of spectral radiance and degree of linear polarization over a 7 degree swath in the visible part of the optical spectrum. The instrument is unique in the sense that it combines 30 highly accurate polarimetric measurements with hyperspectral radiance measurements at 2.5 nm resolution simultaneously at nine fixed viewing angles and that it offers the possibility to include polarimetric measurements in absorption bands at lower accuracy. This combination of measurements holds great potential for present and new retrieval algorithms to derive aerosol microphysical properties during airborne campaigns. The opto-mechanical subsystem of SPEX-airborne is based on the Spectropolarimeter for Planetary EXploration (SPEX) prototype, which has been developed over recent years by a consortium of Dutch institutes and industry. The polarimetry technique used is spectral polarization modulation, which has been proven to enable high accuracy polarimetric measurements. In laboratory conditions, the SPEX prototype has a demonstrated polarimetric accuracy of 0.002 in the degree of linear polarization. The SPEX prototype has been made fit for autonomous operation on NASA's ER-2 high altitude platform. In this presentation we will present the design and main subsystems of the payload, and address the operational modes. An outline of the data processing chain including calibration data will be given and the foreseen capability and performance will be discussed. We will discuss the quality of the polarimetric measurement in the lab and as recorded during the maiden flight in 2016 when SPEX-airborne was flying together with JPL's AirMSPI imaging polarimeter. 
Finally, we will give an outlook on the processing of the data of land and ocean scenes, and on the possibilities for aerosol retrieval algorithms that the SPEX-airborne instrument offers, most notably the flexibility in number and center of the wavelength bands, and the incorporation of (polarimetric) O2A-band measurements.
Jiang, Zhi-Bo; Ren, Wei-Cong; Shi, Yuan-Yuan; Li, Xing-Xing; Lei, Xuan; Fan, Jia-Hui; Zhang, Cong; Gu, Ren-Jie; Wang, Li-Fei; Xie, Yun-Ying; Hong, Bin
2018-05-18
Sansanmycins (SS), one of several known uridyl peptide antibiotics (UPAs) possessing a unique chemical scaffold, showed a good inhibitory effect on the highly refractory pathogens Pseudomonas aeruginosa and Mycobacterium tuberculosis, especially on multi-drug resistant M. tuberculosis. This study employed high performance liquid chromatography coupled to ion trap (HPLC-MSD) and LTQ Orbitrap tandem mass spectrometry (MS/MS) to explore sansanmycin analogues manually and automatically by re-analysis of the Streptomyces sp. SS fermentation broth. The structure-based manual screening method, based on analysis of the fragmentation pathways of known UPAs and on comparisons of the MS/MS spectra with that of sansanmycin A (SS-A), resulted in the identification of twenty sansanmycin analogues, including twelve new structures (1-12). Furthermore, to explore sansanmycin analogues more deeply, we utilized a GNPS-based molecular networking workflow to re-analyze the HPLC-MS/MS data automatically. As a result, eight more new sansanmycins (13-20) were discovered. Compound 1 was found to lack two amino acids, residue 1 (AA1) and (2S,3S)-N3-methyl-2,3-diaminobutyric acid (DABA), from the N-terminus, and compounds 6, 11 and 12 were found to contain a 2',3'-dehydrated 4',5'-enamine-3'-deoxyuridyl moiety, which has not been reported before. Interestingly, three trace components with a novel 5,6-dihydro-5'-aminouridyl group (16-18) were detected for the first time in the sansanmycin-producing strain. Their structures were primarily determined by detailed analysis of the MS/MS data. Compounds 8 and 10 were further confirmed by nuclear magnetic resonance (NMR) data, which proved the efficiency and accuracy of the HPLC-MS/MS method for the exploration of novel UPAs. Compared with manual screening, the networking method provides systematic, visualizable results. The manual screening and networking methods may complement each other to facilitate the mining of novel UPAs.
Copyright © 2018 Elsevier B.V. All rights reserved.
Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations
NASA Astrophysics Data System (ADS)
Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong
2017-08-01
Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observation probability differs depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistics that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation, in conjunction with a quantitative evaluation that uses fracture simulation and verification against natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy was identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that a 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5 per square degree, maximum accuracy occurs at a grid size of 1° × 1°.
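The Terzaghi weighting underlying the estimator can be sketched as follows: the sampling probability along a scanline is proportional to |cos δ|, where δ is the angle between the scanline and the fracture pole, so each observation is weighted by 1/|cos δ|, with a cap applied near the blind zone. The angle conventions and cap value below are assumptions for illustration:

```python
import math

def terzaghi_weight(pole_trend, pole_plunge, line_trend, line_plunge,
                    max_weight=10.0):
    """Terzaghi weight for one fracture observed along a scanline.
    Trends/plunges are in degrees; the weight is 1/|cos(delta)|, capped at
    max_weight so poles near the blind zone do not dominate the statistics."""
    def unit(trend, plunge):
        # unit vector (east, north, down-negative) from trend/plunge
        t, p = math.radians(trend), math.radians(plunge)
        return (math.cos(p) * math.sin(t), math.cos(p) * math.cos(t),
                -math.sin(p))
    n = unit(pole_trend, pole_plunge)
    l = unit(line_trend, line_plunge)
    cos_delta = abs(sum(a * b for a, b in zip(n, l)))
    return min(1.0 / cos_delta, max_weight) if cos_delta > 0 else max_weight
```

Binning these weights on an orientation grid is where the grid-size sensitivity discussed in the abstract enters: finer grids resolve the distribution better but leave fewer (weighted) counts per cell.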
NASA Astrophysics Data System (ADS)
Kirchengast, G.; Schweitzer, S.
2008-12-01
The ACCURATE (Atmospheric Climate and Chemistry in the UTLS Region And climate Trends Explorer) mission was conceived at the Wegener Center in late 2004 and subsequently proposed in 2005 by an international team of more than 20 scientific partners from more than 12 countries to an ESA selection process for next Earth Explorer Missions. While the mission was not selected for formal pre-phase A study, it received very positive evaluation and was recommended for further development and demonstration. ACCURATE employs the occultation measurement principle, known for its unique combination of high vertical resolution, accuracy and long-term stability, in a novel way. It systematically combines use of highly stable signals in the MW 17-23/178-196 GHz bands (LEO-LEO MW crosslink occultation) with laser signals in the SWIR 2-2.5 μm band (LEO-LEO IR laser crosslink occultation) for exploring and monitoring climate and chemistry in the atmosphere with focus on the UTLS region (upper troposphere/lower stratosphere, 5-35 km). The MW occultation is an advanced and at the same time compact version of the LEO-LEO MW occultation concept, studied in 2002-2004 for the ACE+ mission project of ESA for frequencies including the 17-23 GHz band, complemented by U.S. study heritage for frequencies including the 178-196 GHz bands (R. Kursinski et al., Univ. of Arizona, Tucson). The core of ACCURATE is tight synergy of the IR laser crosslinks with the MW crosslinks. The observed parameters, obtained simultaneously and in a self-calibrated manner based on Doppler shift and differential log-transmission profiles, comprise the fundamental thermodynamic variables of the atmosphere (temperature, pressure/geopotential height, humidity) retrieved from the MW bands, complemented by line-of-sight wind, six greenhouse gases (GHGs) and key species of UTLS chemistry (H2O, CO2, CH4, N2O, O3, CO) and four CO2 and H2O isotopes (HDO, H218O, 13CO2, C18OO) from the SWIR band. 
Furthermore, profiles of aerosol extinction, cloud layering, and turbulence are obtained. All profiles come with accurate height knowledge (< 10 m uncertainty), since measuring height as a function of time is intrinsic to the MW occultation part of ACCURATE. The presentation will introduce ACCURATE along the lines above, with emphasis on the climate science value and the new IR laser occultation capability. The focus will then be on retrieval performance analysis results obtained so far, in particular regarding the profiles of GHGs, isotopes, and wind. The results provide evidence that the GHG and isotope profiles can generally be retrieved within 5-35 km outside clouds with < 1% to 5% rms accuracy at 1-2 km vertical resolution, and wind with < 2 m/s accuracy. Monthly mean climatological profiles, assuming ~40 profiles per climatologic grid box per month, are found unbiased (free of time-varying biases) and at < 0.2% to 0.5% rms accuracy. These encouraging results are discussed in light of the potential of the ACCURATE technique to provide benchmark data for future monitoring of climate, GHGs, and chemistry variability and change. European science and demonstration activities are outlined, including international participation opportunities.
ERIC Educational Resources Information Center
Biesanz, Jeremy C.
2010-01-01
The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…
Factors affecting GEBV accuracy with single-step Bayesian models.
Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng
2018-01-01
A single-step approach to obtain genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more strongly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the 5- and 50-QTL scenarios. The SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.
Demonstration of a Fast, Precise Propane Measurement Using Infrared Spectroscopy
NASA Astrophysics Data System (ADS)
Zahniser, M. S.; Roscioli, J. R.; Nelson, D. D.; Herndon, S. C.
2016-12-01
Propane is one of the primary components of emissions from natural gas extraction and processing activities. In addition to being an air pollutant, its ratio to other hydrocarbons such as methane and ethane can serve as a "fingerprint" of a particular facility or process, aiding in identifying emission sources. Quantifying propane has typically required laboratory analysis of flask samples, resulting in low temporal resolution and making plume-based measurements infeasible. Here we demonstrate fast (1-second), high-precision (<300 ppt) measurements of propane using high-resolution mid-infrared spectroscopy near 2967 cm-1. In addition, we explore the impact of nearby water and ethane absorption lines on the accuracy and precision of the propane measurement. Finally, we discuss development of a dual-laser instrument capable of simultaneous measurements of methane, ethane, and propane (the C1-C3 compounds), all within a small spatial package that can be easily deployed aboard a mobile platform.
Falkmer, Marita; Black, Melissa; Tang, Julia; Fitzgerald, Patrick; Girdler, Sonya; Leung, Denise; Ordqvist, Anna; Tan, Tele; Jahan, Ishrat; Falkmer, Torbjorn
2016-01-01
While local bias in visual processing in children with autism spectrum disorders (ASD) has been reported to result in difficulties in recognizing faces and facially expressed emotions but superior ability in disembedding figures, associations between these abilities within a group of children with and without ASD have not been explored. Possible associations in performance on the Visual Perception Skills Figure-Ground test, a face recognition test, and an emotion recognition test were investigated in 25 children aged 8-12 years with high-functioning autism/Asperger syndrome, and in comparison to 33 typically developing children. Analyses indicated a weak positive correlation between accuracy in Figure-Ground recognition and emotion recognition. No other correlation estimates were significant. These findings challenge both the enhanced perceptual function hypothesis and the weak central coherence hypothesis, and accentuate the importance of further scrutinizing the existence and nature of local visual bias in ASD.
NASA Astrophysics Data System (ADS)
Ren, Changzhi; Li, Xiaoyan; Song, Xiaoli; Niu, Yong; Li, Aihua; Zhang, Zhenchao
2012-09-01
Direct drive technology is key to future 30 m and larger telescope motion systems, which must guarantee very high tracking accuracy in spite of unbalanced and sudden loads such as wind gusts, and in spite of a structure that, because of its size, cannot be infinitely stiff. However, this requires the design and realization of an unusually large torque motor whose torque slew rate must be extremely steep; a conventional torque motor design appears inadequate. This paper explores a redundant-unit permanent magnet synchronous motor and its simulation bed for a 30 m class telescope. Because the drive system is a highly integrated electromechanical system, a combined electromechanical design method is adopted to improve the efficiency, reliability, and quality of the system over the design and manufacturing cycle. This paper discusses the design and control of the precise tracking simulation bed in detail.
Scintillation gamma spectrometer for analysis of hydraulic fracturing waste products.
Ying, Leong; O'Connor, Frank; Stolz, John F
2015-01-01
Flowback and produced wastewaters from unconventional hydraulic fracturing during oil and gas exploration typically bring to the surface Naturally Occurring Radioactive Materials (NORM), predominantly radioisotopes from the U238 and Th232 decay chains. Traditionally, radiological sampling is performed by sending small collected samples for laboratory testing, either by radiochemical analysis or by measurement with a high-resolution high-purity germanium (HPGe) gamma spectrometer. One of the main isotopes of concern is Ra226, which requires an extended 21-day quantification period to allow full secular equilibrium to be established for alpha counting of its progeny daughter Rn222. Field trials of a sodium iodide (NaI) scintillation detector offer a more economic solution for rapid screening of radiological samples. To achieve quantification accuracy, this gamma spectrometer must be efficiency-calibrated with known standard sources prior to field deployment to analyze the radioactivity concentrations in hydraulic fracturing waste products.
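The efficiency-calibration arithmetic mentioned above reduces to two relations, sketched below; the numeric values used in any test are illustrative, not calibration data from this work:

```python
def peak_efficiency(net_counts, live_time_s, known_activity_bq, emission_prob):
    """Full-energy-peak efficiency from a standard source of known activity:
    eps = N / (t * A_known * p_gamma)."""
    return net_counts / (live_time_s * known_activity_bq * emission_prob)

def activity_bq(net_counts, live_time_s, efficiency, emission_prob):
    """Sample activity from a calibrated peak: A = N / (t * eps * p_gamma),
    where N is the net peak count and p_gamma the gamma emission probability."""
    return net_counts / (live_time_s * efficiency * emission_prob)
```

In practice the efficiency is energy-dependent, so a curve is fitted over several calibration peaks rather than a single value as in this sketch.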
The applications of deep neural networks to sdBV classification
NASA Astrophysics Data System (ADS)
Boudreaux, Thomas M.
2017-12-01
With several new large-scale surveys on the horizon, including LSST, TESS, ZTF, and Evryscope, faster and more accurate analysis methods will be required to adequately process the enormous amount of data produced. Deep learning, used in industry for years now, allows for advanced feature detection in minimally prepared datasets at very high speeds; however, despite the advantages of this method, its application to astrophysics has not yet been extensively explored. This dearth may be due to a lack of training data available to researchers. Here we generate synthetic data loosely mimicking the properties of acoustic-mode pulsating stars and show that two separate paradigms of deep learning - the artificial neural network and the convolutional neural network - can both be used to classify these synthetic data effectively. Additionally, this classification can be performed at relatively high levels of accuracy with minimal time spent adjusting network hyperparameters.
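A minimal version of the first paradigm on synthetic pulsator-like data (toy light curves, amplitude-spectrum features, one hidden layer trained by full-batch gradient descent; every parameter here is an arbitrary choice for illustration, not the paper's setup) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def light_curve(pulsator, n=128):
    """Toy light curve: a sinusoidal acoustic-mode signal plus unit noise."""
    t = np.arange(n)
    noise = rng.normal(0.0, 1.0, n)
    if not pulsator:
        return noise
    freq = rng.uniform(0.05, 0.4)                   # cycles per sample
    return 3.0 * np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi)) + noise

def features(lc):
    return np.abs(np.fft.rfft(lc))[1:] / len(lc)    # amplitude spectrum

X = np.array([features(light_curve(i % 2 == 0)) for i in range(200)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(200)])

# one hidden layer, full-batch gradient descent on cross-entropy loss
W1 = rng.normal(0.0, 0.1, (X.shape[1], 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.1, 8); b2 = 0.0
for _ in range(300):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    g = (p - y) / len(y)                            # d(loss)/d(logit)
    gh = np.outer(g, W2) * (1.0 - h ** 2)           # backprop through tanh
    W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum()
    W1 -= 0.5 * (X.T @ gh); b1 -= 0.5 * gh.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
train_acc = float(((p > 0.5) == (y == 1.0)).mean())
```

With a strong, well-separated signal like this toy set, even a tiny network separates the classes; the interesting regime explored in the abstract is low-amplitude pulsations near the noise floor.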
Vision Algorithm for the Solar Aspect System of the HEROES Mission
NASA Technical Reports Server (NTRS)
Cramer, Alexander
2014-01-01
This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun for the High Energy Replicated Optics to Explore the Sun (HEROES) mission. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small fiducial markers. Images of this plate were processed in real time to determine the relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, identification with an ad-hoc method based on the spacing between fiducials, and image registration with a simple least squares fit. Performance is verified on a combination of artificially generated images, test data recorded on the ground, and images from the 2013 flight.
Vision Algorithm for the Solar Aspect System of the HEROES Mission
NASA Technical Reports Server (NTRS)
Cramer, Alexander; Christe, Steven; Shih, Albert
2014-01-01
This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun for the High Energy Replicated Optics to Explore the Sun (HEROES) mission. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small fiducial markers. Images of this plate were processed in real time to determine relative position of the balloon payload to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an Average Intersection method, fiducial detection by a matched filter approach, identification with an ad-hoc method based on the spacing between fiducials, and image registration with a simple least squares fit. Performance is verified on a combination of artificially generated images, test data recorded on the ground, and images from the 2013 flight.
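The final registration step named above (a simple least squares fit of detected fiducial positions against the known plate pattern) can be sketched as a least-squares affine fit; the function name and the choice of an affine (rather than, say, similarity) model are assumptions for illustration:

```python
import numpy as np

def register_affine(detected, known):
    """Least-squares registration: solves detected ~= known @ A.T + t for the
    2x2 matrix A and translation t, given matched fiducial point pairs."""
    known = np.asarray(known, float)
    detected = np.asarray(detected, float)
    # design matrix with homogeneous coordinates [x, y, 1]
    G = np.hstack([known, np.ones((len(known), 1))])
    sol, *_ = np.linalg.lstsq(G, detected, rcond=None)
    A, t = sol[:2].T, sol[2]
    return A, t
```

With more fiducials than the three needed to determine an affine map, the least squares fit averages down individual fiducial-detection errors, which is what makes the pointing solution high-accuracy.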
Adverse effects in dual-feed interferometry
NASA Astrophysics Data System (ADS)
Colavita, M. Mark
2009-11-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.
Kenngott, Hannes Götz; Preukschas, Anas Amin; Wagner, Martin; Nickel, Felix; Müller, Michael; Bellemann, Nadine; Stock, Christian; Fangerau, Markus; Radeleff, Boris; Kauczor, Hans-Ulrich; Meinzer, Hans-Peter; Maier-Hein, Lena; Müller-Stich, Beat Peter
2018-06-01
Augmented reality (AR) systems are currently being explored by a broad spectrum of industries, mainly for improving point-of-care access to data and images. Especially in surgery, and especially for timely decisions in emergency cases, fast and comprehensive access to images at the patient bedside is mandatory. Currently, imaging data are accessed at a distance from the patient both in time and space, i.e., at a specific workstation. Mobile technology and 3-dimensional (3D) visualization of radiological imaging data promise to overcome these restrictions by making bedside AR feasible. In this project, AR was realized in a surgical setting by fusing a 3D representation of structures of interest with live camera images on a tablet computer using marker-based registration. The intent of this study was to focus on a thorough evaluation of AR. Feasibility, robustness, and accuracy were thus evaluated consecutively in a phantom model and a porcine model. Additionally, feasibility was evaluated in one male volunteer. In the phantom model (n = 10), AR visualization was feasible in 84% of the visualization space with high accuracy (mean reprojection error ± standard deviation (SD): 2.8 ± 2.7 mm; 95th percentile = 6.7 mm). In a porcine model (n = 5), AR visualization was feasible in 79% with high accuracy (mean reprojection error ± SD: 3.5 ± 3.0 mm; 95th percentile = 9.5 mm). Furthermore, AR was successfully used and proved feasible in a male volunteer. Mobile, real-time, point-of-care AR for clinical purposes proved feasible, robust, and accurate in the phantom, animal, and single-trial human model shown in this study. Consequently, AR following a similar implementation proved robust and accurate enough to be evaluated in clinical trials assessing accuracy and robustness in clinical reality, as well as integration into the clinical workflow. If these further studies prove successful, AR might revolutionize data access at the patient bedside.
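The accuracy figures quoted above (mean reprojection error ± SD and 95th percentile) follow directly once overlay and ground-truth marker positions are available. A sketch with synthetic positions (all coordinates and the 2 mm noise level are illustrative, not the study's data):

```python
import numpy as np

def reprojection_stats(projected, observed):
    """Mean, SD, and 95th percentile of per-marker reprojection
    error, the accuracy metrics reported in the study."""
    err = np.linalg.norm(np.asarray(projected) - np.asarray(observed), axis=1)
    return err.mean(), err.std(), np.percentile(err, 95)

# hypothetical marker positions: AR overlay vs. ground truth (mm)
rng = np.random.default_rng(0)
truth = rng.uniform(0, 100, size=(50, 3))
overlay = truth + rng.normal(0, 2.0, size=(50, 3))   # ~2 mm tracking noise
mean, sd, p95 = reprojection_stats(overlay, truth)
print(round(mean, 1), "±", round(sd, 1), "mm; p95 =", round(p95, 1), "mm")
```

Reporting the 95th percentile alongside the mean, as the study does, captures the tail of large misregistrations that a mean alone would hide.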
Wang, J; Wang, X; Wang, W Y; Liu, J Q; Xing, Z Y; Wang, X
2016-07-01
To explore the feasibility, safety and clinical application value of sentinel lymph node biopsy (SLNB) in patients with breast cancer after local lumpectomy. Clinical data of 195 patients who previously received local lumpectomy from January 2005 to April 2015 were retrospectively analyzed. All the patients with pathologic stage T1-2N0M0 breast cancer underwent SLNB. Methylene blue, carbon nanoparticles suspension, technetium-99m-labeled dextran, or a combination were used in the SLNB. The interval from lumpectomy to SLNB was 1-91 days (mean, 18.3 days) and the maximum diameter of tumors before the first operation was 0.2-4.5 cm (mean, 1.8 cm). The sentinel lymph node was successfully found in all the cases and the detection rate was 100%. Forty-two patients received axillary lymph node dissection (ALND); 19 patients had pathologically positive sentinel lymph nodes, with an accuracy rate of 97.6%, sensitivity of 95.0%, false negative rate of 5.0%, specificity of 100%, and a false positive rate of 0. Logistic regression analysis suggested that the age of patients was significantly associated with sentinel lymph node metastasis after local lumpectomy. For early breast cancer after breast tumor biopsy, the influence of local lumpectomy on the detection rate and accuracy of sentinel lymph node biopsy is not significant. Sentinel lymph node biopsy with an appropriately chosen tracing technique may still provide a high detection rate and accuracy.
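The reported figures are mutually consistent with a simple confusion-matrix breakdown among the 42 ALND patients: TP = 19, FN = 1, TN = 22, FP = 0 (an inferred split, not stated in the abstract). The metrics can be reproduced from these counts:

```python
def slnb_metrics(tp, fn, tn, fp):
    """Standard diagnostic-test metrics, returned as percentages."""
    total = tp + fn + tn + fp
    return {
        "accuracy": 100 * (tp + tn) / total,
        "sensitivity": 100 * tp / (tp + fn),
        "false_negative_rate": 100 * fn / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
    }

# counts inferred from the reported figures (19 positive nodes
# among 42 ALND patients): TP=19, FN=1, TN=22, FP=0
m = slnb_metrics(tp=19, fn=1, tn=22, fp=0)
print({k: round(v, 1) for k, v in m.items()})
# → {'accuracy': 97.6, 'sensitivity': 95.0, 'false_negative_rate': 5.0, 'specificity': 100.0}
```

Note that sensitivity and false negative rate are complements, which is why the abstract's 95.0% and 5.0% sum to 100%.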
Vector disparity sensor with vergence control for active vision systems.
Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo
2012-01-01
This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after the image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity at up to 32 fps on VGA-resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performances of the presented approaches are discussed in terms of frame rate, resource utilization, and accuracy. On the basis of these results, our study indicates that the gradient-based approach is the best trade-off choice for integration with the active vision system. PMID:22438737
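The luminance (gradient-based) engine mentioned above belongs to the family of Lucas-Kanade-style estimators. The FPGA implementation details aren't given; a minimal single-patch 2-D version on a synthetic image pair illustrates the underlying normal equations:

```python
import numpy as np

def gradient_disparity(left, right):
    """Single-patch 2-D disparity via a Lucas-Kanade-style gradient
    (luminance) method: solve the 2x2 normal equations A d = b."""
    Ix = np.gradient(left, axis=1)     # horizontal spatial gradient
    Iy = np.gradient(left, axis=0)     # vertical spatial gradient
    It = right - left                  # temporal (inter-view) difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)       # (dx, dy) disparity vector

# synthetic pair: a smooth blob shifted by a sub-pixel amount
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
d = gradient_disparity(blob(32, 32), blob(32.4, 31.8))
print(np.round(d, 1))   # ≈ [0.4, -0.2]
```

A real multiscale engine applies this per-pixel over windows and warps across pyramid levels; the single-scale linearization above only holds for small displacements.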
Pressure-Sensitive Paint Measurements on Surfaces with Non-Uniform Temperature
NASA Technical Reports Server (NTRS)
Bencic, Timothy J.
1999-01-01
Pressure-sensitive paint (PSP) has become a useful tool to augment conventional pressure taps in measuring the surface pressure distribution of aerodynamic components in wind tunnel testing. While the PSP offers the advantage of a non-intrusive global mapping of the surface pressure, one prominent drawback to the accuracy of this technique is the inherent temperature sensitivity of the coating's luminescent intensity. A typical aerodynamic surface PSP test has relied on the coated surface to be both spatially and temporally isothermal, along with conventional instrumentation for an in situ calibration to generate the highest accuracy pressure mappings. In some tests however, spatial and temporal thermal gradients are generated by the nature of the test as in a blowing jet impinging on a surface. In these cases, the temperature variations on the painted surface must be accounted for in order to yield high accuracy and reliable data. A new temperature correction technique was developed at NASA Lewis to collapse a "family" of PSP calibration curves to a single intensity ratio versus pressure curve. This correction allows a streamlined procedure to be followed whether or not temperature information is used in the data reduction of the PSP. This paper explores the use of conventional instrumentation such as thermocouples and pressure taps along with temperature-sensitive paint (TSP) to correct for the thermal gradients that exist in aeropropulsion PSP tests. Temperature corrected PSP measurements for both a supersonic mixer ejector and jet cavity interaction tests are presented.
Application of Deep Learning in GLOBELAND30-2010 Product Refinement
NASA Astrophysics Data System (ADS)
Liu, T.; Chen, X.
2018-04-01
GlobeLand30, one of the best Global Land Cover (GLC) products at 30-m resolution, has been widely used in many research fields. Due to the significant spectral confusion among different land cover types and the limited textural information of Landsat data, the overall accuracy of GlobeLand30 is about 80%. Although such accuracy is much higher than that of most other global land cover products, it cannot satisfy various applications. There is still a great need for an effective method to improve the quality of GlobeLand30. The explosion of high-resolution satellite images and the remarkable performance of deep learning on image classification provide a new opportunity to refine GlobeLand30. However, the performance of deep learning depends on the quality and quantity of training samples as well as the model training strategy. Therefore, this paper 1) proposed an automatic training sample generation method via Google Earth to build a large training sample set; and 2) explored the best training strategy for land cover classification using GoogleNet (Inception V3), one of the most widely used deep learning networks. The result shows that fine-tuning from the first layer of Inception V3 using the rough large sample set is the best strategy. The retrained network was then applied to one selected area of Xi'an city as a case study of GlobeLand30 refinement. The experimental results indicate that the proposed approach, combining deep learning and Google Earth imagery, is a promising solution for further improving the accuracy of GlobeLand30.
Rapid and Accurate Sequencing of Enterovirus Genomes Using MinION Nanopore Sequencer.
Wang, Ji; Ke, Yue Hua; Zhang, Yong; Huang, Ke Qiang; Wang, Lei; Shen, Xin Xin; Dong, Xiao Ping; Xu, Wen Bo; Ma, Xue Jun
2017-10-01
Knowledge of an enterovirus genome sequence is very important in epidemiological investigation to identify transmission patterns and ascertain the extent of an outbreak. The MinION sequencer is increasingly used to sequence various viral pathogens in many clinical situations because of its long reads, portability, real-time accessibility of sequenced data, and very low initial costs. However, information is lacking on MinION sequencing of enterovirus genomes. In this proof-of-concept study using Enterovirus 71 (EV71) and Coxsackievirus A16 (CA16) strains as examples, we established an amplicon-based whole genome sequencing method using MinION. We explored the accuracy, minimum sequencing time, discrimination and high-throughput sequencing ability of MinION, and compared its performance with Sanger sequencing. Within the first minute (min) of sequencing, the accuracy of MinION was 98.5% for the single EV71 strain and 94.12%-97.33% for 10 genetically-related CA16 strains. In as little as 14 min, 99% identity was reached for the single EV71 strain, and in 17 min (on average), 99% identity was achieved for 10 CA16 strains in a single run. MinION is suitable for whole genome sequencing of enteroviruses with sufficient accuracy and fine discrimination and has the potential as a fast, reliable and convenient method for routine use. Copyright © 2017 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
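The identity percentages quoted above are, at their simplest, per-base agreement over the aligned read. A toy gap-free version (real pipelines align read and reference first; this sketch only counts substitutions, so the sequences and error positions are made up):

```python
def percent_identity(read, reference):
    """Naive per-base identity over the overlapping length (no gaps),
    a simplified stand-in for the identity metric used for MinION reads."""
    n = min(len(read), len(reference))
    matches = sum(a == b for a, b in zip(read[:n], reference[:n]))
    return 100.0 * matches / n

# toy example: two substitution errors in a 100-base read
ref = "ACGT" * 25
read = ref[:10] + "T" + ref[11:50] + "A" + ref[51:]
print(percent_identity(read, ref))   # → 98.0
```

Consensus identity rises with sequencing time because additional passes over the same amplicon average out the per-read nanopore errors, which is the effect the 14- and 17-minute figures describe.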
Improving accuracy of Plenoptic PIV using two light field cameras
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Timothy
2017-11-01
Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single-camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. It is found that the addition of a second camera improves the accuracy in all three directions and nearly eliminates any differences between them. This improvement is illustrated using both synthetic and real experiments conducted on a vortex ring using both one and two plenoptic cameras.
SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.
Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru
2014-01-01
Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM functions well only on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306
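SVM-RFE works by repeatedly fitting a linear classifier and discarding the feature with the smallest absolute weight. A minimal sketch with an ordinary least-squares fit standing in for the linear SVM (synthetic data; the procedure assumes features are on comparable scales):

```python
import numpy as np

def rfe_rank(X, y, n_keep):
    """Minimal recursive feature elimination: repeatedly fit a linear
    model (least squares here, standing in for a linear SVM) and drop
    the feature with the smallest absolute weight."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        active.pop(int(np.argmin(np.abs(w))))   # eliminate weakest feature
    return active

# synthetic data: only features 0 and 3 carry signal
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = np.sign(2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200))
print(sorted(rfe_rank(X, y, n_keep=2)))   # → [0, 3]
```

Dropping one feature per round (rather than ranking once) matters because the surviving weights are refit after each elimination, letting correlated features be re-evaluated.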
Learning to combine high variability with high precision: lack of transfer to a different task.
Wu, Yen-Hsun; Truglio, Thomas S; Zatsiorsky, Vladimir M; Latash, Mark L
2015-01-01
The authors studied effects of practicing a 4-finger accurate force production task on multifinger coordination quantified within the uncontrolled manifold hypothesis. During practice, task instability was modified by changing visual feedback gain based on accuracy of performance. The authors also explored the retention of these effects, and their transfer to a prehensile task. Subjects practiced the force production task for 2 days. After the practice, total force variability decreased and performance became more accurate. In contrast, variance of finger forces showed a tendency to increase during the first practice session while in the space of finger modes (hypothetical commands to fingers) the increase was under the significance level. These effects were retained for 2 weeks. No transfer of these effects to the prehensile task was seen, suggesting high specificity of coordination changes. The retention of practice effects without transfer to a different task suggests that further studies on a more practical method of improving coordination are needed.
Martinez, G T; Rosenauer, A; De Backer, A; Verbeeck, J; Van Aert, S
2014-02-01
High angle annular dark field scanning transmission electron microscopy (HAADF STEM) images provide sample information which is sensitive to the chemical composition. The image intensities indeed scale with the mean atomic number Z. To some extent, chemically different atomic column types can therefore be visually distinguished. However, in order to quantify the atomic column composition with high accuracy and precision, model-based methods are necessary. For this purpose, an empirical incoherent parametric imaging model can be used, whose unknown parameters are determined using statistical parameter estimation theory (Van Aert et al., 2009, [1]). In this paper, it will be shown how this method can be combined with frozen lattice multislice simulations in order to evolve from a relative toward an absolute quantification of the composition of single atomic columns with mixed atom types. Furthermore, the validity of the model assumptions is explored and discussed. © 2013 Published by Elsevier B.V. All rights reserved.
High-Fidelity 3D-Nanoprinting via Focused Electron Beams: Growth Fundamentals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkler, Robert; Lewis, Brett B.; Fowlkes, Jason Davidson
2018-02-14
While 3D-printing is currently experiencing significant growth and having a significant impact on science and technology, the expansion into the nanoworld is still a highly challenging task. Among the increasing number of approaches, focused electron-beam-induced deposition (FEBID) was recently demonstrated to be a viable candidate toward a generic direct-write fabrication technology with spatial nanometer accuracy for complex shaped 3D-nanoarchitectures. In this comprehensive study, we explore the parameter space for 3D-FEBID and investigate the implications of individual and interdependent parameters on freestanding nanosegments, which act as a fundamental building block for complex 3D-structures. In particular, the study provides new basic insights, such as precursor transport limitations and angle-dependent growth rates, both essential for high-fidelity fabrication. In conclusion, complemented by practical aspects, we provide both basic insights into 3D-growth dynamics and technical guidance for specific process adaptation to enable predictable and reliable direct-write synthesis of freestanding 3D-nanoarchitectures.
NASA Technical Reports Server (NTRS)
Cramer, Alexander Krishnan
2014-01-01
This work covers the design and test of a machine vision algorithm for generating high-accuracy pitch and yaw pointing solutions relative to the sun on a high altitude balloon. It describes how images were constructed by focusing an image of the sun onto a plate printed with a pattern of small cross-shaped fiducial markers. Images of this plate taken with an off-the-shelf camera were processed to determine the position of the balloon payload relative to the sun. The algorithm is broken into four problems: circle detection, fiducial detection, fiducial identification, and image registration. Circle detection is handled by an "Average Intersection" method, fiducial detection by a matched filter approach, and identification with an ad-hoc method based on the spacing between fiducials. Performance is verified on real test data where possible, but otherwise uses artificially generated data. Pointing knowledge is ultimately verified to meet the 20 arcsecond requirement.
Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter
2017-06-28
High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
Zhong, Lin; Tang, Yuan-Jiao; Yang, Yu-Jia; Qiu, Li
2017-01-01
To explore the value of high frequency color Doppler ultrasonography in differentiating benign and malignant solid skin tumors. Clinical and ultrasonic data of cutaneous solid tumors confirmed by pathology in our hospital were collected. The differences in clinical and sonographic features between benign and malignant tumors were statistically analyzed. A total of 512 patients, involving 527 cases of solid skin tumors, were enrolled in this study. Ultrasound detected 99.43% of the cases, with 99.02% accuracy in locating the lesions. The benign and malignant tumors showed differences in patient age, location, multiple occurrence, depth, surface skin condition, tumor size, echo, morphology, uniformity, calcification, blood flow status, tumor rear area and peripheral echo, and pathological requests (P < 0.05). High frequency ultrasound has an excellent detection rate for skin tumors and can accurately locate the depth of skin invasion. Benign and malignant skin tumors show differences in a number of clinical and ultrasound features.
Thematic accuracy assessment of the 2011 National Land Cover Database (NLCD)
Wickham, James; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Sorenson, Daniel G.; Granneman, Brian J.; Poss, Richard V.; Baer, Lori Anne
2017-01-01
Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for the 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no-change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest loss, forest gain, and urban gain had user's accuracies that exceeded 70%. Lower user's accuracies for the other change reporting themes may be attributable to the difficulty in determining the context of grass (e.g., open urban, grassland, agriculture) and between the components of the forest-shrubland-grassland gradient at either the mapping phase, the reference label assignment phase, or both.
NLCD 2011 user's accuracies for forest loss, forest gain, and urban gain compare favorably with results from other land cover change accuracy assessments.
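The overall, user's, and producer's accuracies used throughout the assessment come directly from a confusion matrix of map versus reference labels. A sketch with a hypothetical 3-class matrix (the counts are illustrative, not NLCD data):

```python
import numpy as np

def accuracy_report(confusion):
    """Overall, user's (rows = map labels), and producer's (columns =
    reference labels) accuracies from a square confusion matrix of counts."""
    c = np.asarray(confusion, float)
    overall = np.trace(c) / c.sum()
    users = np.diag(c) / c.sum(axis=1)       # 1 - commission error
    producers = np.diag(c) / c.sum(axis=0)   # 1 - omission error
    return overall, users, producers

# hypothetical 3-class example (e.g., forest / urban / other)
c = [[85, 10,  5],
     [ 8, 80, 12],
     [ 7,  5, 88]]
overall, users, producers = accuracy_report(c)
print(round(overall, 3))   # → 0.843
print(np.round(users, 2), np.round(producers, 2))
```

User's accuracy is the map reader's view (how often a mapped class is right on the ground), producer's accuracy the analyst's view (how often a ground class is mapped correctly); the abstract's change-theme results show the two can diverge sharply.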
Investigations of fluid-strain interaction using Plate Boundary Observatory borehole data
NASA Astrophysics Data System (ADS)
Boyd, Jeffrey Michael
Software has a great impact on the energy efficiency of any computing system -- it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores ways in which software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the Log-likelihood Ratio Test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition. We discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise." We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform.
We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as FFT or DCT coefficients does not compress the signal and uses more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is proportional to the inverse of grouping size, so it falls as grouping size goes up. For features that depend on the grouping size, such as FFT, energy increases with the logarithm of grouping size, so energy consumption rises slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption, and that the energy consumed for the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
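The grouping-size trade-off described above can be made concrete with a toy energy model. All constants below are illustrative placeholders, not the thesis's measured values; only the functional forms (per-group overhead amortized as 1/G for scalar features, ~G·log2(G) work per group for an FFT) follow the text:

```python
from math import log2

def energy_scalar(G, e_sample=1.0, e_group=50.0, n=1024):
    """Scalar feature: fixed per-group overhead amortized over G samples,
    so total energy falls as grouping size G grows."""
    return n * e_sample + (n / G) * e_group

def energy_fft(G, e_sample=1.0, e_op=0.2, n=1024):
    """FFT feature: (n/G) groups each costing ~G*log2(G) operations,
    so total energy grows with log2(G)."""
    return n * e_sample + (n / G) * G * log2(G) * e_op

for G in (8, 32, 128):
    print(G, round(energy_scalar(G)), round(energy_fft(G)))
```

Running the loop shows the scalar cost decreasing and the FFT cost slowly increasing with G, matching the two trends stated in the text.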
Relationship between accuracy and complexity when learning underarm precision throwing.
Valle, Maria Stella; Lombardo, Luciano; Cioni, Matteo; Casabona, Antonino
2018-06-12
Learning precision ball throwing has mostly been studied to explore the early rapid improvement of accuracy, with little attention to possible adaptive processes occurring later, when the rate of improvement is reduced. Here, we tried to demonstrate that the strategy for selecting angle, speed and height at ball release can still be managed during the learning periods following performance stabilization. To this aim, we used a multivariate linear model with angle, speed and height as predictors of changes in accuracy. Participants performed underarm throws of a tennis ball to hit a target on the floor, 3.42 m away. Two training sessions (S1, S2) and one retention test were executed. Performance accuracy increased over S1 and stabilized during S2, with a rate of change along the throwing axis slower than along the orthogonal axis. However, both axes contributed to the performance changes over the learning and consolidation time. A stable relationship between accuracy and the release parameters was observed only during S2, with a good fraction of the performance variance explained by the combination of speed and height. All the variations were maintained during the retention test. Overall, accuracy improvements and the reduction in throwing complexity at ball release followed separate timing over the course of learning and consolidation.
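As an orientation aid, a multivariate linear model of this general form (accuracy regressed on release angle, speed, and height) can be fit by ordinary least squares. The data and coefficients below are synthetic, chosen only for illustration, and are not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # synthetic throws

# Hypothetical release parameters for each throw
angle = rng.uniform(20, 60, n)     # release angle, degrees
speed = rng.uniform(3, 6, n)       # release speed, m/s
height = rng.uniform(0.8, 1.2, n)  # release height, m

# Synthetic accuracy measure (distance-to-target error), made up for illustration
error = 0.5 - 0.02 * speed - 0.1 * height + rng.normal(0, 0.01, n)

# Design matrix: intercept plus the three release predictors
X = np.column_stack([np.ones(n), angle, speed, height])
beta, *_ = np.linalg.lstsq(X, error, rcond=None)

# Fraction of performance variance explained by the release parameters (R^2)
pred = X @ beta
r2 = 1 - np.sum((error - pred) ** 2) / np.sum((error - error.mean()) ** 2)
```

The abstract's "fraction of the performance variance explained by the combination of speed and height" corresponds to an R^2-type statistic of this kind, computed per learning period.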
No evidence for unethical amnesia for imagined actions: A failed replication and extension.
Stanley, Matthew L; Yang, Brenda W; De Brigard, Felipe
2018-03-12
In a recent study, Kouchaki and Gino (2016) suggest that memory for unethical actions is impaired, regardless of whether such actions are real or imagined. However, as we argue in the current study, their claim that people develop "unethical amnesia" confuses two distinct and dissociable memory deficits: one affecting the phenomenology of remembering and another affecting memory accuracy. To further investigate whether unethical amnesia affects memory accuracy, we conducted three studies exploring unethical amnesia for imagined ethical violations. The first study (N = 228) attempts to directly replicate the only study from Kouchaki and Gino (2016) that includes a measure of memory accuracy. The second study (N = 232) attempts again to replicate these accuracy effects from Kouchaki and Gino (2016), while including several additional variables meant to potentially help in finding the effect. The third study (N = 228) is an attempted conceptual replication using the same paradigm as Kouchaki and Gino (2016), but with a new vignette describing a different moral violation. We did not find an unethical amnesia effect involving memory accuracy in any of our three studies. These results cast doubt upon the claim that memory accuracy is impaired for imagined unethical actions. Suggestions for further ways to study memory for moral and immoral actions are discussed.
ERIC Educational Resources Information Center
Cox, Philip L.
This material is an instructional unit on measuring and estimating. A variety of activities are used with manipulative devices, worksheets, and discussion questions included. Major topics are estimating lengths, accuracy of measurement, metric system, scale drawings, and conversion between different units. A teacher's guide is also available.…
Accurate time delay technology in simulated test for high precision laser range finder
NASA Astrophysics Data System (ADS)
Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi
2015-10-01
With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps increasing, so the demand for LRF maintenance is also rising. Following the guiding idea of simulating spatial distance with time delay in simulated tests for pulsed range finders, the key to distance simulation precision lies in the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, we propose a method to improve the accuracy of the circuit delay without increasing the count frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was measured with a high-sampling-rate oscilloscope. The measurement result shows that the accuracy of the distance simulated by the circuit delay is improved from +/- 0.75 m to +/- 0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated tests for high-precision pulsed range finders.
Social power facilitates the effect of prosocial orientation on empathic accuracy.
Côté, Stéphane; Kraus, Michael W; Cheng, Bonnie Hayden; Oveis, Christopher; van der Löwe, Ilmo; Lian, Hua; Keltner, Dacher
2011-08-01
Power increases the tendency to behave in a goal-congruent fashion. Guided by this theoretical notion, we hypothesized that elevated power would strengthen the positive association between prosocial orientation and empathic accuracy. In 3 studies with university and adult samples, prosocial orientation was more strongly associated with empathic accuracy when distinct forms of power were high than when power was low. In Study 1, a physiological indicator of prosocial orientation, respiratory sinus arrhythmia, exhibited a stronger positive association with empathic accuracy in a face-to-face interaction among dispositionally high-power individuals. In Study 2, experimentally induced prosocial orientation increased the ability to accurately judge the emotions of a stranger but only for individuals induced to feel powerful. In Study 3, a trait measure of prosocial orientation was more strongly related to scores on a standard test of empathic accuracy among employees who occupied high-power positions within an organization. Study 3 further showed a mediated relationship between prosocial orientation and career satisfaction through empathic accuracy among employees in high-power positions but not among employees in lower power positions. Discussion concentrates upon the implications of these findings for studies of prosociality, power, and social behavior.
Figure correction of a metallic ellipsoidal neutron focusing mirror
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jiang, E-mail: jiang.guo@riken.jp; Yamagata, Yutaka; Morita, Shin-ya
2015-06-15
An increasing number of neutron focusing mirrors is being adopted in neutron scattering experiments in order to provide high fluxes at sample positions, reduce measurement time, and/or increase statistical reliability. To realize a small focusing spot and high beam intensity, mirrors with both high form accuracy and low surface roughness are required. To achieve this, we propose a new figure correction technique to fabricate a two-dimensional neutron focusing mirror made with electroless nickel-phosphorus (NiP) by effectively combining ultraprecision shaper cutting and fine polishing. An arc envelope shaper cutting method is introduced to generate high form accuracy, while a fine polishing method, in which the material is removed effectively without losing profile accuracy, is developed to reduce the surface roughness of the mirror. High form accuracy in the minor axis and the major axis is obtained through tool profile error compensation and corrective polishing, respectively, and low surface roughness is acquired under a low polishing load. As a result, an ellipsoidal neutron focusing mirror is successfully fabricated with high form accuracy of 0.5 μm peak-to-valley and low surface roughness of 0.2 nm root-mean-square.
Preliminary Results on Lunar Interior Properties from the GRAIL Mission
NASA Technical Reports Server (NTRS)
Williams, James G.; Konopliv, Alexander S.; Asmar, Sami W.; Lemoine, Frank G.; Melosh, H. Jay; Neumann, Gregory A.; Phillips, Roger J.; Smith, David E.; Solomon, Sean C.; Watkins, Michael M.;
2013-01-01
The Gravity Recovery and Interior Laboratory (GRAIL) mission has provided lunar gravity with unprecedented accuracy and resolution. GRAIL has produced a high-resolution map of the lunar gravity field while also determining tidal response. We present the latest gravity field solution and its preliminary implications for the Moon's interior structure, exploring properties such as the mean density, moment of inertia of the solid Moon, and tidal potential Love number k2. Lunar structure includes a thin crust, a deep mantle, a fluid core, and a suspected solid inner core. An accurate Love number mainly improves knowledge of the fluid core and deep mantle. In the future GRAIL will search for evidence of tidal dissipation and a solid inner core.
John Butcher and hybrid methods
NASA Astrophysics Data System (ADS)
Mehdiyeva, Galina; Imanova, Mehriban; Ibrahimov, Vagif
2017-07-01
As is known, there are mainly two classes of numerical methods for solving ODEs, commonly called one-step and multistep methods. Each of these classes has certain advantages and disadvantages. It is natural that a method with the better properties of both should be constructed at their junction. In the middle of the XX century, Butcher and Gear constructed such methods at the junction of the Runge-Kutta and Adams methods, which are called hybrid methods. Here we consider the construction of certain generalizations of hybrid methods with a high order of accuracy and explore their application to solving ordinary differential, Volterra integral, and integro-differential equations. We have also constructed some specific hybrid methods of degree p ≤ 10.
Tracking prominent points in image sequences
NASA Astrophysics Data System (ADS)
Hahn, Michael
1994-03-01
Measuring image motion and inferring scene geometry and camera motion are main aspects of image sequence analysis. The determination of image motion and the structure-from-motion problem are tasks that can be addressed independently or in cooperative processes. In this paper we focus on tracking prominent points. High stability, reliability, and accuracy are criteria for the extraction of prominent points. This implies that tracking should work quite well with those features; unfortunately, the reality looks quite different. In the experimental investigations we processed a long sequence of 128 images. This mono sequence is taken in an outdoor environment at the experimental field of Mercedes Benz in Rastatt. Different tracking schemes are explored and the results with respect to stability and quality are reported.
Microscale Determination of Vitamin C by Weight Titrimetry
NASA Astrophysics Data System (ADS)
East, Gaston A.; Nascimento, Erica C.
2002-01-01
A laboratory experiment involving the quantitative microscale determination of ascorbic acid in pharmaceutical tablets of vitamin C by weight-titrimetry using (diacetoxyiodo)benzene as titrant and differential electrolytic potentiometry to locate the end-point is presented. The experiment affords an opportunity for students to explore nonconventional techniques such as the use of a novel organic oxidimetric titrant, titration in a semiaqueous medium, gravimetric titration, and an electrometric method of end-point detection using polarized electrodes. Synthesis, purification, and purity checking of the titrant may also be included in the project. Some advantages of the method are very low reagent consumption, low-cost equipment, improved sensitivity, and high precision and accuracy. Furthermore, the experiment will help the student to relate chemical analysis to everyday life.
Energy reconstruction in the long-baseline neutrino experiment.
Mosel, U; Lalakulich, O; Gallmeister, K
2014-04-18
The Long-Baseline Neutrino Experiment aims at measuring fundamental physical parameters to high precision and exploring physics beyond the standard model. Nuclear targets introduce complications towards that aim. We investigate the uncertainties in the energy reconstruction, based on quasielastic scattering relations, due to nuclear effects. The reconstructed event distributions as a function of energy tend to be smeared out and shifted by several 100 MeV in their oscillatory structure if standard event selection is used. We show that a more restrictive experimental event selection offers the possibility to reach the accuracy needed for a determination of the mass ordering and the CP-violating phase. Quasielastic-based energy reconstruction could thus be a viable alternative to the calorimetric reconstruction also at higher energies.
Dual energy computed tomography for the head.
Naruto, Norihito; Itoh, Toshihide; Noguchi, Kyo
2018-02-01
Dual energy CT (DECT) is a promising technology that provides better diagnostic accuracy in several brain diseases. DECT can generate various types of CT images from a single acquisition data set at high kV and low kV based on material decomposition algorithms. The two-material decomposition algorithm can separate bone/calcification from iodine accurately. The three-material decomposition algorithm can generate a virtual non-contrast image, which helps to identify conditions such as brain hemorrhage. A virtual monochromatic image has the potential to eliminate metal artifacts by reducing beam-hardening effects. DECT also enables exploration of advanced imaging to make diagnosis easier. One such novel application of DECT is the X-Map, which helps to visualize ischemic stroke in the brain without using iodine contrast medium.
Friend suggestion in social network based on user log
NASA Astrophysics Data System (ADS)
Kaviya, R.; Vanitha, M.; Sumaiya Thaseen, I.; Mangaiyarkarasi, R.
2017-11-01
Simple friend-recommendation algorithms based on similarity, popularity, and social aspects are the basic building blocks that must be explored to methodically form high-performance social friend recommendation. In the proposed system, we use an algorithm for network correlation-based social friend recommendation (NC-based SFR), which includes user activities such as where one lives and works. This new friend recommendation method is based on network correlation and considers the effect of different social roles. To model the correlation between different networks, we develop a method that aligns these networks through important feature selection. By preserving the network structure, the method produces better recommendations and significantly improves friend-recommendation accuracy.
Ashkenazi Jews and Breast Cancer: The Consequences of Linking Ethnic Identity to Genetic Disease
Brandt-Rauf, Sherry I.; Raveis, Victoria H.; Drummond, Nathan F.; Conte, Jill A.; Rothman, Sheila M.
2006-01-01
We explored the advantages and disadvantages of using ethnic categories in genetic research. With the discovery that certain breast cancer gene mutations appeared to be more prevalent in Ashkenazi Jews, breast cancer researchers moved their focus from high-risk families to ethnicity. The concept of Ashkenazi Jews as genetically unique, a legacy of Tay–Sachs disease research and a particular reading of history, shaped this new approach even as methodological imprecision and new genetic and historical research challenged it. Our findings cast doubt on the accuracy and desirability of linking ethnic groups to genetic disease. Such linkages exaggerate genetic differences among ethnic groups and lead to unequal access to testing and therapy. PMID:17018815
Maleki, Ehsan; Babashah, Hossein; Koohi, Somayyeh; Kavehvash, Zahra
2017-07-01
This paper presents an optical processing approach for exploring a large number of genome sequences. Specifically, we propose an optical correlator for global alignment and an extended moiré matching technique for local analysis of spatially coded DNA, whose output is fed to a novel three-dimensional artificial neural network for local DNA alignment. All-optical implementation of the proposed 3D artificial neural network is developed and its accuracy is verified in Zemax. Thanks to its parallel processing capability, the proposed structure performs local alignment of 4 million sequences of 150 base pairs in a few seconds, which is much faster than its electrical counterparts, such as the basic local alignment search tool.
Cognitive factors in the close visual and magnetic particle inspection of welds underwater.
Leach, J; Morris, P E
1998-06-01
Underwater close visual inspection (CVI) and magnetic particle inspection (MPI) are major components of the commercial diver's job of nondestructive testing and the maintenance of subsea structures. We explored the accuracy of CVI in Experiment 1 and that of MPI in Experiment 2 and observed high error rates (47% and 24%, respectively). Performance was strongly correlated with embedded figures and visual search tests and was unrelated to length of professional diving experience, formal inspection qualification, or age. Cognitive tests of memory for designs, spatial relations, dotted outlines, and block design failed to correlate with performance. Actual or potential applications of this research include more reliable inspection reporting, increased effectiveness from current inspection techniques, and directions for the refinement of subsea inspection equipment.
MOLA: The Future of Mars Global Cartography
NASA Technical Reports Server (NTRS)
Duxbury, T. C.; Smith, D. E.; Zuber, M. T.; Frey, H. V.; Garvin, J. B.; Head, J. W.; Muhleman, D. O.; Pettengill, G. H.; Phillips, R. J.; Solomon, S. C.
1999-01-01
The MGS Orbiter is carrying the high-precision Mars Orbiter Laser Altimeter (MOLA) which, when combined with precision reconstructed orbital data and telemetered attitude data, provides a tie between inertial space and Mars-fixed coordinates to an accuracy of 100 m in latitude/longitude and 10 m in radius (1 sigma), orders of magnitude more accurate than previous global geodetic/cartographic control data. Over the 2 year MGS mission lifetime, it is expected that over 30,000 MOLA Global Cartographic Control Points will be produced to form the basis for new and re-derived map and geodetic products, key to the analysis of existing and evolving MGS data as well as future Mars exploration. Additional information is contained in the original extended abstract.
Atmospheric Fluorescence Yield
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.; Christl, M. J.; Fountain, W. F.; Gregory, J. C.; Martens, K.; Sokolsky, P.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Several existing and planned experiments estimate the energies of ultra-high energy cosmic rays from air showers using the atmospheric fluorescence from these showers. Accurate knowledge of the conversion from atmospheric fluorescence to energy loss by ionizing particles in the atmosphere is key to this technique. In this paper we discuss a small balloon-borne instrument to make the first in situ measurements versus altitude of the atmospheric fluorescence yield. The instrument can also be used in the lab to investigate the dependence of the fluorescence yield in air on temperature, pressure, and the concentrations of other gases that are present in the atmosphere. The results can be used to explore environmental effects on, and improve the accuracy of, cosmic ray energy measurements for existing ground-based experiments and future space-based experiments.
Extended bounds limiter for high-order finite-volume schemes on unstructured meshes
NASA Astrophysics Data System (ADS)
Tsoutsanis, Panagiotis
2018-06-01
This paper explores the impact of the definition of the bounds of the limiter proposed by Michalak and Ollivier-Gooch in [56] (2009), for higher-order Monotone-Upstream Central Scheme for Conservation Laws (MUSCL) numerical schemes on unstructured meshes in the finite-volume (FV) framework. A new modification of the limiter is proposed where the bounds are redefined by utilising all the spatial information provided by all the elements in the reconstruction stencil. Numerical results obtained on smooth and discontinuous test problems of the Euler equations on unstructured meshes, highlight that the newly proposed extended bounds limiter exhibits superior performance in terms of accuracy and mesh sensitivity compared to the cell-based or vertex-based bounds implementations.
Portable oil bath for high-accuracy resistance transfer and maintenance
NASA Astrophysics Data System (ADS)
Shiota, Fuyuhiko
1999-10-01
A portable oil bath containing one standard resistor for high-accuracy resistance transfer and maintenance was developed and operated for seven years in the National Research Laboratory of Metrology. The aim of the bath is to save labor and apparatus for high-accuracy resistance transfer and maintenance by consistently keeping the standard resistor in an optimum environmental condition. The details of the prototype system, including its performance, are described together with some suggestions for a more practical bath design, which adopts the same concept.
Development of CFRP mirrors for space telescopes
NASA Astrophysics Data System (ADS)
Utsunomiya, Shin; Kamiya, Tomohiro; Shimizu, Ryuzo
2013-09-01
CFRP (carbon fiber reinforced plastics) has superior properties of high specific elasticity and low thermal expansion for satellite telescope structures. However, difficulties in achieving the required surface accuracy and ensuring stability in orbit have discouraged CFRP application for main mirrors. We have developed ultra-lightweight, high-precision CFRP mirrors with sandwich structures composed of CFRP skins and CFRP cores using a replica technique. The shape accuracy of the demonstrated mirrors, 150 mm in diameter, was 0.8 μm RMS (root mean square) and the surface roughness was 5 nm RMS as fabricated. Further optimization of the fabrication process conditions to improve surface accuracy was studied using flat sandwich panels. The surface accuracy of flat CFRP sandwich panels 150 mm square was then improved to a flatness of 0.2 μm RMS with a surface roughness of 6 nm RMS. The surface accuracy vs. size of the trial models indicated a high possibility of fabricating mirrors over 1 m in size with a surface accuracy of 1 μm. The feasibility of CFRP mirrors for low-temperature applications was examined for the JASMINE project as an example, and the stability of the surface accuracy of CFRP mirrors against temperature and moisture was discussed.
3D Printed Shock Mitigating Structures
NASA Astrophysics Data System (ADS)
Schrand, Amanda; Elston, Edwin; Dennis, Mitzi; Metroke, Tammy; Chen, Chenggang; Patton, Steven; Ganguli, Sabyasachi; Roy, Ajit
Here we explore the durability and shock-mitigating potential of solid and cellular 3D printed polymers and conductive inks under high strain rate, compressive shock wave, and high g acceleration conditions. Our initial designs include a simple circuit with 4 resistors embedded into circular discs and a complex cylindrical gyroid shape. A novel ink consisting of silver-coated carbon black nanoparticles in a thermoplastic polyurethane was used as the trace material. One version of the disc structural design has the advantage of allowing disassembly after testing for direct failure analysis. After increasing impacts, printed and traditionally potted circuits were examined for functionality. Additionally, in the open disc design, trace cracking and delamination of resistors could be observed. In a parallel study, we examined the shock mitigating behavior of 3D printed cellular gyroid structures on a Split Hopkinson Pressure Bar (SHPB). We explored alterations to the classic SHPB setup for testing the low impedance, cellular samples to most accurately reflect the stress state inside the sample (strain rates from 700 to 1750 s-1). We discovered that the gyroid can effectively absorb the impact of the test, resulting in crushing of the structure. Future studies aim to tailor the unit cell dimensions for certain frequencies, increase print accuracy, and optimize material compositions for conductivity and adhesion to manufacture more durable devices.
Larson, Wesley A; Seeb, Lisa W; Everett, Meredith V; Waples, Ryan K; Templin, William D; Seeb, James E
2014-01-01
Recent advances in population genomics have made it possible to detect previously unidentified structure, obtain more accurate estimates of demographic parameters, and explore adaptive divergence, potentially revolutionizing the way genetic data are used to manage wild populations. Here, we identified 10 944 single-nucleotide polymorphisms using restriction-site-associated DNA (RAD) sequencing to explore population structure, demography, and adaptive divergence in five populations of Chinook salmon (Oncorhynchus tshawytscha) from western Alaska. Patterns of population structure were similar to those of past studies, but our ability to assign individuals back to their region of origin was greatly improved (>90% accuracy for all populations). We also calculated effective size with and without removing physically linked loci identified from a linkage map, a novel method for nonmodel organisms. Estimates of effective size were generally above 1000 and were biased downward when physically linked loci were not removed. Outlier tests based on genetic differentiation identified 733 loci and three genomic regions under putative selection. These markers and genomic regions are excellent candidates for future research and can be used to create high-resolution panels for genetic monitoring and population assignment. This work demonstrates the utility of genomic data to inform conservation in highly exploited species with shallow population structure. PMID:24665338
Summary of the Results from the Lunar Orbiter Laser Altimeter after Seven Years in Lunar Orbit
NASA Technical Reports Server (NTRS)
Smith, David E.; Zuber, Maria T.; Neumann, Gregory A.; Mazarico, Erwan; Lemoine, Frank G.; Head, James W., III; Lucey, Paul G.; Aharonson, Oded; Robinson, Mark S.; Sun, Xiaoli;
2016-01-01
In June 2009 the Lunar Reconnaissance Orbiter (LRO) spacecraft was launched to the Moon. The payload consists of 7 science instruments selected to characterize sites for future robotic and human missions. Among them, the Lunar Orbiter Laser Altimeter (LOLA) was designed to obtain altimetry, surface roughness, and reflectance measurements. The primary phase of lunar exploration lasted one year, following a 3-month commissioning phase. On completion of its exploration objectives, the LRO mission transitioned to a science mission. After 7 years in lunar orbit, the LOLA instrument continues to map the lunar surface. The LOLA dataset is one of the foundational datasets acquired by the various LRO instruments. LOLA provided a high-accuracy global geodetic reference frame to which past, present and future lunar observations can be referenced. It also obtained high-resolution and accurate global topography that were used to determine regions in permanent shadow at the lunar poles. LOLA further contributed to the study of polar volatiles through its unique measurement of surface brightness at zero phase, which revealed anomalies in several polar craters that may indicate the presence of water ice. In this paper, we describe the many LOLA accomplishments to date and its contribution to lunar and planetary science.
Paliwal, Himanshu; Shirts, Michael R
2013-11-12
Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
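MBAR itself is an involved estimator, but its single-reference-state ancestor, exponential-averaging (Zwanzig) reweighting, shows the core idea of predicting an observable at an unsampled state from existing samples plus energy re-evaluations. The toy harmonic example below is illustrative only and is not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0  # 1/kT in reduced units

# Sample x from a harmonic state A with U_A(x) = 0.5 * x^2
# (exact Boltzmann draws: a standard normal at beta = 1).
x = rng.normal(0.0, 1.0, 200_000)

# Target (unsampled) state B: a stiffer spring, U_B(x) = 0.5 * k * x^2.
# Only single-point energy re-evaluations of the existing samples are needed.
k = 1.5
dU = 0.5 * (k - 1.0) * x**2        # U_B(x) - U_A(x) at each sample
w = np.exp(-beta * dU)
w /= w.sum()                        # normalized reweighting factors

# Reweighted estimate of <x^2> in state B; the exact value is 1/k.
x2_B = np.sum(w * x**2)
```

MBAR generalizes this to combine samples from many states at once, with statistically optimal weights, which is what makes the cutoff-parameter scans in the paper cheap.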
Evaluation of microRNA alignment techniques
Kaspi, Antony; El-Osta, Assam
2016-01-01
Genomic alignment of small RNA (smRNA) sequences such as microRNAs poses considerable challenges due to their short length (∼21 nucleotides [nt]) as well as the large size and complexity of plant and animal genomes. While several tools have been developed for high-throughput mapping of longer mRNA-seq reads (>30 nt), there are few that are specifically designed for mapping of smRNA reads including microRNAs. The accuracy of these mappers has not been systematically determined in the case of smRNA-seq. In addition, it is unknown whether these aligners accurately map smRNA reads containing sequence errors and polymorphisms. By using simulated read sets, we determine the alignment sensitivity and accuracy of 16 short-read mappers and quantify their robustness to mismatches, indels, and nontemplated nucleotide additions. These were explored in the context of a plant genome (Oryza sativa, ∼500 Mbp) and a mammalian genome (Homo sapiens, ∼3.1 Gbp). Analysis of simulated and real smRNA-seq data demonstrates that mapper selection impacts differential expression results and interpretation. These results will inform on best practice for smRNA mapping and enable more accurate smRNA detection and quantification of expression and RNA editing. PMID:27284164
Calculation of phonon dispersion relation using new correlation functional
NASA Astrophysics Data System (ADS)
Jitropas, Ukrit; Hsu, Chung-Hao
2017-06-01
To extend the use of the Local Density Approximation (LDA), a new analytical correlation functional is introduced. Correlation energy is an essential ingredient within density functional theory and is used to determine the ground state energy and other properties, including the phonon dispersion relation. Except in the high- and low-density limits, the general expression of the correlation energy is unknown, so an approximation approach is required. The accuracy of the modelled system depends on the quality of the correlation energy approximation. Typical correlation functionals used in LDA, such as Vosko-Wilk-Nusair (VWN) and Perdew-Wang (PW), were obtained by parameterizing the near-exact quantum Monte Carlo data of Ceperley and Alder. These functionals have complex forms and are inconvenient to implement. Alternatively, the recently published Chachiyo correlation functional provides results comparable to those of much more complicated functionals. In addition, it offers more predictive power because it is based on a first-principles approach rather than fitted functionals. Nevertheless, the performance of the Chachiyo formula for calculating the phonon dispersion relation (a key to the thermal properties of materials) has not yet been tested. Here, we initiate the implementation of the new correlation functional to calculate the phonon dispersion relation; its accuracy and validity will be explored.
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single-objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems, however, as in the HSCT example, this robust design approach breaks down with the problem of size: combinatorial explosion in experimentation and model building with the number of variables, sacrificing both efficiency and accuracy. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
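The combinatorial explosion the paper names can be made concrete: a full-factorial experiment design requires levels^k runs for k design variables, which is what defeats exhaustive metamodel building for a system with many wing-configuration and economic variables. A minimal illustration (the variable counts are arbitrary):

```python
# The "problem of size": full-factorial experimentation grows exponentially
# with the number of design variables, so building response-surface models
# over a large design space quickly becomes intractable.
def full_factorial_runs(n_vars, n_levels=2):
    return n_levels ** n_vars

for k in (5, 10, 20, 30):
    print(f"{k} variables: {full_factorial_runs(k):>12} runs (2-level), "
          f"{full_factorial_runs(k, 3):>16} runs (3-level)")
```

Fractional-factorial and other screening designs trade model fidelity for run count, which is exactly the efficiency-vs-accuracy tradeoff discussed above.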
Fast and effective characterization of 3D region of interest in medical image data
NASA Astrophysics Data System (ADS)
Kontos, Despina; Megalooikonomou, Vasileios
2004-05-01
We propose a framework for detecting, characterizing and classifying spatial Regions of Interest (ROIs) in medical images, such as tumors and lesions in MRI or activation regions in fMRI. A necessary step prior to classification is efficient extraction of discriminative features. For this purpose, we apply a characterization technique especially designed for spatial ROIs. The main idea of this technique is to extract a k-dimensional feature vector using concentric spheres in 3D (or circles in 2D) radiating out of the ROI's center of mass. These vectors form characterization signatures that can be used to represent the initial ROIs. We focus on classifying fMRI ROIs obtained from a study that explores neuroanatomical correlates of semantic processing in Alzheimer's disease (AD). We detect a ROI highly associated with AD and apply the feature extraction technique with different experimental settings. We seek to distinguish control from patient samples. We study how classification can be performed using the extracted signatures as well as how different experimental parameters affect classification accuracy. The obtained classification accuracy ranged from 82% to 87% (based on the selected ROI) suggesting that the proposed classification framework can be potentially useful in supporting medical decision-making.
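The concentric-sphere characterization described above can be sketched directly: from the ROI's center of mass, record the fraction of ROI voxels falling inside each of k concentric spheres of increasing radius. Details such as cumulative vs. shell counts and the radius spacing are illustrative assumptions here, not the authors' exact settings.

```python
import numpy as np

# Sketch of a concentric-sphere ROI signature: a k-dimensional vector of
# cumulative ROI-voxel fractions at k radii radiating from the center of mass.
def roi_signature(mask, k=8):
    """mask: 3D boolean array marking the ROI; returns a k-dim signature."""
    coords = np.argwhere(mask)
    center = coords.mean(axis=0)
    dists = np.linalg.norm(coords - center, axis=1)
    r_max = dists.max() if dists.max() > 0 else 1.0
    radii = np.linspace(r_max / k, r_max, k)
    return np.array([(dists <= r).mean() for r in radii])

mask = np.zeros((16, 16, 16), dtype=bool)
mask[4:12, 4:12, 4:12] = True          # toy cubic "ROI"
sig = roi_signature(mask)
print(sig)                              # nondecreasing, ends at 1.0
```

Signatures of this kind are rotation-tolerant by construction and feed naturally into the nearest-neighbor or other classifiers used to separate control from patient samples.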
Advancing Venus Geophysics with the NF4 VOX Gravity Investigation.
NASA Astrophysics Data System (ADS)
Iess, L.; Mazarico, E.; Andrews-Hanna, J. C.; De Marchi, F.; Di Achille, G.; Di Benedetto, M.; Smrekar, S. E.
2017-12-01
The Venus Origins Explorer (VOX) is a JPL-led New Frontiers 4 mission proposal to answer critical questions about the origin and evolution of Venus. Venus stands out among the planets as Earth's twin, and is a natural target for better understanding our own planet's place, both within our Solar System and among the ever-increasing number of known exoplanetary systems. The VOX radio science investigation will use an innovative Ka-band transponder provided by the Italian Space Agency (ASI) to map the global gravity field of Venus to much finer resolution and accuracy than the current knowledge, which is based on the NASA Magellan mission. We will present the results of comprehensive simulations performed with the NASA GSFC orbit determination and geodetic parameter estimation software GEODYN, based on a realistic mission scenario, tracking schedule, and high-fidelity Doppler tracking noise model. We will show how the achieved resolution and accuracy help fulfill the geophysical goals of the VOX mission, in particular through the mapping of subsurface crustal density or thickness variations that will inform the composition and origin of the tesserae and help ascertain the heat loss and the importance of tectonism and subduction.
Van Norman, Ethan R; Nelson, Peter M; Klingbeil, David A
2017-09-01
Educators need recommendations to improve screening practices without limiting students' instructional opportunities. Repurposing previous years' state test scores has shown promise in identifying at-risk students within multitiered systems of support. However, researchers have not directly compared the diagnostic accuracy of previous years' state test scores with data collected during fall screening periods to identify at-risk students. In addition, the benefit of using previous state test scores in conjunction with data from a separate measure to identify at-risk students has not been explored. The diagnostic accuracy of 3 types of screening approaches was tested to predict proficiency on end-of-year high-stakes assessments: state test data obtained during the previous year, data from a different measure administered in the fall, and both measures combined (i.e., a gated model). Extant reading and math data (N = 2,996) from 10 schools in the Midwest were analyzed. When used alone, both measures yielded similar sensitivity and specificity values. The gated model yielded superior specificity values compared with using either measure alone, at the expense of sensitivity. Implications, limitations, and ideas for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
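The gated model's tradeoff reported above has a simple mechanical explanation: requiring a student to be flagged by *both* measures removes false positives (raising specificity) but also drops true positives caught by only one measure (lowering sensitivity). A toy sketch with fabricated flags:

```python
# Sketch of single vs. gated screening. "Gated" flags a student only when
# both the prior-year state test and the fall screener flag them.
# All data below are fabricated for illustration.
def sens_spec(flagged, at_risk):
    tp = sum(f and a for f, a in zip(flagged, at_risk))
    fn = sum((not f) and a for f, a in zip(flagged, at_risk))
    tn = sum((not f) and (not a) for f, a in zip(flagged, at_risk))
    fp = sum(f and (not a) for f, a in zip(flagged, at_risk))
    return tp / (tp + fn), tn / (tn + fp)

at_risk = [1, 1, 1, 0, 0, 0, 0, 0]   # truth: not proficient at year end
state   = [1, 1, 0, 1, 0, 0, 0, 0]   # prior-year state test flags
fall    = [1, 0, 1, 0, 1, 0, 0, 0]   # fall screening flags
gated   = [s and f for s, f in zip(state, fall)]
for name, flags in (("state", state), ("fall", fall), ("gated", gated)):
    se, sp = sens_spec(flags, at_risk)
    print(f"{name}: sensitivity={se:.2f} specificity={sp:.2f}")
```

In this toy data the gated screen reaches perfect specificity while its sensitivity falls below either single measure, mirroring the pattern the study reports.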
Multivariate prediction of upper limb prosthesis acceptance or rejection.
Biddiss, Elaine A; Chau, Tom T
2008-07-01
To develop a model for prediction of upper limb prosthesis use or rejection. A questionnaire exploring factors in prosthesis acceptance was distributed internationally to individuals with upper limb absence through community-based support groups and rehabilitation hospitals. A total of 191 participants (59 prosthesis rejecters and 132 prosthesis wearers) were included in this study. A logistic regression model, a C5.0 decision tree, and a radial basis function neural network were developed and compared in terms of sensitivity (prediction of prosthesis rejecters), specificity (prediction of prosthesis wearers), and overall cross-validation accuracy. The logistic regression and neural network provided comparable overall accuracies of approximately 84 +/- 3%, specificity of 93%, and sensitivity of 61%. Fitting time-frame emerged as the predominant predictor. Individuals fitted within two years of birth (congenital) or six months of amputation (acquired) were 16 times more likely to continue prosthesis use. To increase rates of prosthesis acceptance, clinical directives should focus on timely, client-centred fitting strategies and the development of improved prostheses and healthcare for individuals with high-level or bilateral limb absence. Multivariate analyses are useful in determining the relative importance of the many factors involved in prosthesis acceptance and rejection.
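Since fitting time-frame emerged as the dominant predictor, the logistic-regression arm of the comparison can be sketched with a single fabricated binary feature ("fitted within the clinical window"). The model, data, and effect sizes below are illustrative assumptions, not the study's fitted coefficients.

```python
import numpy as np

# Minimal logistic-regression sketch: predict prosthesis rejection from a
# fabricated timely-fitting flag, via plain gradient descent on the
# cross-entropy loss. Illustrative only.
def fit_logistic(X, y, lr=0.1, steps=5000):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
timely = rng.integers(0, 2, 200)                 # fitted within window?
# fabricated effect: rejection far more likely when fitting was late
reject = rng.random(200) < np.where(timely == 1, 0.06, 0.5)
X = np.column_stack([np.ones(200), timely])      # intercept + feature
w = fit_logistic(X, reject.astype(float))
print("odds ratio for timely fitting:", np.exp(w[1]))  # well below 1
```

An odds ratio far below 1 for timely fitting corresponds to the study's finding that individuals fitted early were many times more likely to continue prosthesis use.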
Toward Supersonic Retropropulsion CFD Validation
NASA Technical Reports Server (NTRS)
Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl
2011-01-01
This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, which involves time-dependent shocks and vortex shedding, the design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
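The observed order-of-accuracy test mentioned above is the standard three-grid Richardson check: given a quantity computed on three systematically refined grids with refinement ratio r, the observed order is p = ln((f_coarse − f_medium)/(f_medium − f_fine))/ln(r), to be compared against the code's design order. A minimal sketch with fabricated grid data:

```python
import math

# Observed order of accuracy from three grid levels (refinement ratio r).
# A code with a pure h^p truncation error recovers p exactly on smooth data.
def observed_order(f_fine, f_medium, f_coarse, r=2.0):
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Fabricated example: exact value 1.0 plus a pure h^2 error on grids h, 2h, 4h.
exact, C = 1.0, 0.3
f_fine, f_medium, f_coarse = (exact + C * h ** 2 for h in (0.1, 0.2, 0.4))
p = observed_order(f_fine, f_medium, f_coarse)
print(p)  # ~2.0, the design order of a second-order scheme
```

For the unsteady, self-excited retropropulsion flowfield the paper studies, the monotone grid convergence this formula assumes breaks down, which is why the design order is "not clearly evident" there.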
DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.
Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei
2017-07-18
Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to exploit the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm can save up to 99.9% of memory and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to take full advantage of the algorithmic optimization. Unlike the traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation into close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments on different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
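The memory claim above follows from simple accounting: a dense float32 layer costs 32 bits per weight, while a heavily pruned layer stores only its surviving connections (an index plus a binary weight). The figures below are a back-of-envelope sketch, not the paper's measured utilization.

```python
# Back-of-envelope memory accounting for pruning + binary weight encoding.
# Dense: 32 bits per weight. Sparse-binary: per kept connection, an index
# (assumed 24 bits here) plus a 1-bit weight. Numbers are illustrative.
def dense_bits(n_weights, bits_per_weight=32):
    return n_weights * bits_per_weight

def sparse_binary_bits(n_weights, keep_fraction, index_bits=24):
    kept = int(n_weights * keep_fraction)
    return kept * (index_bits + 1)

n = 10_000_000                      # a 10M-connection layer
for keep in (0.10, 0.01, 0.001):
    saving = 1 - sparse_binary_bits(n, keep) / dense_bits(n)
    print(f"keep {keep:.1%}: memory saved {saving:.2%}")
```

At sub-percent connection survival the saving approaches the 99.9% regime the paper cites, which is also what makes on-chip parameter storage feasible in the DANoC architecture.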
Lang, Shona; Armstrong, Nigel; Deshpande, Sohan; Ramaekers, Bram; Grimm, Sabine; de Kock, Shelley; Kleijnen, Jos; Westwood, Marie
2018-01-01
Objective To explore how the definition of the target condition and post hoc exclusion of participants can limit the usefulness of diagnostic accuracy studies. Methods We used data from a systematic review, conducted for a NICE diagnostic assessment of risk scores to inform secondary care decisions about specialist referral for women with suspected ovarian cancer, to explore how the definition of the target condition and post hoc exclusion of participants can limit the usefulness of diagnostic accuracy studies to inform clinical practice. Results Fourteen of the studies evaluated the ROMA score: nine used Abbott ARCHITECT tumour marker assays and five used Roche Elecsys. The summary sensitivity estimate (Abbott ARCHITECT) was highest, 95.1% (95% CI: 92.4 to 97.1%), where analyses excluded participants with borderline tumours or malignancies other than epithelial ovarian cancer, and lowest, 75.0% (95% CI: 60.4 to 86.4%), where all participants were included. Results were similar for Roche Elecsys tumour marker assays. Although the number of patients involved was small, data from studies that reported diagnostic accuracy both for the whole study population and with post hoc exclusion of those with borderline or non-epithelial malignancies suggested that patients with borderline tumours or malignancies other than epithelial ovarian cancer account for between 50 and 85% of false-negative ROMA scores. Conclusions Our results illustrate the potential consequences of inappropriate population selection in diagnostic studies; women with non-epithelial ovarian cancers or non-ovarian primaries, and those with borderline tumours, may be disproportionately represented among those with false-negative, 'low risk' ROMA scores. These observations highlight the importance of giving careful consideration to how the target condition has been defined when assessing whether the diagnostic accuracy estimates reported in clinical studies will translate into clinical utility in real-world settings.
Geothermal Play-Fairway Analysis of the Tatun Volcano Group, Taiwan
NASA Astrophysics Data System (ADS)
Chen, Yan-Ru; Song, Sheng-Rong
2017-04-01
Geothermal energy is a sustainable, low-emission energy resource with the advantages of low cost and resilience to natural hazards. Taiwan sits on the western Ring of Fire and is characterized by widespread hot springs and high surface heat flow, especially in northern Taiwan. Many previous studies indicate that the Tatun Volcano Group (TVG) has great potential for geothermal energy development. However, investment in geothermal development carries inherent risk, and reducing exploration risk is of primary importance. Exploration risk can be lowered using play-fairway analysis (PFA), which integrates existing data representing the composite risk segments in a region in order to define an exploration strategy. This study therefore adapts that logic to geothermal exploration in the TVG. Two factors are necessary for geothermal energy, heat and permeability, and these form the composite risk segments of the geothermal play-fairway analysis. We analyze existing geologic, geophysical, and geochemical data to construct heat and permeability potential models. The heat potential model is based on temperature gradient, hot-spring temperature, proximity to hot springs, hydrothermal alteration zones, helium isotope ratios, and magnetics. The permeability potential model is based on fault zones, minor faults, and micro-earthquake activity. The two potential models are then weighted using the Analytic Hierarchy Process (AHP) and combined to rank geothermal favorability. An uncertainty model captures the quality and spatial accuracy of the underlying data. The goal is to combine the potential models with the uncertainty model into a risk map identifying the best drilling sites for geothermal exploration in the TVG. The integrated results indicate where geothermal potential is highest and provide the best available information for those seeking to develop geothermal exploration in the TVG.
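The AHP weighting step described above derives layer weights from the principal eigenvector of a pairwise comparison matrix. A minimal sketch using power iteration; the 3x3 matrix (e.g. comparing a temperature-gradient layer, a hot-spring layer, and a magnetics layer) is fabricated for illustration.

```python
import numpy as np

# AHP weight derivation: the principal eigenvector of a reciprocal pairwise
# comparison matrix gives the relative weights of the evidence layers.
A = np.array([[1.0, 3.0, 5.0],     # layer 1 strongly preferred
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

w = np.ones(3) / 3
for _ in range(100):               # power iteration toward the dominant eigenvector
    w = A @ w
    w /= w.sum()
print("AHP weights:", np.round(w, 3))  # dominant layer receives the largest weight
```

In a full PFA the same weights would multiply the rasterized evidence layers before summing them into a favorability map; a consistency ratio check on the comparison matrix is normally applied as well.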
Effects of atmospheric variations on acoustic system performance
NASA Technical Reports Server (NTRS)
Nation, Robert; Lang, Stephen; Olsen, Robert; Chintawongvanich, Prasan
1993-01-01
Acoustic propagation over medium to long ranges in the atmosphere is subject to many complex, interacting effects. Of particular interest at this point is modeling low frequency (less than 500 Hz) propagation for the purpose of predicting ranges and bearing accuracies at which acoustic sources can be detected. A simple means of estimating how much of the received signal power propagated directly from the source to the receiver and how much was received by turbulent scattering was developed. The correlations between the propagation mechanism and detection thresholds, beamformer bearing estimation accuracies, and beamformer processing gain of passive acoustic signal detection systems were explored.
Rusu, Mirabela; Birmanns, Stefan
2010-04-01
A structural characterization of multi-component cellular assemblies is essential to explain the mechanisms governing biological function. Macromolecular architectures may be revealed by integrating information collected from various biophysical sources - for instance, by interpreting low-resolution electron cryomicroscopy reconstructions in relation to the crystal structures of the constituent fragments. A simultaneous registration of multiple components is beneficial when building atomic models as it introduces additional spatial constraints to facilitate the native placement inside the map. The high-dimensional nature of such a search problem prevents the exhaustive exploration of all possible solutions. Here we introduce a novel method based on genetic algorithms, for the efficient exploration of the multi-body registration search space. The classic scheme of a genetic algorithm was enhanced with new genetic operations, tabu search and parallel computing strategies and validated on a benchmark of synthetic and experimental cryo-EM datasets. Even at a low level of detail, for example 35-40 Å, the technique successfully registered multiple component biomolecules, measuring accuracies within one order of magnitude of the nominal resolutions of the maps. The algorithm was implemented using the Sculptor molecular modeling framework, which also provides a user-friendly graphical interface and enables an instantaneous, visual exploration of intermediate solutions. (c) 2009 Elsevier Inc. All rights reserved.
Assaf, Tareq; Roke, Calum; Rossiter, Jonathan; Pipe, Tony; Melhuish, Chris
2014-02-07
Effective tactile sensing for artificial platforms remains an open issue in robotics. This study investigates the performance of a soft biologically-inspired artificial fingertip in active exploration tasks. The fingertip sensor replicates the mechanisms within human skin and offers a robust solution that can be used both for tactile sensing and for gripping/manipulating objects. The softness of the optical sensor's contact surface also allows safer interactions with objects. High-level tactile features such as edges are extracted from the sensor's output and the information is used to generate a tactile image. The work presented in this paper aims to investigate and evaluate this artificial fingertip for 2D shape reconstruction. The sensor was mounted on a robot arm to allow autonomous exploration of different objects. The sensor and a number of human participants were then tested for their abilities to track the raised perimeters of different planar objects and compared. By observing the technique and accuracy of the human subjects, simple but effective parameters were determined in order to evaluate the artificial system's performance. The results demonstrate the capability of the sensor in such active exploration tasks, with performance comparable to that of the human subjects despite the sensor using tactile data alone, whereas the human participants could also draw on proprioceptive cues.
What do we mean by accuracy in geomagnetic measurements?
Green, A.W.
1990-01-01
High accuracy is what distinguishes measurements made at the world's magnetic observatories from other types of geomagnetic measurements. High accuracy in determining the absolute values of the components of the Earth's magnetic field is essential to studying geomagnetic secular variation and processes at the core mantle boundary, as well as some magnetospheric processes. In some applications of geomagnetic data, precision (or resolution) of measurements may also be important. In addition to accuracy and resolution in the amplitude domain, it is necessary to consider these same quantities in the frequency and space domains. New developments in geomagnetic instruments and communications make real-time, high accuracy, global geomagnetic observatory data sets a real possibility. There is a growing realization in the scientific community of the unique relevance of geomagnetic observatory data to the principal contemporary problems in solid Earth and space physics. Together, these factors provide the promise of a 'renaissance' of the world's geomagnetic observatory system. ?? 1990.
Object-based vegetation classification with high resolution remote sensing imagery
NASA Astrophysics Data System (ADS)
Yu, Qian
Vegetation species are valuable indicators to understand the earth system. Information from mapping of vegetation species and community distribution at large scales provides important insight for studying the phenological (growth) cycles of vegetation and plant physiology. Such information plays an important role in land process modeling including climate, ecosystem and hydrological models. The rapidly growing remote sensing technology has increased its potential in vegetation species mapping. However, extracting information at a species level is still a challenging research topic. I proposed an effective method for extracting vegetation species distribution from remotely sensed data and investigated some ways for accuracy improvement. The study consists of three phases. Firstly, a statistical analysis was conducted to explore the spatial variation and class separability of vegetation as a function of image scale. This analysis aimed to confirm that high resolution imagery contains the information on spatial vegetation variation and these species classes can be potentially separable. The second phase was a major effort in advancing classification by proposing a method for extracting vegetation species from high spatial resolution remote sensing data. The proposed classification employs an object-based approach that integrates GIS and remote sensing data and explores the usefulness of ancillary information. The whole process includes image segmentation, feature generation and selection, and nearest neighbor classification. The third phase introduces a spatial regression model for evaluating the mapping quality from the above vegetation classification results. The effects of six categories of sample characteristics on the classification uncertainty are examined: topography, sample membership, sample density, spatial composition characteristics, training reliability and sample object features. 
This evaluation analysis answered several interesting scientific questions, such as (1) whether the sample characteristics affect the classification accuracy and, if so, how strongly; and (2) how much of the variance in classification uncertainty can be explained by the above factors. This research was carried out in a hilly peninsular area with a Mediterranean climate, Point Reyes National Seashore (PRNS) in Northern California. The area mainly consists of heterogeneous, semi-natural broadleaf and conifer woodland, shrubland, and annual grassland. A detailed list of vegetation alliances is used in this study. Results from the first phase indicate that vegetation spatial variation, as reflected by the average local variance (ALV), remains high between 1 m and 4 m resolution. (Abstract shortened by UMI.)
Angel, Lucie; Bastin, Christine; Genon, Sarah; Salmon, Eric; Fay, Séverine; Balteau, Evelyne; Maquet, Pierre; Luxen, André; Isingrini, Michel; Collette, Fabienne
2016-01-15
The current experiment aimed to explore age differences in brain activity associated with successful memory retrieval in older adults with different levels of executive functioning, at different levels of task demand. Memory performance and fMRI activity during a recognition task were compared between a young group and two older groups characterized by a low (old-low group) vs. high (old-high group) level of executive functioning. Participants first encoded pictures, presented once (Hard condition) or twice (Easy condition), and then completed a recognition memory task. Old-low adults had poorer memory performance than the two other groups, which did not differ, at both levels of task demand. In the Easy condition, even though older adults demonstrated reduced activity compared to young adults in several regions, they also showed additional activations in the right superior frontal gyrus and right parietal lobule (positively correlated with memory accuracy) for the old-high group, and in the right precuneus (negatively correlated with memory accuracy), right anterior cingulate gyrus and right supramarginal gyrus for the old-low group. In the Hard condition, some regions were also more activated in the young group than in the older groups. Conversely, old-high participants demonstrated more activity than either the young or the old-low group in the right frontal gyrus, associated with more accurate memory performance, and in the left frontal gyrus. In sum, the present study clearly showed that age differences in the neural correlates of retrieval success were modulated by task difficulty, as suggested by the CRUNCH model, but also by interindividual variability, in particular regarding executive functioning. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kedar, S.; Bock, Y.; Webb, F.; Clayton, R. W.; Owen, S. E.; Moore, A. W.; Yu, E.; Dong, D.; Fang, P.; Jamason, P.; Squibb, M. B.; Crowell, B. W.
2010-12-01
In situ geodetic networks for observing crustal motion have proliferated over the last two decades and are now recognized as indispensable tools in geophysical research, alongside more traditional seismic networks. The 2007 National Research Council's Decadal Survey recognizes that space-borne and in situ observations, such as Interferometric Synthetic Aperture Radar (InSAR) and ground-based continuous GPS (CGPS), are complementary in forecasting, assessing, and mitigating natural hazards. However, the information content and timeliness of in situ geodetic observations have not been fully exploited, particularly at frequencies higher than traditional daily CGPS position time series. Nor have scientists taken full advantage of the complementary natures of geodetic and seismic data, or of space-based and in situ observations. To address these deficits we are developing real-time CGPS data products for earthquake early warning and for space-borne deformation measurement mission support. Our primary mission objective is in situ verification and validation for DESDynI, but our work is also applicable to other international missions (Sentinel 1a/1b, SAOCOM, ALOS 2). Our project is developing new capabilities to continuously observe and mitigate earthquake-related hazards (direct seismic damage, tsunamis, landslides, volcanoes) in near real-time with high spatial-temporal resolution, and to improve the planning and accuracy of space-borne observations. We are also using GPS estimates of tropospheric zenith delay, combined with water vapor data from weather models, to generate tropospheric calibration maps for mitigating the largest source of error in InSAR interferograms: atmospheric artifacts. These functions will be fully integrated into a Geophysical Resource Web Services and interactive GPS Explorer data portal environment being developed as part of an ongoing MEaSUREs project and NASA's contribution to the EarthScope project. 
GPS Explorer, originally designed for web-based dissemination of long-term Solid Earth Science Data Records (ESDR’s) such as deformation time series, tectonic velocity vectors, and strain maps, provides the framework for seamless inclusion of the high rate data products. Detection and preliminary modeling of interesting signals by dense real-time high-rate ground networks will allow mission planners and decision makers to fully exploit the less-frequent but higher resolution InSAR observations. Fusion of in situ elements into an advanced observation system will significantly improve the scientific value of extensive surface displacement data, provide scientists with improved access to modern software tools to manipulate and model these data, increase the data’s accuracy and timeliness at higher frequencies than available from space-based observations, and increase the accuracy of space-based observations through calibration of atmospheric and other systematic errors.
Schiff, Rachel
2012-12-01
The present study explored the speed, accuracy, and reading comprehension of vowelized versus unvowelized scripts among 126 native Hebrew speaking children in second, fourth, and sixth grades. Findings indicated that second graders read and comprehended vowelized scripts significantly more accurately and more quickly than unvowelized scripts, whereas among fourth and sixth graders reading of unvowelized scripts developed to a greater degree than the reading of vowelized scripts. An analysis of the mediation effect for children's mastery of vowelized reading speed and accuracy on their mastery of unvowelized reading speed and comprehension revealed that in second grade, reading accuracy of vowelized words mediated the reading speed and comprehension of unvowelized scripts. In the fourth grade, accuracy in reading both vowelized and unvowelized words mediated the reading speed and comprehension of unvowelized scripts. By sixth grade, accuracy in reading vowelized words offered no mediating effect, either on reading speed or comprehension of unvowelized scripts. The current outcomes thus suggest that young Hebrew readers undergo a scaffolding process, where vowelization serves as the foundation for building initial reading abilities and is essential for successful and meaningful decoding of unvowelized scripts.
NASA Astrophysics Data System (ADS)
Li, Rong; Zhao, Jianhui; Li, Fan
2009-07-01
Gyroscopes used as surveying sensors in the oil industry are a promising technique for measurement-while-drilling (MWD), providing real-time monitoring of the position and orientation of the bottom hole assembly (BHA). However, drifts in the gyroscope measurements can be prohibitive for long-term use of the sensor. Usual countermeasures such as the zero velocity update procedure (ZUPT), introduced to limit these drifts, are time-consuming and of limited effect. This study explores an in-drilling dynamic alignment (IDA) method for gyroscope-based MWD. During directional drilling there are periods of several minutes in the rotary drilling mode when the drill bit and drill pipe rotate about the spin axis at a certain speed. This speed can be measured and used to determine and limit those gyroscope drifts that contribute most to the deterioration in long-term performance. A novel laser assembly on the wellhead counts the rotation cycles of the drill pipe. With this measured angular velocity of the drill pipe, gyroscope measurement drifts are translated into a form that can be easily tested and compensated. This allows better and faster alignment and limits drift during navigation, both of which reduce long-term navigation errors and thus improve the overall accuracy of an INS-based MWD system. This article describes in detail the novel wellhead device designed to measure the rotation of the drill pipe. It is based on laser sensing, which is simple and inexpensive, adding only a laser emitter to existing drilling equipment. Theoretical simulations and analytical approximations exploring the IDA idea show improvement in overall navigation accuracy and a reduction in the time required to achieve convergence. Gyroscope accuracy along the spin axis is improved most. Use of the IDA idea in the rotary mode for alignment is recommended. 
Several other practical aspects of implementing this approach are evaluated and compared.
[High Precision Identification of Igneous Rock Lithology by Laser Induced Breakdown Spectroscopy].
Wang, Chao; Zhang, Wei-gang; Yan, Zhi-quan
2015-09-01
In the field of petroleum exploration, lithology identification from fine drill cuttings, especially high-precision identification of igneous rocks with similar properties, has become a difficult geological problem. To solve it, a new method is proposed based on elemental analysis by Laser-Induced Breakdown Spectroscopy (LIBS) and the Total Alkali versus Silica (TAS) diagram. Using an independent LIBS system, factors influencing the spectral signal, such as pulse energy, acquisition time delay, spectrum acquisition method, and pre-ablation, were investigated systematically through contrast experiments. The best analysis conditions for igneous rock were determined: a pulse energy of 50 mJ and an acquisition time delay of 2 μs, with each result taken as the integrated average over 20 different points on the sample surface; pre-ablation was found experimentally to be unsuitable for igneous rock samples. The repeatability of the spectral data was thereby effectively improved. Characteristic lines of the 7 elements commonly used for igneous lithology identification (Na, Mg, Al, Si, K, Ca, Fe) were determined, and igneous rock samples of different lithologies were analyzed and compared. Calibration curves for Na, K, and Si were generated using a national standard series of rock samples, all with linear correlation coefficients greater than 0.9, and the accuracy of the quantitative analysis was verified against national standard samples. Element contents of igneous rocks were then analyzed quantitatively from the calibration curves, and lithology was identified accurately by the TAS diagram method, with an accuracy of 90.7%. The study indicates that LIBS can effectively achieve high-precision identification of igneous rock lithology.
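The calibration-curve step described above amounts to fitting a linear relation between an emission-line intensity and the known element concentration in standard samples, then inverting it to quantify an unknown. The intensities and concentrations below are fabricated; real curves come from the national standard rock series.

```python
import numpy as np

# Sketch of a LIBS calibration curve: linear fit of line intensity vs.
# known concentration, then inversion for an unknown sample.
conc = np.array([0.5, 1.0, 2.0, 4.0, 6.0])                 # known Na2O wt% (fabricated)
intensity = np.array([120.0, 238.0, 465.0, 980.0, 1450.0])  # line intensity, a.u. (fabricated)

slope, intercept = np.polyfit(conc, intensity, 1)
r = np.corrcoef(conc, intensity)[0, 1]                      # linear correlation coefficient
print(f"R = {r:.4f}")                                       # should exceed 0.9, as in the study
print("unknown at I=700 ->", (700 - intercept) / slope, "wt%")
```

The quantified Na2O + K2O and SiO2 contents would then be plotted on the TAS diagram and the lithology read off from the field the point falls in.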
Nakata, Maho; Braams, Bastiaan J; Fujisawa, Katsuki; Fukuda, Mituhiro; Percus, Jerome K; Yamashita, Makoto; Zhao, Zhengji
2008-04-28
The reduced density matrix (RDM) method, a variational calculation based on the second-order reduced density matrix, is applied to the ground state energies and dipole moments of 57 different states of atoms and molecules, and to the ground state energies and 2-RDM elements of the Hubbard model. We explore the well-known N-representability conditions (P, Q, and G) together with the more recent and much stronger T1 and T2' conditions; the T2' condition was recently rederived and implies the T2 condition. Using these N-representability conditions, we typically recover 100% to 101% of the correlation energy, an accuracy comparable to CCSD(T) and even better for high-spin states or anionic systems where CCSD(T) fails. Highly accurate calculations are carried out by handling equality constraints and/or developing multiple-precision arithmetic in the semidefinite programming (SDP) solver. Results show that handling equality constraints correctly improves the accuracy by 0.1 to 0.6 mhartree. Additionally, replacing the T2 condition with the T2' condition typically yields improvements of 0.1-0.5 mhartree. The newly developed multiple-precision version of the SDP solver calculates extraordinarily accurate energies for the one-dimensional Hubbard model and the Be atom: it gives at least 16 significant digits for energies, where double-precision calculations give only two to eight. It also provides physically meaningful results for the Hubbard model in the high-correlation limit.
An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1994-01-01
This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than in many conventional approaches. Discontinuous Galerkin methods are uncommon in applications, but they rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. This work first and primarily focuses on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then briefly explores extensions to more general cases. We also provide preliminaries to the study, a review of some aspects of the theory of hyperbolic conservation laws, and a review of relevant literature on this subject and on the numerical analysis of these types of problems.
Søreide, K; Thorsen, K; Søreide, J A
2015-02-01
Mortality prediction models for patients with perforated peptic ulcer (PPU) have not yielded consistent or highly accurate results. Given the complex nature of this disease, with its many non-linear associations with outcome, we explored artificial neural networks (ANNs) to model the complex interactions between the risk factors of PPU and death among patients with this condition. A standard feed-forward, back-propagation neural network with three layers (an input layer, a hidden layer and an output layer) was used to predict the 30-day mortality of consecutive patients from a population-based cohort undergoing surgery for PPU. A receiver-operating characteristic (ROC) analysis was used to assess model accuracy. Of the 172 patients, 168 had their data included in the model; the data of 117 (70%) were used for the training set, and the data of 51 (30%) were used for the test set. The accuracy, as evaluated by the area under the ROC curve (AUC), was best for an inclusive, multifactorial ANN model (AUC 0.90, 95% CI 0.85-0.95; p < 0.001). This model outperformed standard predictive scores, including Boey and PULP. The importance of each variable decreased as the number of factors included in the ANN model increased. The prediction of death was most accurate when using an ANN model with several univariate influences on the outcome. This finding demonstrates that PPU is a highly complex disease for which clinical prognosis is likely difficult. The incorporation of computerised learning systems might enhance clinical judgment to improve decision making and outcome prediction.
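The AUC metric used above has a direct probabilistic reading: it is the chance that a randomly chosen patient who died received a higher risk score than a randomly chosen survivor. The sketch below illustrates that computation (not the authors' network); scores and labels are invented.

```python
# Hedged illustration of the ROC-AUC evaluation step: AUC equals the
# fraction of (positive, negative) pairs ranked correctly by the model,
# counting ties as half a win.

def roc_auc(scores, labels):
    """scores: model outputs; labels: 1 = died, 0 = survived."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# one positive case (score 0.15) ranks below a negative case (0.2),
# so 5 of the 6 pairs are ordered correctly
auc = roc_auc([0.9, 0.8, 0.15, 0.2, 0.1], [1, 1, 1, 0, 0])  # 5/6
```

An AUC of 0.90 as reported in the abstract therefore means 90% of death/survival pairs were ranked correctly by the ANN's risk score.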
Preschoolers Mistrust Ignorant and Inaccurate Speakers
ERIC Educational Resources Information Center
Koenig, Melissa A.; Harris, Paul L.
2005-01-01
Being able to evaluate the accuracy of an informant is essential to communication. Three experiments explored preschoolers' (N=119) understanding that, in cases of conflict, information from reliable informants is preferable to information from unreliable informants. In Experiment 1, children were presented with previously accurate and inaccurate…
Orff Ensembles: Benefits, Challenges, and Solutions
ERIC Educational Resources Information Center
Taylor, Donald M.
2012-01-01
Playing Orff instruments provides students with a wide variety of opportunities to explore creative musicianship. This article examines the benefits of classroom instrument study, common challenges encountered, and viable teaching strategies to promote student success. The ability to remove notes from barred instruments makes note accuracy more…
This research area includes work on whole-building energy modeling, cost-based optimization, and model accuracy. An optimization tool is used to provide support for the Building America program's teams and energy-efficiency goals, and a Colorado graduate student is exploring enhancements to building optimization in terms of robustness and speed.
NASA Astrophysics Data System (ADS)
Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.
2016-06-01
Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy, whereas optical methods gather high-density data for the whole object in a short time, but with accuracy at least one order of magnitude lower than contact measurements. Thus the drawback of contact methods is low data density, while for non-contact methods it is low accuracy. In this paper a method is presented for fusing data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both data sets. In each pair, the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation that moves the characteristic points from the optical measurement onto their matches from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade, and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was greater for raw data than for data after creation of a triangle mesh.
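The correction step above (estimate a transformation from marker pairs, then apply it to the whole cloud) can be sketched in a deliberately simplified form. The paper's transformation is more general; the sketch below estimates only a best-fit translation, and all coordinates are invented.

```python
# Simplified sketch of the virtual-marker fusion idea: pair each optical
# (HDLA) characteristic point with its contact (LDHA) reference, estimate
# the average displacement between the pairs, and apply it to every point
# of the optical cloud.

def mean_offset(optical_pts, contact_pts):
    """Average 3D displacement taking optical markers onto contact ones."""
    n = len(optical_pts)
    return tuple(
        sum(c[i] - o[i] for o, c in zip(optical_pts, contact_pts)) / n
        for i in range(3)
    )

def apply_offset(cloud, offset):
    """Shift every point of the cloud by the estimated offset."""
    return [tuple(p[i] + offset[i] for i in range(3)) for p in cloud]

# two marker pairs: the optical scan is shifted by roughly (0.1, 0, 0.2)
optical = [(0.1, 0.0, 0.2), (1.1, 1.0, 0.2)]
contact = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
shift = mean_offset(optical, contact)
corrected = apply_offset(optical, shift)
```

A full implementation would estimate rotation as well (e.g. a Kabsch-style rigid fit), but the pairing-then-transforming structure is the same.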
van Dijken, Bart R J; van Laar, Peter Jan; Holtman, Gea A; van der Hoorn, Anouk
2017-10-01
Treatment response assessment in high-grade gliomas uses contrast-enhanced T1-weighted MRI, but is unreliable. Novel advanced MRI techniques have been studied, but their accuracy is not well known. We therefore performed a systematic meta-analysis to assess the diagnostic accuracy of anatomical and advanced MRI for treatment response assessment in high-grade gliomas. Databases were searched systematically. Study selection and data extraction were performed by two authors independently. Meta-analysis was performed using a bivariate random-effects model whenever ≥5 studies were included. Anatomical MRI (five studies, 166 patients) showed a pooled sensitivity and specificity of 68% (95% CI 51-81) and 77% (45-93), respectively. Pooled apparent diffusion coefficients (seven studies, 204 patients) demonstrated a sensitivity of 71% (60-80) and specificity of 87% (77-93). DSC perfusion (18 studies, 708 patients) sensitivity was 87% (82-91) with a specificity of 86% (77-91). DCE perfusion (five studies, 207 patients) sensitivity was 92% (73-98) and specificity was 85% (76-92). The sensitivity of spectroscopy (nine studies, 203 patients) was 91% (79-97) and specificity was 95% (65-99). Advanced techniques showed higher diagnostic accuracy than anatomical MRI, highest for spectroscopy, supporting their use in treatment response assessment in high-grade gliomas. • Treatment response assessment in high-grade gliomas with anatomical MRI is unreliable • Novel advanced MRI techniques have been studied, but their diagnostic accuracy is unknown • Meta-analysis demonstrates that advanced MRI showed higher diagnostic accuracy than anatomical MRI • Highest diagnostic accuracy for spectroscopy and perfusion MRI • Supports the incorporation of advanced MRI in high-grade glioma treatment response assessment.
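For context on the pooled percentages above: each study contributes a 2x2 table of MRI result versus actual tumour progression, from which per-study sensitivity and specificity are computed before the bivariate random-effects pooling. The sketch below shows only the per-study step, with made-up counts.

```python
# Per-study diagnostic accuracy from a 2x2 table.
# tp: progression correctly flagged   fn: progression missed
# tn: stable disease correctly cleared fp: stable disease falsely flagged

def sens_spec(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from 2x2 table counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# hypothetical study: 40 true progressions, 30 stable cases
se, sp = sens_spec(tp=35, fn=5, tn=26, fp=4)
```

The bivariate random-effects model then pools these pairs across studies while modelling their correlation, which simple averaging of `se` and `sp` would ignore.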
Accuracy of Remotely Sensed Classifications For Stratification of Forest and Nonforest Lands
Raymond L. Czaplewski; Paul L. Patterson
2001-01-01
We specify accuracy standards for remotely sensed classifications used by FIA to stratify landscapes into two categories: forest and nonforest. Accuracy must be highest when forest area approaches 100 percent of the landscape. If forest area is rare in a landscape, then accuracy in the nonforest stratum must be very high, even at the expense of accuracy in the forest...
Space astrometry project JASMINE
NASA Astrophysics Data System (ADS)
Gouda, N.; Kobayashi, Y.; Yamada, Y.; Yano, Y.; Jasmine Working Group
A Japanese plan for an infrared (z-band: 0.9 μm) space astrometry project, JASMINE, is introduced. JASMINE (Japan Astrometry Satellite Mission for INfrared Exploration) is a satellite designed to measure distances and apparent motions of stars in the bulge of the Milky Way with unprecedented precision. It will measure parallaxes and positions with an accuracy of 10 μarcsec and proper motions with an accuracy of 4 μarcsec/year for stars brighter than z = 14 mag. JASMINE will observe about 10 million stars belonging to the bulge component of our Galaxy. With a completely new "map of the Galactic bulge", many new exciting scientific results are expected in various fields of astronomy. Presently, JASMINE is in the development phase, with a target launch date around 2015. Overall system (bus) design is ongoing, in cooperation with the Japan Aerospace Exploration Agency (JAXA). The preliminary design of instruments, observing strategy, data reduction, and critical technical issues for JASMINE are described.
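To put the quoted 10 μarcsec parallax accuracy in context: distance in parsecs is the reciprocal of parallax in arcseconds, and the relative distance error is roughly the relative parallax error. The numbers below are illustrative, not from the mission specification.

```python
# Parallax-to-distance arithmetic behind astrometric accuracy claims.

def distance_pc(parallax_uas):
    """Distance in parsecs from a parallax given in microarcseconds."""
    return 1.0 / (parallax_uas * 1e-6)

def relative_distance_error(parallax_uas, sigma_uas):
    """First-order relative distance error from the parallax error."""
    return sigma_uas / parallax_uas

# a Galactic-bulge star at ~8 kpc has a parallax of ~125 uas, so a
# 10 uas measurement error translates to roughly 8% in distance
d = distance_pc(125.0)
err = relative_distance_error(125.0, 10.0)
```

This is why μarcsec-level accuracy is the threshold for useful individual distances all the way to the bulge.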
Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji
2016-01-01
This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins. Guest Editors: J.C. Gumbart and Sergei Noskov. PMID:26766517
NASA Technical Reports Server (NTRS)
Sozer, Emre; Brehm, Christoph; Kiris, Cetin C.
2014-01-01
A survey of gradient reconstruction methods for cell-centered data on unstructured meshes is conducted within the scope of accuracy assessment. The formal order of accuracy, as well as error magnitudes for each of the studied methods, is evaluated on a complex mesh of various cell types through consecutive local scaling of an analytical test function. The tests highlighted several gradient operator choices that can consistently achieve first-order accuracy regardless of cell type and shape. The tests further offered error comparisons for given cell types, leading to the observation that the "ideal" gradient operator choice is not universal. Practical implications of the results are explored via CFD solutions of a 2D inviscid standing vortex, portraying the discretization error properties. A relatively naive, yet largely unexplored, approach of local curvilinear stencil transformation exhibited surprisingly favorable properties.
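One widely surveyed operator of the kind discussed above is unweighted least-squares gradient reconstruction. Given a cell value and its neighbours' values, the gradient solves dx·gx + dy·gy = dphi in the least-squares sense over all neighbours. The 2D sketch below solves the 2x2 normal equations directly; the geometry and field are invented, chosen so the reconstruction is exact for a linear field.

```python
# Unweighted least-squares gradient reconstruction for cell-centered
# data in 2D, via the normal equations of dx*gx + dy*gy = dphi.

def ls_gradient(center, phi0, neighbours, phis):
    """center: (x, y); neighbours: list of (x, y); phis: their values."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x, y), phi in zip(neighbours, phis):
        dx, dy, dphi = x - center[0], y - center[1], phi - phi0
        a11 += dx * dx
        a12 += dx * dy
        a22 += dy * dy
        b1 += dx * dphi
        b2 += dy * dphi
    det = a11 * a22 - a12 * a12   # singular only for collinear stencils
    gx = (a22 * b1 - a12 * b2) / det
    gy = (a11 * b2 - a12 * b1) / det
    return gx, gy

# a linear field phi = 2x + 3y is reconstructed exactly (first-order
# accurate operators reproduce linear fields by construction)
nbrs = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.5)]
vals = [2.0 * x + 3.0 * y for x, y in nbrs]
gx, gy = ls_gradient((0.0, 0.0), 0.0, nbrs, vals)   # (2.0, 3.0)
```

Exactness on linear fields is precisely the consistency property behind the "first-order accuracy regardless of cell type" observation in the abstract.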
NASA Astrophysics Data System (ADS)
Liu, Yu-fang; Han, Xin; Shi, De-heng
2008-03-01
Based on Kirchhoff's law, a practical dual-wavelength fiber-optic colorimeter is presented, with optimal working wavelengths centered at 2.1 μm and 2.3 μm. The effect of emissivity on the precision of the measured temperature is explored under various circumstances (i.e., temperature and wavelength) and for different materials. In addition, by fitting emissivity-temperature curves for several typical materials, the influence of irradiation (radiant flux originating from the surroundings) and surface-reflected radiation on the temperature accuracy is studied. The results show that calibrating the measured temperature for reflected radiant energy is necessary, especially at low target temperature or low target emissivity, and that the temperature accuracy meets requirements in the range of 400-1200 K.
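The core of dual-wavelength (ratio) pyrometry is that, for a greybody, emissivity cancels in the ratio of radiances at the two channels, so temperature can be recovered from that ratio alone. The sketch below uses the Wien approximation to Planck's law; it is a textbook illustration of the principle, not the instrument's actual signal chain.

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_radiance(lam, temp, emissivity=1.0):
    """Wien-approximation spectral radiance (arbitrary units)."""
    return emissivity * lam ** -5 * math.exp(-C2 / (lam * temp))

def ratio_temperature(r, lam1, lam2):
    """Invert the two-wavelength radiance ratio r = L1/L2 for temperature.

    From ln r = 5*ln(lam2/lam1) + (C2/T)*(1/lam2 - 1/lam1).
    """
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (math.log(r) - 5.0 * math.log(lam2 / lam1))

lam1, lam2 = 2.1e-6, 2.3e-6   # the colorimeter's two channels, in metres
T_true = 800.0                # K, within the quoted 400-1200 K range

# greybody with emissivity 0.6: the emissivity cancels in the ratio,
# so the true temperature is recovered exactly
r = wien_radiance(lam1, T_true, 0.6) / wien_radiance(lam2, T_true, 0.6)
T_est = ratio_temperature(r, lam1, lam2)
```

The abstract's point is precisely where this idealization breaks down: wavelength-dependent emissivity and reflected ambient radiation perturb the ratio, which is why the reflected-energy calibration is needed at low target temperature or emissivity.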
Verification of OpenSSL version via hardware performance counters
NASA Astrophysics Data System (ADS)
Bruska, James; Blasingame, Zander; Liu, Chen
2017-05-01
Many forms of malware and security breaches exist today. One type of breach downgrades a cryptographic program by employing a man-in-the-middle attack. In this work, we explore the utilization of hardware events in conjunction with machine learning algorithms to detect which version of OpenSSL is being run during the encryption process. This allows for the immediate detection of unknown downgrade attacks in real time. Our experimental results indicate that this detection method is both feasible and practical. When trained with normal TLS and SSL data, our classifier was able to detect which protocol was being used with 99.995% accuracy. After the scope of the hardware event recording was enlarged, the accuracy diminished greatly, to 53.244%. Upon removal of TLS 1.1 from the data set, the accuracy returned to 99.905%.
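The detection pipeline described above treats each encryption run's hardware-event counts as a feature vector for a classifier. As a hedged illustration of that idea (not the authors' model), the sketch below labels runs by the nearest class centroid; the event names, counts, and class labels are entirely made up.

```python
# Toy nearest-centroid classifier over hardware-event feature vectors,
# illustrating how counter profiles can separate protocol versions.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_class(x, centroids):
    """Label x with the class whose centroid is closest (squared L2)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# training runs: hypothetical [branch-misses, cache-misses] per session
train = {
    "TLS 1.2": [[100, 400], [110, 390], [95, 410]],
    "SSL 3.0": [[300, 900], [310, 880], [290, 910]],
}
centroids = {label: centroid(runs) for label, runs in train.items()}

print(nearest_class([105, 405], centroids))   # -> TLS 1.2
```

The accuracy collapse reported when the recording scope was enlarged corresponds, in this picture, to class clusters that overlap once noisier events are folded into the feature vector.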