NASA Technical Reports Server (NTRS)
Bryant, N. A.; Zobrist, A. L.; Walker, R. E.; Gokhman, B.
1985-01-01
Performance requirements regarding geometric accuracy have been defined in terms of end product goals, but until recently no precise details have been given concerning the conditions under which that accuracy is to be achieved. In order to achieve higher spatial and spectral resolutions, the Thematic Mapper (TM) sensor was designed to image in both forward and reverse mirror sweeps in two separate focal planes. Both hardware and software have been augmented and changed during the course of the Landsat TM developments to achieve improved geometric accuracy. An investigation has been conducted to determine if the TM meets the National Map Accuracy Standards for geometric accuracy at larger scales. It was found that TM imagery, in terms of geometry, has come close to, and in some cases exceeded, its stringent specifications.
Weiss, M R; Horn, T S
1990-09-01
The relationship between perceptions of competence and control, achievement, and motivated behavior in youth sport has been a topic of considerable interest. The purpose of this study was to examine whether children who are under-, accurate, or overestimators of their physical competence differ in their achievement characteristics. Children (N = 133), 8 to 13 years of age, who were attending a summer sport program, completed a series of questionnaires designed to assess perceptions of competence and control, motivational orientation, and competitive trait anxiety. Measures of physical competence were obtained by teachers' ratings that paralleled the children's measure of perceived competence. Perceived competence and teachers' ratings were standardized by grade level, and an accuracy score was computed from the difference between these scores. Children were then categorized as underestimators, accurate raters, or overestimators according to upper and lower quartiles of this distribution. A 2 x 2 x 3 (age level by gender by accuracy) MANCOVA revealed a significant gender by accuracy interaction. Underestimating girls were lower in challenge motivation, higher in trait anxiety, and more external in their control perceptions than accurate or overestimators. Underestimating boys were higher in perceived unknown control than accurate and overestimating boys. It was concluded that children who seriously underestimate their perceived competence may be likely candidates for discontinuation of sport activities or low levels of physical achievement.
NASA Astrophysics Data System (ADS)
Chen, Y.; Luo, M.; Xu, L.; Zhou, X.; Ren, J.; Zhou, J.
2018-04-01
The RF method based on grid-search parameter optimization could achieve a classification accuracy of 88.16 % in the classification of images with multiple feature variables. This classification accuracy was higher than that of SVM and ANN under the same feature variables. In terms of efficiency, the RF classification method performs better than SVM and ANN, it is more capable of handling multidimensional feature variables. The RF method combined with object-based analysis approach could highlight the classification accuracy further. The multiresolution segmentation approach on the basis of ESP scale parameter optimization was used for obtaining six scales to execute image segmentation, when the segmentation scale was 49, the classification accuracy reached the highest value of 89.58 %. The classification accuracy of object-based RF classification was 1.42 % higher than that of pixel-based classification (88.16 %), and the classification accuracy was further improved. Therefore, the RF classification method combined with object-based analysis approach could achieve relatively high accuracy in the classification and extraction of land use information for industrial and mining reclamation areas. Moreover, the interpretation of remotely sensed imagery using the proposed method could provide technical support and theoretical reference for remotely sensed monitoring land reclamation.
3D Higher Order Modeling in the BEM/FEM Hybrid Formulation
NASA Technical Reports Server (NTRS)
Fink, P. W.; Wilton, D. R.
2000-01-01
Higher order divergence- and curl-conforming bases have been shown to provide significant benefits, in both convergence rate and accuracy, in the 2D hybrid finite element/boundary element formulation (P. Fink and D. Wilton, National Radio Science Meeting, Boulder, CO, Jan. 2000). A critical issue in achieving the potential for accuracy of the approach is the accurate evaluation of all matrix elements. These involve products of high order polynomials and, in some instances, singular Green's functions. In the 2D formulation, the use of a generalized Gaussian quadrature method was found to greatly facilitate the computation and to improve the accuracy of the boundary integral equation self-terms. In this paper, a 3D, hybrid electric field formulation employing higher order bases and higher order elements is presented. The improvements in convergence rate and accuracy, compared to those resulting from lower order modeling, are established. Techniques developed to facilitate the computation of the boundary integral self-terms are also shown to improve the accuracy of these terms. Finally, simple preconditioning techniques are used in conjunction with iterative solution procedures to solve the resulting linear system efficiently. In order to handle the boundary integral singularities in the 3D formulation, the parent element- either a triangle or rectangle-is subdivided into a set of sub-triangles with a common vertex at the singularity. The contribution to the integral from each of the sub-triangles is computed using the Duffy transformation to remove the singularity. This method is shown to greatly facilitate t'pe self-term computation when the bases are of higher order. In addition, the sub-triangles can be further divided to achieve near arbitrary accuracy in the self-term computation. An efficient method for subdividing the parent element is presented. The accuracy obtained using higher order bases is compared to that obtained using lower order bases when the number of unknowns is approximately equal. Also, convergence rates obtained using higher order bases are compared to those obtained with lower order bases for selected sample
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.
An efficient scheme for automatic web pages categorization using the support vector machine
NASA Astrophysics Data System (ADS)
Bhalla, Vinod Kumar; Kumar, Neeraj
2016-07-01
In the past few years, with an evolution of the Internet and related technologies, the number of the Internet users grows exponentially. These users demand access to relevant web pages from the Internet within fraction of seconds. To achieve this goal, there is a requirement of an efficient categorization of web page contents. Manual categorization of these billions of web pages to achieve high accuracy is a challenging task. Most of the existing techniques reported in the literature are semi-automatic. Using these techniques, higher level of accuracy cannot be achieved. To achieve these goals, this paper proposes an automatic web pages categorization into the domain category. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, first extraction and evaluation of features are done followed by filtering the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on the collection of domain-specific keyword list developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of ids of keywords in keyword list. Also, stemming of keywords and tag text is done to achieve a higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis using support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy in different categories of web pages.
NASA Astrophysics Data System (ADS)
Fujita, Yusuke; Mitani, Yoshihiro; Hamamoto, Yoshihiko; Segawa, Makoto; Terai, Shuji; Sakaida, Isao
2017-03-01
Ultrasound imaging is a popular and non-invasive tool used in the diagnoses of liver disease. Cirrhosis is a chronic liver disease and it can advance to liver cancer. Early detection and appropriate treatment are crucial to prevent liver cancer. However, ultrasound image analysis is very challenging, because of the low signal-to-noise ratio of ultrasound images. To achieve the higher classification performance, selection of training regions of interest (ROIs) is very important that effect to classification accuracy. The purpose of our study is cirrhosis detection with high accuracy using liver ultrasound images. In our previous works, training ROI selection by MILBoost and multiple-ROI classification based on the product rule had been proposed, to achieve high classification performance. In this article, we propose self-training method to select training ROIs effectively. Evaluation experiments were performed to evaluate effect of self-training, using manually selected ROIs and also automatically selected ROIs. Experimental results show that self-training for manually selected ROIs achieved higher classification performance than other approaches, including our conventional methods. The manually ROI definition and sample selection are important to improve classification accuracy in cirrhosis detection using ultrasound images.
Study of wavelet packet energy entropy for emotion classification in speech and glottal signals
NASA Astrophysics Data System (ADS)
He, Ling; Lech, Margaret; Zhang, Jing; Ren, Xiaomei; Deng, Lihua
2013-07-01
The automatic speech emotion recognition has important applications in human-machine communication. Majority of current research in this area is focused on finding optimal feature parameters. In recent studies, several glottal features were examined as potential cues for emotion differentiation. In this study, a new type of feature parameter is proposed, which calculates energy entropy on values within selected Wavelet Packet frequency bands. The modeling and classification tasks are conducted using the classical GMM algorithm. The experiments use two data sets: the Speech Under Simulated Emotion (SUSE) data set annotated with three different emotions (angry, neutral and soft) and Berlin Emotional Speech (BES) database annotated with seven different emotions (angry, bored, disgust, fear, happy, sad and neutral). The average classification accuracy achieved for the SUSE data (74%-76%) is significantly higher than the accuracy achieved for the BES data (51%-54%). In both cases, the accuracy was significantly higher than the respective random guessing levels (33% for SUSE and 14.3% for BES).
Higher-Order Compact Schemes for Numerical Simulation of Incompressible Flows
NASA Technical Reports Server (NTRS)
Wilson, Robert V.; Demuren, Ayodeji O.; Carpenter, Mark
1998-01-01
A higher order accurate numerical procedure has been developed for solving incompressible Navier-Stokes equations for 2D or 3D fluid flow problems. It is based on low-storage Runge-Kutta schemes for temporal discretization and fourth and sixth order compact finite-difference schemes for spatial discretization. The particular difficulty of satisfying the divergence-free velocity field required in incompressible fluid flow is resolved by solving a Poisson equation for pressure. It is demonstrated that for consistent global accuracy, it is necessary to employ the same order of accuracy in the discretization of the Poisson equation. Special care is also required to achieve the formal temporal accuracy of the Runge-Kutta schemes. The accuracy of the present procedure is demonstrated by application to several pertinent benchmark problems.
Good Practices for Learning to Recognize Actions Using FV and VLAD.
Wu, Jianxin; Zhang, Yu; Lin, Weiyao
2016-12-01
High dimensional representations such as Fisher vectors (FV) and vectors of locally aggregated descriptors (VLAD) have shown state-of-the-art accuracy for action recognition in videos. The high dimensionality, on the other hand, also causes computational difficulties when scaling up to large-scale video data. This paper makes three lines of contributions to learning to recognize actions using high dimensional representations. First, we reviewed several existing techniques that improve upon FV or VLAD in image classification, and performed extensive empirical evaluations to assess their applicability for action recognition. Our analyses of these empirical results show that normality and bimodality are essential to achieve high accuracy. Second, we proposed a new pooling strategy for VLAD and three simple, efficient, and effective transformations for both FV and VLAD. Both proposed methods have shown higher accuracy than the original FV/VLAD method in extensive evaluations. Third, we proposed and evaluated new feature selection and compression methods for the FV and VLAD representations. This strategy uses only 4% of the storage of the original representation, but achieves comparable or even higher accuracy. Based on these contributions, we recommend a set of good practices for action recognition in videos for practitioners in this field.
Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias
2017-10-01
Assistance of robotic systems in the operating room promises higher accuracy and, hence, demanding surgical interventions become realisable (e.g. the direct cochlear access). Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts. Regarding the accuracy, they lead to a lower structural stiffness and, thus, to an additional error source. The aim of this contribution is to examine, if an accuracy needed for demanding interventions can be achieved by such a system or not. Feasible accuracy results of the robot-assisted process depend on each work-flow step. This work focuses on the determination of the tool coordinate frame. A method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed. This allows the user to control feeding along the planned path by applying a force to the robots structure. The accuracy is researched by drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at entrance and [Formula: see text] at target point excluding imaging was achieved. With admittance feed control an accuracy of [Formula: see text] at target point was realised. In a third set twelve holes were drilled in artificial temporal bone phantoms including imaging. In this set-up an error of [Formula: see text] and [Formula: see text] was achieved. The results of conducted experiments show that accuracy requirements for demanding procedures such as the direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less then [Formula: see text] is achievable.
The accuracy of Genomic Selection in Norwegian red cattle assessed by cross-validation.
Luan, Tu; Woolliams, John A; Lien, Sigbjørn; Kent, Matthew; Svendsen, Morten; Meuwissen, Theo H E
2009-11-01
Genomic Selection (GS) is a newly developed tool for the estimation of breeding values for quantitative traits through the use of dense markers covering the whole genome. For a successful application of GS, accuracy of the prediction of genomewide breeding value (GW-EBV) is a key issue to consider. Here we investigated the accuracy and possible bias of GW-EBV prediction, using real bovine SNP genotyping (18,991 SNPs) and phenotypic data of 500 Norwegian Red bulls. The study was performed on milk yield, fat yield, protein yield, first lactation mastitis traits, and calving ease. Three methods, best linear unbiased prediction (G-BLUP), Bayesian statistics (BayesB), and a mixture model approach (MIXTURE), were used to estimate marker effects, and their accuracy and bias were estimated by using cross-validation. The accuracies of the GW-EBV prediction were found to vary widely between 0.12 and 0.62. G-BLUP gave overall the highest accuracy. We observed a strong relationship between the accuracy of the prediction and the heritability of the trait. GW-EBV prediction for production traits with high heritability achieved higher accuracy and also lower bias than health traits with low heritability. To achieve a similar accuracy for the health traits probably more records will be needed.
Tao, Jianmin; Rappe, Andrew M.
2016-01-20
Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C 6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C 8 and C 10 between small molecules. We findmore » that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C 8 and 7% for C 10. As a result, inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.« less
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for course models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
NASA Astrophysics Data System (ADS)
Luo, Hanjun; Ouyang, Zhengbiao; Liu, Qiang; Chen, Zhiliang; Lu, Hualan
2017-10-01
Cumulative pulses detection with appropriate cumulative pulses number and threshold has the ability to improve the detection performance of the pulsed laser ranging system with GM-APD. In this paper, based on Poisson statistics and multi-pulses cumulative process, the cumulative detection probabilities and their influence factors are investigated. With the normalized probability distribution of each time bin, the theoretical model of the range accuracy and precision is established, and the factors limiting the range accuracy and precision are discussed. The results show that the cumulative pulses detection can produce higher target detection probability and lower false alarm probability. However, for a heavy noise level and extremely weak echo intensity, the false alarm suppression performance of the cumulative pulses detection deteriorates quickly. The range accuracy and precision is another important parameter evaluating the detection performance, the echo intensity and pulse width are main influence factors on the range accuracy and precision, and higher range accuracy and precision is acquired with stronger echo intensity and narrower echo pulse width, for 5-ns echo pulse width, when the echo intensity is larger than 10, the range accuracy and precision lower than 7.5 cm can be achieved.
Huang, Yuansheng; Yang, Zhirong; Wang, Jing; Zhuo, Lin; Li, Zhixia; Zhan, Siyan
2016-05-06
To compare the performance of search strategies to retrieve systematic reviews of diagnostic test accuracy from The Cochrane Library. Databases of CDSR and DARE in the Cochrane Library were searched for systematic reviews of diagnostic test accuracy published between 2008 and 2012 through nine search strategies. Each strategy consists of one group or combination of groups of searching filters about diagnostic test accuracy. Four groups of diagnostic filters were used. The Strategy combing all the filters was used as the reference to determine the sensitivity, precision, and the sensitivity x precision product for another eight Strategies. The reference Strategy retrieved 8029 records, of which 832 were eligible. The strategy only composed of MeSH terms about "accuracy measures" achieved the highest values in both precision (69.71%) and product (52.45%) with a moderate sensitivity (75.24%). The combination of MeSH terms and free text words about "accuracy measures" contributed little to increasing the sensitivity. Strategies composed of filters about "diagnosis" had similar sensitivity but lower precision and product to those composed of filters about "accuracy measures". MeSH term "exp'diagnosis' " achieved the lowest precision (9.78%) and product (7.91%), while its hyponym retrieved only half the number of records at the expense of missing 53 target articles. The precision was negatively correlated with sensitivities among the nine strategies. Compared to the filters about "diagnosis", the filters about "accuracy measures" achieved similar sensitivities but higher precision. When combining both terms, sensitivity of the strategy was enhanced obviously. The combination of MeSH terms and free text words about the same concept seemed to be meaningless for enhancing sensitivity. This article is protected by copyright. All rights reserved. This article is protected by copyright. All rights reserved.
Exploration of Force Myography and surface Electromyography in hand gesture classification.
Jiang, Xianta; Merhi, Lukas-Karim; Xiao, Zhen Gang; Menon, Carlo
2017-03-01
Whereas pressure sensors increasingly have received attention as a non-invasive interface for hand gesture recognition, their performance has not been comprehensively evaluated. This work examined the performance of hand gesture classification using Force Myography (FMG) and surface Electromyography (sEMG) technologies by performing 3 sets of 48 hand gestures using a prototyped FMG band and an array of commercial sEMG sensors worn both on the wrist and forearm simultaneously. The results show that the FMG band achieved classification accuracies as good as the high quality, commercially available, sEMG system on both wrist and forearm positions; specifically, by only using 8 Force Sensitive Resisters (FSRs), the FMG band achieved accuracies of 91.2% and 83.5% in classifying the 48 hand gestures in cross-validation and cross-trial evaluations, which were higher than those of sEMG (84.6% and 79.1%). By using all 16 FSRs on the band, our device achieved high accuracies of 96.7% and 89.4% in cross-validation and cross-trial evaluations. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
SegAuth: A Segment-based Approach to Behavioral Biometric Authentication
Li, Yanyan; Xie, Mengjun; Bian, Jiang
2016-01-01
Many studies have been conducted to apply behavioral biometric authentication on/with mobile devices and they have shown promising results. However, the concern about the verification accuracy of behavioral biometrics is still common given the dynamic nature of behavioral biometrics. In this paper, we address the accuracy concern from a new perspective—behavior segments, that is, segments of a gesture instead of the whole gesture as the basic building block for behavioral biometric authentication. With this unique perspective, we propose a new behavioral biometric authentication method called SegAuth, which can be applied to various gesture or motion based authentication scenarios. SegAuth can achieve high accuracy by focusing on each user’s distinctive gesture segments that frequently appear across his or her gestures. In SegAuth, a time series derived from a gesture/motion is first partitioned into segments and then transformed into a set of string tokens in which the tokens representing distinctive, repetitive segments are associated with higher genuine probabilities than those tokens that are common across users. An overall genuine score calculated from all the tokens derived from a gesture is used to determine the user’s authenticity. We have assessed the effectiveness of SegAuth using 4 different datasets. Our experimental results demonstrate that SegAuth can achieve higher accuracy consistently than existing popular methods on the evaluation datasets. PMID:28573214
SegAuth: A Segment-based Approach to Behavioral Biometric Authentication.
Li, Yanyan; Xie, Mengjun; Bian, Jiang
2016-10-01
Many studies have been conducted to apply behavioral biometric authentication on/with mobile devices and they have shown promising results. However, the concern about the verification accuracy of behavioral biometrics is still common given the dynamic nature of behavioral biometrics. In this paper, we address the accuracy concern from a new perspective-behavior segments, that is, segments of a gesture instead of the whole gesture as the basic building block for behavioral biometric authentication. With this unique perspective, we propose a new behavioral biometric authentication method called SegAuth, which can be applied to various gesture or motion based authentication scenarios. SegAuth can achieve high accuracy by focusing on each user's distinctive gesture segments that frequently appear across his or her gestures. In SegAuth, a time series derived from a gesture/motion is first partitioned into segments and then transformed into a set of string tokens in which the tokens representing distinctive, repetitive segments are associated with higher genuine probabilities than those tokens that are common across users. An overall genuine score calculated from all the tokens derived from a gesture is used to determine the user's authenticity. We have assessed the effectiveness of SegAuth using 4 different datasets. Our experimental results demonstrate that SegAuth can achieve higher accuracy consistently than existing popular methods on the evaluation datasets.
Accurate Reading with Sequential Presentation of Single Letters
Price, Nicholas S. C.; Edwards, Gemma L.
2012-01-01
Rapid, accurate reading is possible when isolated, single words from a sentence are sequentially presented at a fixed spatial location. We investigated if reading of words and sentences is possible when single letters are rapidly presented at the fovea under user-controlled or automatically controlled rates. When tested with complete sentences, trained participants achieved reading rates of over 60 wpm and accuracies of over 90% with the single letter reading (SLR) method and naive participants achieved average reading rates over 30 wpm with greater than 90% accuracy. Accuracy declined as individual letters were presented for shorter periods of time, even when the overall reading rate was maintained by increasing the duration of spaces between words. Words in the lexicon that occur more frequently were identified with higher accuracy and more quickly, demonstrating that trained participants have lexical access. In combination, our data strongly suggest that comprehension is possible and that SLR is a practicable form of reading under conditions in which normal scanning of text is not possible, or for scenarios with limited spatial and temporal resolution such as patients with low vision or prostheses. PMID:23115548
Improved accuracy for finite element structural analysis via a new integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for course models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Measurement of diffusion coefficients from solution rates of bubbles
NASA Technical Reports Server (NTRS)
Krieger, I. M.
1979-01-01
The rate of solution of a stationary bubble is limited by the diffusion of dissolved gas molecules away from the bubble surface. Diffusion coefficients computed from measured rates of solution give mean values higher than accepted literature values, with standard errors as high as 10% for a single observation. Better accuracy is achieved with sparingly soluble gases, small bubbles, and highly viscous liquids. Accuracy correlates with the Grashof number, indicating that free convection is the major source of error. Accuracy should, therefore, be greatly increased in a gravity-free environment. The fact that the bubble will need no support is an additional important advantage of Spacelab for this measurement.
Blob-level active-passive data fusion for Benthic classification
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Kalluri, Hemanth; Mathur, Abhinav; Ramnath, Vinod; Kim, Minsu; Aitken, Jennifer; Tuell, Grady
2012-06-01
We extend the data fusion pixel level to the more semantically meaningful blob level, using the mean-shift algorithm to form labeled blobs having high similarity in the feature domain, and connectivity in the spatial domain. We have also developed Bhattacharyya Distance (BD) and rule-based classifiers, and have implemented these higher-level data fusion algorithms into the CZMIL Data Processing System. Applying these new algorithms to recent SHOALS and CASI data at Plymouth Harbor, Massachusetts, we achieved improved benthic classification accuracies over those produced with either single sensor, or pixel-level fusion strategies. These results appear to validate the hypothesis that classification accuracy may be generally improved by adopting higher spatial and semantic levels of fusion.
A Data-Driven Approach for Daily Real-Time Estimates and Forecasts of Near-Surface Soil Moisture
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Reichle, Rolf H.; Mahanama, Sarith P. P.
2017-01-01
NASAs Soil Moisture Active Passive (SMAP) mission provides global surface soil moisture retrievals with a revisit time of 2-3 days and a latency of 24 hours. Here, to enhance the utility of the SMAP data, we present an approach for improving real-time soil moisture estimates (nowcasts) and for forecasting soil moisture several days into the future. The approach, which involves using an estimate of loss processes (evaporation and drainage) and precipitation to evolve the most recent SMAP retrieval forward in time, is evaluated against subsequent SMAP retrievals themselves. The nowcast accuracy over the continental United States (CONUS) is shown to be markedly higher than that achieved with the simple yet common persistence approach. The accuracy of soil moisture forecasts, which rely on precipitation forecasts rather than on precipitation measurements, is reduced relative to nowcast accuracy but is still significantly higher than that obtained through persistence.
User Experience and Heritage Preservation
ERIC Educational Resources Information Center
Orfield, Steven J.; Chapman, J. Wesley; Davis, Nathan
2011-01-01
In considering the heritage preservation of higher education campus buildings, much of the attention gravitates toward issues of selection, cost, accuracy, and value, but the model for most preservation projects does not have a clear method of achieving the best solutions for meeting these targets. Instead, it simply relies on the design team and…
Privacy Protection by Matrix Transformation
NASA Astrophysics Data System (ADS)
Yang, Weijia
Privacy preserving is indispensable in data mining. In this paper, we present a novel clustering method for distributed multi-party data sets using orthogonal transformation and data randomization techniques. Our method can not only protect privacy in face of collusion, but also achieve a higher level of accuracy compared to the existing methods.
BBMerge – Accurate paired shotgun read merging via overlap
Bushnell, Brian; Rood, Jonathan; Singer, Esther
2017-10-26
Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highlymore » sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.« less
BBMerge – Accurate paired shotgun read merging via overlap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bushnell, Brian; Rood, Jonathan; Singer, Esther
Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highlymore » sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.« less
Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry
NASA Astrophysics Data System (ADS)
Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki
2015-08-01
In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and VLBA (Very Long Baseline Array). We apply the Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameter of the models. We show that one can successfully determine the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available from 500 sources, the expected accuracy of R0 and Θ0 is ˜ 1% or higher, and parameters related to the spiral structure can be constrained by an error of 10% or with higher accuracy. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate different dynamical models of the Galaxy.
Martial arts striking hand peak acceleration, accuracy and consistency.
Neto, Osmar Pinto; Marzullo, Ana Carolina De Miranda; Bolander, Richard P; Bir, Cynthia A
2013-01-01
The goal of this paper was to investigate the possible trade-off between peak hand acceleration and accuracy and consistency of hand strikes performed by martial artists of different training experiences. Ten male martial artists with training experience ranging from one to nine years volunteered to participate in the experiment. Each participant performed 12 maximum effort goal-directed strikes. Hand acceleration during the strikes was obtained using a tri-axial accelerometer block. A pressure sensor matrix was used to determine the accuracy and consistency of the strikes. Accuracy was estimated by the radial distance between the centroid of each subject's 12 strikes and the target, whereas consistency was estimated by the square root of the 12 strikes mean squared distance from their centroid. We found that training experience was significantly correlated to hand peak acceleration prior to impact (r(2)=0.456, p =0.032) and accuracy (r(2)=0. 621, p=0.012). These correlations suggest that more experienced participants exhibited higher hand peak accelerations and at the same time were more accurate. Training experience, however, was not correlated to consistency (r(2)=0.085, p=0.413). Overall, our results suggest that martial arts training may lead practitioners to achieve higher striking hand accelerations with better accuracy and no change in striking consistency.
Increasing Accuracy in Computed Inviscid Boundary Conditions
NASA Technical Reports Server (NTRS)
Dyson, Roger
2004-01-01
A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number of time derivatives of surface-normal velocity (consistent with no flow through the boundary) up to arbitrarily high order. The corrections for the first-order spatial derivatives of pressure are calculated by use of the first-order time derivative velocity. The corrected first-order spatial derivatives are used to calculate the second- order time derivatives of velocity, which, in turn, are used to calculate the corrections for the second-order pressure derivatives. The process as described is repeated, progressing through increasing orders of derivatives, until the desired accuracy is attained.
Results of the Australian geodetic VLBI experiment
NASA Technical Reports Server (NTRS)
Harvey, B. R.; Stolz, A.; Jauncey, D. L.; Niell, A.; Morabito, D. D.; Preston, R.
1983-01-01
The 250-2500 km baseline vectors between radio telescopes located at Tidbinbilla (DSS43) near Canberra, Parkes, Fleurs (X3) near Sydney, Hobart and Alice Springs were determined from radio interferometric observations of extragalactic sources. The observations were made during two 24-hour sessions on 26 April and 3 May 1982, and one 12-hour night-time session on 28 April 1982. The 275 km Tidbinbilla - Parkes baseline was measured with an accuracy of plus or minus 6 cm. The remaining baselines were measured with accuracies ranging from 15 cm to 6 m. The higher accuracies were achieved for the better instrumented sites of Tidbinbilla, Parkes and Fleurs. The data reduction technique and results of the experiment are discussed.
Strategy for the absolute neutron emission measurement on ITER.
Sasao, M; Bertalot, L; Ishikawa, M; Popovichev, S
2010-10-01
Accuracy of 10% is demanded to the absolute fusion measurement on ITER. To achieve this accuracy, a functional combination of several types of neutron measurement subsystem, cross calibration among them, and in situ calibration are needed. Neutron transport calculation shows the suitable calibration source is a DT/DD neutron generator of source strength higher than 10(10) n/s (neutron/second) for DT and 10(8) n/s for DD. It will take eight weeks at the minimum with this source to calibrate flux monitors, profile monitors, and the activation system.
Performance Evaluation and Analysis for Gravity Matching Aided Navigation.
Wu, Lin; Wang, Hubiao; Chai, Hua; Zhang, Lu; Hsu, Houtse; Wang, Yong
2017-04-05
Simulation tests were accomplished in this paper to evaluate the performance of gravity matching aided navigation (GMAN). Four essential factors were focused in this study to quantitatively evaluate the performance: gravity database (DB) resolution, fitting degree of gravity measurements, number of samples in matching, and gravity changes in the matching area. Marine gravity anomaly DB derived from satellite altimetry was employed. Actual dynamic gravimetry accuracy and operating conditions were referenced to design the simulation parameters. The results verified that the improvement of DB resolution, gravimetry accuracy, number of measurement samples, or gravity changes in the matching area generally led to higher positioning accuracies, while the effects of them were different and interrelated. Moreover, three typical positioning accuracy targets of GMAN were proposed, and the conditions to achieve these targets were concluded based on the analysis of several different system requirements. Finally, various approaches were provided to improve the positioning accuracy of GMAN.
Performance Evaluation and Analysis for Gravity Matching Aided Navigation
Wu, Lin; Wang, Hubiao; Chai, Hua; Zhang, Lu; Hsu, Houtse; Wang, Yong
2017-01-01
Simulation tests were accomplished in this paper to evaluate the performance of gravity matching aided navigation (GMAN). Four essential factors were focused in this study to quantitatively evaluate the performance: gravity database (DB) resolution, fitting degree of gravity measurements, number of samples in matching, and gravity changes in the matching area. Marine gravity anomaly DB derived from satellite altimetry was employed. Actual dynamic gravimetry accuracy and operating conditions were referenced to design the simulation parameters. The results verified that the improvement of DB resolution, gravimetry accuracy, number of measurement samples, or gravity changes in the matching area generally led to higher positioning accuracies, while the effects of them were different and interrelated. Moreover, three typical positioning accuracy targets of GMAN were proposed, and the conditions to achieve these targets were concluded based on the analysis of several different system requirements. Finally, various approaches were provided to improve the positioning accuracy of GMAN. PMID:28379178
Porto, William F.; Pires, Állan S.; Franco, Octavio L.
2012-01-01
The antimicrobial peptides (AMP) have been proposed as an alternative to control resistant pathogens. However, due to multifunctional properties of several AMP classes, until now there has been no way to perform efficient AMP identification, except through in vitro and in vivo tests. Nevertheless, an indication of activity can be provided by prediction methods. In order to contribute to the AMP prediction field, the CS-AMPPred (Cysteine-Stabilized Antimicrobial Peptides Predictor) is presented here, consisting of an updated version of the Support Vector Machine (SVM) model for antimicrobial activity prediction in cysteine-stabilized peptides. The CS-AMPPred is based on five sequence descriptors: indexes of (i) α-helix and (ii) loop formation; and averages of (iii) net charge, (iv) hydrophobicity and (v) flexibility. CS-AMPPred was based on 310 cysteine-stabilized AMPs and 310 sequences extracted from PDB. The polynomial kernel achieves the best accuracy on 5-fold cross validation (85.81%), while the radial and linear kernels achieve 84.19%. Testing in a blind data set, the polynomial and radial kernels achieve an accuracy of 90.00%, while the linear model achieves 89.33%. The three models reach higher accuracies than previously described methods. A standalone version of CS-AMPPred is available for download at
Increased genomic prediction accuracy in wheat breeding using a large Australian panel.
Norman, Adam; Taylor, Julian; Tanaka, Emi; Telfer, Paul; Edwards, James; Martinant, Jean-Pierre; Kuchel, Haydn
2017-12-01
Genomic prediction accuracy within a large panel was found to be substantially higher than that previously observed in smaller populations, and also higher than QTL-based prediction. In recent years, genomic selection for wheat breeding has been widely studied, but this has typically been restricted to population sizes under 1000 individuals. To assess its efficacy in germplasm representative of commercial breeding programmes, we used a panel of 10,375 Australian wheat breeding lines to investigate the accuracy of genomic prediction for grain yield, physical grain quality and other physiological traits. To achieve this, the complete panel was phenotyped in a dedicated field trial and genotyped using a custom Axiom TM Affymetrix SNP array. A high-quality consensus map was also constructed, allowing the linkage disequilibrium present in the germplasm to be investigated. Using the complete SNP array, genomic prediction accuracies were found to be substantially higher than those previously observed in smaller populations and also more accurate compared to prediction approaches using a finite number of selected quantitative trait loci. Multi-trait genetic correlations were also assessed at an additive and residual genetic level, identifying a negative genetic correlation between grain yield and protein as well as a positive genetic correlation between grain size and test weight.
Development of a three-dimensional high-order strand-grids approach
NASA Astrophysics Data System (ADS)
Tong, Oisin
Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and the Menter SST turbulence model, which blends the k-epsilon and k-o turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third order accuracy for low and high-Reynolds number flows. Cost studies show that in three-dimensions, the third-order flux correction scheme requires only 30% more walltime than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow. This approach is combined with a curvature based strand shortening strategy in order to qualitatively improve strand grid mesh quality.
Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Banchariya, Anjali; Rao, Atmakuri Ramakrishna
2017-03-24
Insecticide resistance is a major challenge for the control program of insect pests in the fields of crop protection, human and animal health etc. Resistance to different insecticides is conferred by the proteins encoded from certain class of genes of the insects. To distinguish the insecticide resistant proteins from non-resistant proteins, no computational tool is available till date. Thus, development of such a computational tool will be helpful in predicting the insecticide resistant proteins, which can be targeted for developing appropriate insecticides. Five different sets of feature viz., amino acid composition (AAC), di-peptide composition (DPC), pseudo amino acid composition (PAAC), composition-transition-distribution (CTD) and auto-correlation function (ACF) were used to map the protein sequences into numeric feature vectors. The encoded numeric vectors were then used as input in support vector machine (SVM) for classification of insecticide resistant and non-resistant proteins. Higher accuracies were obtained under RBF kernel than that of other kernels. Further, accuracies were observed to be higher for DPC feature set as compared to others. The proposed approach achieved an overall accuracy of >90% in discriminating resistant from non-resistant proteins. Further, the two classes of resistant proteins i.e., detoxification-based and target-based were discriminated from non-resistant proteins with >95% accuracy. Besides, >95% accuracy was also observed for discrimination of proteins involved in detoxification- and target-based resistance mechanisms. The proposed approach not only outperformed Blastp, PSI-Blast and Delta-Blast algorithms, but also achieved >92% accuracy while assessed using an independent dataset of 75 insecticide resistant proteins. This paper presents the first computational approach for discriminating the insecticide resistant proteins from non-resistant proteins. Based on the proposed approach, an online prediction server DIRProt has also been developed for computational prediction of insecticide resistant proteins, which is accessible at http://cabgrid.res.in:8080/dirprot/ . The proposed approach is believed to supplement the efforts needed to develop dynamic insecticides in wet-lab by targeting the insecticide resistant proteins.
Indoor Pedestrian Localization Using iBeacon and Improved Kalman Filter.
Sung, Kwangjae; Lee, Dong Kyu 'Roy'; Kim, Hwangnam
2018-05-26
The reliable and accurate indoor pedestrian positioning is one of the biggest challenges for location-based systems and applications. Most pedestrian positioning systems have drift error and large bias due to low-cost inertial sensors and random motions of human being, as well as unpredictable and time-varying radio-frequency (RF) signals used for position determination. To solve this problem, many indoor positioning approaches that integrate the user's motion estimated by dead reckoning (DR) method and the location data obtained by RSS fingerprinting through Bayesian filter, such as the Kalman filter (KF), unscented Kalman filter (UKF), and particle filter (PF), have recently been proposed to achieve higher positioning accuracy in indoor environments. Among Bayesian filtering methods, PF is the most popular integrating approach and can provide the best localization performance. However, since PF uses a large number of particles for the high performance, it can lead to considerable computational cost. This paper presents an indoor positioning system implemented on a smartphone, which uses simple dead reckoning (DR), RSS fingerprinting using iBeacon and machine learning scheme, and improved KF. The core of the system is the enhanced KF called a sigma-point Kalman particle filter (SKPF), which localize the user leveraging both the unscented transform of UKF and the weighting method of PF. The SKPF algorithm proposed in this study is used to provide the enhanced positioning accuracy by fusing positional data obtained from both DR and fingerprinting with uncertainty. The SKPF algorithm can achieve better positioning accuracy than KF and UKF and comparable performance compared to PF, and it can provide higher computational efficiency compared with PF. iBeacon in our positioning system is used for energy-efficient localization and RSS fingerprinting. We aim to design the localization scheme that can realize the high positioning accuracy, computational efficiency, and energy efficiency through the SKPF and iBeacon indoors. Empirical experiments in real environments show that the use of the SKPF algorithm and iBeacon in our indoor localization scheme can achieve very satisfactory performance in terms of localization accuracy, computational cost, and energy efficiency.
Classifying four-category visual objects using multiple ERP components in single-trial ERP.
Qin, Yu; Zhan, Yu; Wang, Changming; Zhang, Jiacai; Yao, Li; Guo, Xiaojuan; Wu, Xia; Hu, Bin
2016-08-01
Object categorization using single-trial electroencephalography (EEG) data measured while participants view images has been studied intensively. In previous studies, multiple event-related potential (ERP) components (e.g., P1, N1, P2, and P3) were used to improve the performance of object categorization of visual stimuli. In this study, we introduce a novel method that uses multiple-kernel support vector machine to fuse multiple ERP component features. We investigate whether fusing the potential complementary information of different ERP components (e.g., P1, N1, P2a, and P2b) can improve the performance of four-category visual object classification in single-trial EEGs. We also compare the classification accuracy of different ERP component fusion methods. Our experimental results indicate that the classification accuracy increases through multiple ERP fusion. Additional comparative analyses indicate that the multiple-kernel fusion method can achieve a mean classification accuracy higher than 72 %, which is substantially better than that achieved with any single ERP component feature (55.07 % for the best single ERP component, N1). We compare the classification results with those of other fusion methods and determine that the accuracy of the multiple-kernel fusion method is 5.47, 4.06, and 16.90 % higher than those of feature concatenation, feature extraction, and decision fusion, respectively. Our study shows that our multiple-kernel fusion method outperforms other fusion methods and thus provides a means to improve the classification performance of single-trial ERPs in brain-computer interface research.
Neurocognitive and Behavioral Predictors of Math Performance in Children with and without ADHD
Antonini, Tanya N.; O’Brien, Kathleen M.; Narad, Megan E.; Langberg, Joshua M.; Tamm, Leanne; Epstein, Jeff N.
2014-01-01
Objective: This study examined neurocognitive and behavioral predictors of math performance in children with and without attention-deficit/hyperactivity disorder (ADHD). Method: Neurocognitive and behavioral variables were examined as predictors of 1) standardized mathematics achievement scores,2) productivity on an analog math task, and 3) accuracy on an analog math task. Results: Children with ADHD had lower achievement scores but did not significantly differ from controls on math productivity or accuracy. N-back accuracy and parent-rated attention predicted math achievement. N-back accuracy and observed attention predicted math productivity. Alerting scores on the Attentional Network Task predicted math accuracy. Mediation analyses indicated that n-back accuracy significantly mediated the relationship between diagnostic group and math achievement. Conclusion: Neurocognition, rather than behavior, may account for the deficits in math achievement exhibited by many children with ADHD. PMID:24071774
Neurocognitive and Behavioral Predictors of Math Performance in Children With and Without ADHD.
Antonini, Tanya N; Kingery, Kathleen M; Narad, Megan E; Langberg, Joshua M; Tamm, Leanne; Epstein, Jeffery N
2016-02-01
This study examined neurocognitive and behavioral predictors of math performance in children with and without ADHD. Neurocognitive and behavioral variables were examined as predictors of (a) standardized mathematics achievement scores, (b) productivity on an analog math task, and (c) accuracy on an analog math task. Children with ADHD had lower achievement scores but did not significantly differ from controls on math productivity or accuracy. N-back accuracy and parent-rated attention predicted math achievement. N-back accuracy and observed attention predicted math productivity. Alerting scores on the attentional network task predicted math accuracy. Mediation analyses indicated that n-back accuracy significantly mediated the relationship between diagnostic group and math achievement. Neurocognition, rather than behavior, may account for the deficits in math achievement exhibited by many children with ADHD. © The Author(s) 2013.
Comparison of Machine Learning Methods for the Arterial Hypertension Diagnostics
Belo, David; Gamboa, Hugo
2017-01-01
The paper presents results of machine learning approach accuracy applied analysis of cardiac activity. The study evaluates the diagnostics possibilities of the arterial hypertension by means of the short-term heart rate variability signals. Two groups were studied: 30 relatively healthy volunteers and 40 patients suffering from the arterial hypertension of II-III degree. The following machine learning approaches were studied: linear and quadratic discriminant analysis, k-nearest neighbors, support vector machine with radial basis, decision trees, and naive Bayes classifier. Moreover, in the study, different methods of feature extraction are analyzed: statistical, spectral, wavelet, and multifractal. All in all, 53 features were investigated. Investigation results show that discriminant analysis achieves the highest classification accuracy. The suggested approach of noncorrelated feature set search achieved higher results than data set based on the principal components. PMID:28831239
Meher, Prabina K.; Sahu, Tanmaya K.; Gahoi, Shachi; Rao, Atmakuri R.
2018-01-01
Heat shock proteins (HSPs) play a pivotal role in cell growth and variability. Since conventional approaches are expensive and voluminous protein sequence information is available in the post-genomic era, development of an automated and accurate computational tool is highly desirable for prediction of HSPs, their families and sub-types. Thus, we propose a computational approach for reliable prediction of all these components in a single framework and with higher accuracy as well. The proposed approach achieved an overall accuracy of ~84% in predicting HSPs, ~97% in predicting six different families of HSPs, and ~94% in predicting four types of DnaJ proteins, with bench mark datasets. The developed approach also achieved higher accuracy as compared to most of the existing approaches. For easy prediction of HSPs by experimental scientists, a user friendly web server ir-HSP is made freely accessible at http://cabgrid.res.in:8080/ir-hsp. The ir-HSP was further evaluated for proteome-wide identification of HSPs by using proteome datasets of eight different species, and ~50% of the predicted HSPs in each species were found to be annotated with InterPro HSP families/domains. Thus, the developed computational method is expected to supplement the currently available approaches for prediction of HSPs, to the extent of their families and sub-types. PMID:29379521
De novo peptide sequencing by deep learning
Tran, Ngoc Hieu; Zhang, Xianglilan; Xin, Lei; Shan, Baozhen; Li, Ming
2017-01-01
De novo peptide sequencing from tandem MS data is the key technology in proteomics for the characterization of proteins, especially for new sequences, such as mAbs. In this study, we propose a deep neural network model, DeepNovo, for de novo peptide sequencing. DeepNovo architecture combines recent advances in convolutional neural networks and recurrent neural networks to learn features of tandem mass spectra, fragment ions, and sequence patterns of peptides. The networks are further integrated with local dynamic programming to solve the complex optimization task of de novo sequencing. We evaluated the method on a wide variety of species and found that DeepNovo considerably outperformed state of the art methods, achieving 7.7–22.9% higher accuracy at the amino acid level and 38.1–64.0% higher accuracy at the peptide level. We further used DeepNovo to automatically reconstruct the complete sequences of antibody light and heavy chains of mouse, achieving 97.5–100% coverage and 97.2–99.5% accuracy, without assisting databases. Moreover, DeepNovo is retrainable to adapt to any sources of data and provides a complete end-to-end training and prediction solution to the de novo sequencing problem. Not only does our study extend the deep learning revolution to a new field, but it also shows an innovative approach in solving optimization problems by using deep learning and dynamic programming. PMID:28720701
Cost-effective accurate coarse-grid method for highly convective multidimensional unsteady flows
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Niknafs, H. S.
1991-01-01
A fundamentally multidimensional convection scheme is described based on vector transient interpolation modeling rewritten in conservative control-volume form. Vector third-order upwinding is used as the basis of the algorithm; this automatically introduces important cross-difference terms that are absent from schemes using component-wise one-dimensional formulas. Third-order phase accuracy is good; this is important for coarse-grid large-eddy or full simulation. Potential overshoots or undershoots are avoided by using a recently developed universal limiter. Higher order accuracy is obtained locally, where needed, by the cost-effective strategy of adaptive stencil expansion in a direction normal to each control-volume face; this is controlled by monitoring the absolute normal gradient and curvature across the face. Higher (than third) order cross-terms do not appear to be needed. Since the wider stencil is used only in isolated narrow regions (near discontinuities), extremely high (in this case, seventh) order accuracy can be achieved for little more than the cost of a globally third-order scheme.
A MUSIC-based method for SSVEP signal processing.
Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei
2016-03-01
The research on brain computer interfaces (BCIs) has become a hotspot in recent years because it offers benefit to disabled people to communicate with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are more widely used because of higher signal to noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification based method was proposed for multi-dimensional SSVEP feature extraction. 2-second data epochs from four electrodes achieved excellent accuracy rates including idle state detection. In some asynchronous mode experiments, the recognition accuracy reached up to 100%. The experimental results showed that the proposed method attained good frequency resolution. In most situations, the recognition accuracy was higher than canonical correlation analysis, which is a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proved the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications.
Matías-Guiu, Jordi A; Valles-Salgado, María; Rognoni, Teresa; Hamre-Gil, Frank; Moreno-Ramos, Teresa; Matías-Guiu, Jorge
2017-01-01
Our aim was to evaluate and compare the diagnostic properties of 5 screening tests for the diagnosis of mild Alzheimer disease (AD). We conducted a prospective and cross-sectional study of 92 patients with mild AD and of 68 healthy controls from our Department of Neurology. The diagnostic properties of the following tests were compared: Mini-Mental State Examination (MMSE), Addenbrooke's Cognitive Examination III (ACE-III), Memory Impairment Screen (MIS), Montreal Cognitive Assessment (MoCA), and Rowland Universal Dementia Assessment Scale (RUDAS). All tests yielded high diagnostic accuracy, with the ACE-III achieving the best diagnostic properties. The area under the curve was 0.897 for the ACE-III, 0.889 for the RUDAS, 0.874 for the MMSE, 0.866 for the MIS, and 0.856 for the MoCA. The Mini-ACE score from the ACE-III showed the highest diagnostic capacity (area under the curve 0.939). Memory scores of the ACE-III and of the RUDAS showed a better diagnostic accuracy than those of the MMSE and of the MoCA. All tests, especially the ACE-III, conveyed a higher diagnostic accuracy in patients with full primary education than in the less educated group. Implementing normative data improved the diagnostic accuracy of the ACE-III but not that of the other tests. The ACE-III achieved the highest diagnostic accuracy. This better discrimination was more evident in the more educated group. © 2017 S. Karger AG, Basel.
Training set extension for SVM ensemble in P300-speller with familiar face paradigm.
Li, Qi; Shi, Kaiyang; Gao, Ning; Li, Jian; Bai, Ou
2018-03-27
P300-spellers are brain-computer interface (BCI)-based character input systems. Support vector machine (SVM) ensembles are trained with large-scale training sets and used as classifiers in these systems. However, the required large-scale training data necessitate a prolonged collection time for each subject, which results in data collected toward the end of the period being contaminated by the subject's fatigue. This study aimed to develop a method for acquiring more training data based on a collected small training set. A new method was developed in which two corresponding training datasets in two sequences are superposed and averaged to extend the training set. The proposed method was tested offline on a P300-speller with the familiar face paradigm. The SVM ensemble with extended training set achieved 85% classification accuracy for the averaged results of four sequences, and 100% for 11 sequences in the P300-speller. In contrast, the conventional SVM ensemble with non-extended training set achieved only 65% accuracy for four sequences, and 92% for 11 sequences. The SVM ensemble with extended training set achieves higher classification accuracies than the conventional SVM ensemble, which verifies that the proposed method effectively improves the classification performance of BCI P300-spellers, thus enhancing their practicality.
Xu, Y.; Xia, J.; Miller, R.D.
2007-01-01
The need for incorporating the traction-free condition at the air-earth boundary for finite-difference modeling of seismic wave propagation has been discussed widely. A new implementation has been developed for simulating elastic wave propagation in which the free-surface condition is replaced by an explicit acoustic-elastic boundary. Detailed comparisons of seismograms with different implementations for the air-earth boundary were undertaken using the (2,2) (the finite-difference operators are second order in time and space) and the (2,6) (second order in time and sixth order in space) standard staggered-grid (SSG) schemes. Methods used in these comparisons to define the air-earth boundary included the stress image method (SIM), the heterogeneous approach, the scheme of modifying material properties based on transversely isotropic medium approach, the acoustic-elastic boundary approach, and an analytical approach. The method proposed achieves the same or higher accuracy of modeled body waves relative to the SIM. Rayleigh waves calculated using the explicit acoustic-elastic boundary approach differ slightly from those calculated using the SIM. Numerical results indicate that when using the (2,2) SSG scheme for SIM and our new method, a spatial step of 16 points per minimum wavelength is sufficient to achieve 90% accuracy; 32 points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. When using the (2,6) SSG scheme for the two methods, a spatial step of eight points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. Our proposed method is physically reasonable and, based on dispersive analysis of simulated seismographs from a layered half-space model, is highly accurate. As a bonus, our proposed method is easy to program and slightly faster than the SIM. ?? 2007 Society of Exploration Geophysicists.
Evaluation of accelerometer based multi-sensor versus single-sensor activity recognition systems.
Gao, Lei; Bourke, A K; Nelson, John
2014-06-01
Physical activity has a positive impact on people's well-being and it had been shown to decrease the occurrence of chronic diseases in the older adult population. To date, a substantial amount of research studies exist, which focus on activity recognition using inertial sensors. Many of these studies adopt a single sensor approach and focus on proposing novel features combined with complex classifiers to improve the overall recognition accuracy. In addition, the implementation of the advanced feature extraction algorithms and the complex classifiers exceed the computing ability of most current wearable sensor platforms. This paper proposes a method to adopt multiple sensors on distributed body locations to overcome this problem. The objective of the proposed system is to achieve higher recognition accuracy with "light-weight" signal processing algorithms, which run on a distributed computing based sensor system comprised of computationally efficient nodes. For analysing and evaluating the multi-sensor system, eight subjects were recruited to perform eight normal scripted activities in different life scenarios, each repeated three times. Thus a total of 192 activities were recorded resulting in 864 separate annotated activity states. The methods for designing such a multi-sensor system required consideration of the following: signal pre-processing algorithms, sampling rate, feature selection and classifier selection. Each has been investigated and the most appropriate approach is selected to achieve a trade-off between recognition accuracy and computing execution time. A comparison of six different systems, which employ single or multiple sensors, is presented. The experimental results illustrate that the proposed multi-sensor system can achieve an overall recognition accuracy of 96.4% by adopting the mean and variance features, using the Decision Tree classifier. The results demonstrate that elaborate classifiers and feature sets are not required to achieve high recognition accuracies on a multi-sensor system. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
Correa, Katharina; Bangera, Rama; Figueroa, René; Lhorente, Jean P; Yáñez, José M
2017-01-31
Sea lice infestations caused by Caligus rogercresseyi are a main concern to the salmon farming industry due to associated economic losses. Resistance to this parasite was shown to have low to moderate genetic variation and its genetic architecture was suggested to be polygenic. The aim of this study was to compare accuracies of breeding value predictions obtained with pedigree-based best linear unbiased prediction (P-BLUP) methodology against different genomic prediction approaches: genomic BLUP (G-BLUP), Bayesian Lasso, and Bayes C. To achieve this, 2404 individuals from 118 families were measured for C. rogercresseyi count after a challenge and genotyped using 37 K single nucleotide polymorphisms. Accuracies were assessed using fivefold cross-validation and SNP densities of 0.5, 1, 5, 10, 25 and 37 K. Accuracy of genomic predictions increased with increasing SNP density and was higher than pedigree-based BLUP predictions by up to 22%. Both Bayesian and G-BLUP methods can predict breeding values with higher accuracies than pedigree-based BLUP, however, G-BLUP may be the preferred method because of reduced computation time and ease of implementation. A relatively low marker density (i.e. 10 K) is sufficient for maximal increase in accuracy when using G-BLUP or Bayesian methods for genomic prediction of C. rogercresseyi resistance in Atlantic salmon.
Alternative evaluation metrics for risk adjustment methods.
Park, Sungchul; Basu, Anirban
2018-06-01
Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.
New more accurate calculations of the ground state potential energy surface of H(3) (+).
Pavanello, Michele; Tung, Wei-Cheng; Leonarski, Filip; Adamowicz, Ludwik
2009-02-21
Explicitly correlated Gaussian functions with floating centers have been employed to recalculate the ground state potential energy surface (PES) of the H(3) (+) ion with much higher accuracy than it was done before. The nonlinear parameters of the Gaussians (i.e., the exponents and the centers) have been variationally optimized with a procedure employing the analytical gradient of the energy with respect to these parameters. The basis sets for calculating new PES points were guessed from the points already calculated. This allowed us to considerably speed up the calculations and achieve very high accuracy of the results.
Lemieux, Sébastien
2006-08-25
The identification of differentially expressed genes (DEGs) from Affymetrix GeneChips arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied on the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental design with or without replication. On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes within 10% of false positives. This accuracy was achieved both using the three replicates per conditions available in the dataset and using only one replicate per condition. The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
NASA Astrophysics Data System (ADS)
Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding
2018-04-01
The Direct Position Determination (DPD) algorithm has been demonstrated to achieve a better accuracy with known signal waveforms. However, the signal waveform is difficult to be completely known in the actual positioning process. To solve the problem, we proposed a DPD method for digital modulation signals based on improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function is obtained on symbol estimation. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is considered for the optimal symbol search. Simulations are carried out to show the higher position accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weight and population size. On the one hand, the proposed algorithm can take full advantage of the signal feature to improve the positioning accuracy. On the other hand, the improved PSO algorithm can improve the efficiency of symbol search by nearly one hundred times to achieve a global optimal solution.
Parallelism measurement for base plate of standard artifact with multiple tactile approaches
NASA Astrophysics Data System (ADS)
Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie
2018-01-01
Nowadays, as workpieces become more precise and more specialized which results in more sophisticated structures and higher accuracy for the artifacts, higher requirements have been put forward for measuring accuracy and measuring methods. As an important method to obtain the size of workpieces, coordinate measuring machine (CMM) has been widely used in many industries. In order to achieve the calibration of a self-developed CMM, it is found that the parallelism of the base plate used for fixing the standard artifact is an important factor which affects the measurement accuracy in the process of studying self-made high-precision standard artifact. And aimed to measure the parallelism of the base plate, by using the existing high-precision CMM, gauge blocks, dial gauge and marble platform with the tactile approach, three methods for parallelism measurement of workpieces are employed, and comparisons are made within the measurement results. The results of experiments show that the final accuracy of all the three methods is able to reach micron level and meets the measurement requirements. Simultaneously, these three approaches are suitable for different measurement conditions which provide a basis for rapid and high-precision measurement under different equipment conditions.
NASA Astrophysics Data System (ADS)
Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang
2016-10-01
In order to achieve higher measurement accuracy of routine resistance without increasing the complexity and cost of the system circuit of existing methods, this paper presents a novel method that exploits a shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by the sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and the accuracy of the measurement system. Compared with the traditional method of using constant amplitude excitation signal, this method can effectively enhance the measuring accuracy by almost one order of magnitude and reduce the root mean square error by 3.75 times under the same measurement conditions. The results of experiments show that the novel method can attain the aim of significantly improve the measurement accuracy of resistance on the premise of not increasing the system cost and complexity of the circuit, which is significantly valuable for applying in electronic instruments.
Cognitive accuracy and intelligent executive function in the brain and in business.
Bailey, Charles E
2007-11-01
This article reviews research on cognition, language, organizational culture, brain, behavior, and evolution to posit the value of operating with a stable reference point based on cognitive accuracy and a rational bias. Drawing on rational-emotive behavioral science, social neuroscience, and cognitive organizational science on the one hand and a general model of brain and frontal lobe executive function on the other, I suggest implications for organizational success. Cognitive thought processes depend on specific brain structures functioning as effectively as possible under conditions of cognitive accuracy. However, typical cognitive processes in hierarchical business structures promote the adoption and application of subjective organizational beliefs and, thus, cognitive inaccuracies. Applying informed frontal lobe executive functioning to cognition, emotion, and organizational behavior helps minimize the negative effects of indiscriminate application of personal and cultural belief systems to business. Doing so enhances cognitive accuracy and improves communication and cooperation. Organizations operating with cognitive accuracy will tend to respond more nimbly to market pressures and achieve an overall higher level of performance and employee satisfaction.
Singha, Mrinal; Wu, Bingfang; Zhang, Miao
2016-01-01
Accurate and timely mapping of paddy rice is vital for food security and environmental sustainability. This study evaluates the utility of temporal features extracted from coarse resolution data for object-based paddy rice classification of fine resolution data. The coarse resolution vegetation index data is first fused with the fine resolution data to generate the time series fine resolution data. Temporal features are extracted from the fused data and added with the multi-spectral data to improve the classification accuracy. Temporal features provided the crop growth information, while multi-spectral data provided the pattern variation of paddy rice. The achieved overall classification accuracy and kappa coefficient were 84.37% and 0.68, respectively. The results indicate that the use of temporal features improved the overall classification accuracy of a single-date multi-spectral image by 18.75% from 65.62% to 84.37%. The minimum sensitivity (MS) of the paddy rice classification has also been improved. The comparison showed that the mapped paddy area was analogous to the agricultural statistics at the district level. This work also highlighted the importance of feature selection to achieve higher classification accuracies. These results demonstrate the potential of the combined use of temporal and spectral features for accurate paddy rice classification. PMID:28025525
Singha, Mrinal; Wu, Bingfang; Zhang, Miao
2016-12-22
Accurate and timely mapping of paddy rice is vital for food security and environmental sustainability. This study evaluates the utility of temporal features extracted from coarse resolution data for object-based paddy rice classification of fine resolution data. The coarse resolution vegetation index data is first fused with the fine resolution data to generate the time series fine resolution data. Temporal features are extracted from the fused data and added with the multi-spectral data to improve the classification accuracy. Temporal features provided the crop growth information, while multi-spectral data provided the pattern variation of paddy rice. The achieved overall classification accuracy and kappa coefficient were 84.37% and 0.68, respectively. The results indicate that the use of temporal features improved the overall classification accuracy of a single-date multi-spectral image by 18.75% from 65.62% to 84.37%. The minimum sensitivity (MS) of the paddy rice classification has also been improved. The comparison showed that the mapped paddy area was analogous to the agricultural statistics at the district level. This work also highlighted the importance of feature selection to achieve higher classification accuracies. These results demonstrate the potential of the combined use of temporal and spectral features for accurate paddy rice classification.
Wang, Xueyi; Davidson, Nicholas J.
2011-01-01
Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we achieve a few results about the prediction accuracies of ensemble methods for binary classification that are missed or misinterpreted in previous literature. First we show the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve > 0.5 prediction accuracy, while individual classifiers have < 0.5 prediction accuracies. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify the results and show that it is hard to achieve the upper and lower bounds accuracies by random individual classifiers and better algorithms need to be developed. PMID:21853162
Exploring Mouse Protein Function via Multiple Approaches.
Huang, Guohua; Chu, Chen; Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning; Cai, Yu-Dong
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when there are no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality.
Exploring Mouse Protein Function via Multiple Approaches
Huang, Tao; Kong, Xiangyin; Zhang, Yunhua; Zhang, Ning
2016-01-01
Although the number of available protein sequences is growing exponentially, functional protein annotations lag far behind. Therefore, accurate identification of protein functions remains one of the major challenges in molecular biology. In this study, we presented a novel approach to predict mouse protein functions. The approach was a sequential combination of a similarity-based approach, an interaction-based approach and a pseudo amino acid composition-based approach. The method achieved an accuracy of about 0.8450 for the 1st-order predictions in the leave-one-out and ten-fold cross-validations. For the results yielded by the leave-one-out cross-validation, although the similarity-based approach alone achieved an accuracy of 0.8756, it was unable to predict the functions of proteins with no homologues. Comparatively, the pseudo amino acid composition-based approach alone reached an accuracy of 0.6786. Although the accuracy was lower than that of the previous approach, it could predict the functions of almost all proteins, even proteins with no homologues. Therefore, the combined method balanced the advantages and disadvantages of both approaches to achieve efficient performance. Furthermore, the results yielded by the ten-fold cross-validation indicate that the combined method is still effective and stable when there are no close homologs are available. However, the accuracy of the predicted functions can only be determined according to known protein functions based on current knowledge. Many protein functions remain unknown. By exploring the functions of proteins for which the 1st-order predicted functions are wrong but the 2nd-order predicted functions are correct, the 1st-order wrongly predicted functions were shown to be closely associated with the genes encoding the proteins. The so-called wrongly predicted functions could also potentially be correct upon future experimental verification. Therefore, the accuracy of the presented method may be much higher in reality. PMID:27846315
A neural network approach to cloud classification
NASA Technical Reports Server (NTRS)
Lee, Jonathan; Weger, Ronald C.; Sengupta, Sailes K.; Welch, Ronald M.
1990-01-01
It is shown that, using high-spatial-resolution data, very high cloud classification accuracies can be obtained with a neural network approach. A texture-based neural network classifier using only single-channel visible Landsat MSS imagery achieves an overall cloud identification accuracy of 93 percent. Cirrus can be distinguished from boundary layer cloudiness with an accuracy of 96 percent, without the use of an infrared channel. Stratocumulus is retrieved with an accuracy of 92 percent, cumulus at 90 percent. The use of the neural network does not improve cirrus classification accuracy. Rather, its main effect is in the improved separation between stratocumulus and cumulus cloudiness. While most cloud classification algorithms rely on linear parametric schemes, the present study is based on a nonlinear, nonparametric four-layer neural network approach. A three-layer neural network architecture, the nonparametric K-nearest neighbor approach, and the linear stepwise discriminant analysis procedure are compared. A significant finding is that significantly higher accuracies are attained with the nonparametric approaches using only 20 percent of the database as training data, compared to 67 percent of the database in the linear approach.
Colloff, Melissa F.; Karoğlu, Nilda; Zelek, Katarzyna; Ryder, Hannah; Humphries, Joyce E.; Takarangi, Melanie K.T.
2017-01-01
Summary Acute alcohol intoxication during encoding can impair subsequent identification accuracy, but results across studies have been inconsistent, with studies often finding no effect. Little is also known about how alcohol intoxication affects the identification confidence–accuracy relationship. We randomly assigned women (N = 153) to consume alcohol (dosed to achieve a 0.08% blood alcohol content) or tonic water, controlling for alcohol expectancy. Women then participated in an interactive hypothetical sexual assault scenario and, 24 hours or 7 days later, attempted to identify the assailant from a perpetrator present or a perpetrator absent simultaneous line‐up and reported their decision confidence. Overall, levels of identification accuracy were similar across the alcohol and tonic water groups. However, women who had consumed tonic water as opposed to alcohol identified the assailant with higher confidence on average. Further, calibration analyses suggested that confidence is predictive of accuracy regardless of alcohol consumption. The theoretical and applied implications of our results are discussed.© 2017 The Authors Applied Cognitive Psychology Published by John Wiley & Sons Ltd. PMID:28781426
On what it means to know someone: a matter of pragmatics.
Gill, Michael J; Swann, William B
2004-03-01
Two studies provide support for W. B. Swann's (1984) argument that perceivers achieve substantial pragmatic accuracy--accuracy that facilitates the achievement of relationship-specific interaction goals--in their social relationships. Study 1 assessed the extent to which group members reached consensus regarding the behavior of a member in familiar (as compared with unfamiliar) contexts and found that groups do indeed achieve this form of pragmatic accuracy. Study 2 assessed the degree of insight romantic partners had into the self-views of their partners on relationship-relevant (as compared with less relevant) traits and found that couples do indeed achieve this form of pragmatic accuracy. Furthermore, pragmatic accuracy was uniquely associated with relationship harmony. Implications for a functional approach to person perception are discussed.
Increasing the number of single nucleotide polymorphisms used in genomic evaluations of dairy cattle
USDA-ARS?s Scientific Manuscript database
A small increase in the accuracy of genomic evaluations of dairy cattle was achieved by increasing the number of SNP used to 61,013. All the 45,195 SNP used previously were retained, and 15,818 SNP were selected from higher density genotyping chips if the magnitude of the SNP effect was among the to...
Liew, Jeffrey; Chen, Qi; Hughes, Jan N.
2009-01-01
The joint contributions of child effortful control (using inhibitory control and task accuracy as behavioral indices) and positive teacher-student relationships at first grade on reading and mathematics achievement at second grade were examined in 761 children who were predominantly from low-income and ethnic minority backgrounds and assessed to be academically at-risk at entry to first grade. Analyses accounted for clustering effects, covariates, baselines of effortful control measures, and prior levels of achievement. Even with such conservative statistical controls, interactive effects were found for task accuracy and positive teacher-student relationships on future achievement. Results suggest that task accuracy served as a protective factor so that children with high task accuracy performed well academically despite not having positive teacher-student relationships. Further, positive teacher-student relationships served as a compensatory factor so that children with low task accuracy performed just as well as those with high task accuracy if they were paired with a positive and supportive teacher. Importantly, results indicate that the influence of positive teacher-student relationships on future achievement was most pronounced for students with low effortful control on tasks that require fine motor skills, accuracy, and attention-related skills. Study results have implications for narrowing achievement disparities for academically at-risk children. PMID:20161421
Liew, Jeffrey; Chen, Qi; Hughes, Jan N
2010-01-01
The joint contributions of child effortful control (using inhibitory control and task accuracy as behavioral indices) and positive teacher-student relationships at first grade on reading and mathematics achievement at second grade were examined in 761 children who were predominantly from low-income and ethnic minority backgrounds and assessed to be academically at-risk at entry to first grade. Analyses accounted for clustering effects, covariates, baselines of effortful control measures, and prior levels of achievement. Even with such conservative statistical controls, interactive effects were found for task accuracy and positive teacher-student relationships on future achievement. Results suggest that task accuracy served as a protective factor so that children with high task accuracy performed well academically despite not having positive teacher-student relationships. Further, positive teacher-student relationships served as a compensatory factor so that children with low task accuracy performed just as well as those with high task accuracy if they were paired with a positive and supportive teacher. Importantly, results indicate that the influence of positive teacher-student relationships on future achievement was most pronounced for students with low effortful control on tasks that require fine motor skills, accuracy, and attention-related skills. Study results have implications for narrowing achievement disparities for academically at-risk children.
A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology
Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi
2015-01-01
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement of high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category MDCS based on tilt-shift photography to improve ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of proposed MDCS. Similar to the cameras of traditional MDCSs, calibration is also essential for TSC of new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for TSC, including digital distortion model (DDM) approach and two-step calibrated strategy. The characteristics of TSC are analyzed in detail via a calibration experiment; for example, the edge distortion of TSC. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of flight experiments illustrate that geo-position accuracy of prototype system achieves 0.3 m at a flight height of 800 m, and spatial resolution of 0.15 m. In addition, results of the comparison between the traditional (MADC II) and proposed MDCS demonstrate that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also take the attitude that using higher accuracy TSC in the new MDCS should further improve the accuracy of the photogrammetry senior product. PMID:25835187
Motor Inhibition Affects the Speed But Not Accuracy of Aimed Limb Movements in an Insect
Calas-List, Delphine; Clare, Anthony J.; Komissarova, Alexandra; Nielsen, Thomas A.
2014-01-01
When reaching toward a target, human subjects use slower movements to achieve higher accuracy, and this can be accompanied by increased limb impedance (stiffness, viscosity) that stabilizes movements against motor noise and external perturbation. In arthropods, the activity of common inhibitory motor neurons influences limb impedance, so we hypothesized that this might provide a mechanism for speed and accuracy control of aimed movements in insects. We recorded simultaneously from excitatory leg motor neurons and from an identified common inhibitory motor neuron (CI1) in locusts that performed natural aimed scratching movements. We related limb movement kinematics to recorded motor activity and demonstrate that imposed alterations in the activity of CI1 influenced these kinematics. We manipulated the activity of CI1 by injecting depolarizing or hyperpolarizing current or killing the cell using laser photoablation. Naturally higher levels of inhibitory activity accompanied faster movements. Experimentally biasing the firing rate downward, or stopping firing completely, led to slower movements mediated by changes at several joints of the limb. Despite this, we found no effect on overall movement accuracy. We conclude that inhibitory modulation of joint stiffness has effects across most of the working range of the insect limb, with a pronounced effect on the overall velocity of natural movements independent of their accuracy. Passive joint forces that are greatest at extreme joint angles may enhance accuracy and are not affected by motor inhibition. PMID:24872556
A novel multi-digital camera system based on tilt-shift photography technology.
Sun, Tao; Fang, Jun-Yong; Zhao, Dong; Liu, Xue; Tong, Qing-Xi
2015-03-31
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement of high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category MDCS based on tilt-shift photography to improve ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of proposed MDCS. Similar to the cameras of traditional MDCSs, calibration is also essential for TSC of new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for TSC, including digital distortion model (DDM) approach and two-step calibrated strategy. The characteristics of TSC are analyzed in detail via a calibration experiment; for example, the edge distortion of TSC. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of flight experiments illustrate that geo-position accuracy of prototype system achieves 0.3 m at a flight height of 800 m, and spatial resolution of 0.15 m. In addition, results of the comparison between the traditional (MADC II) and proposed MDCS demonstrate that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also take the attitude that using higher accuracy TSC in the new MDCS should further improve the accuracy of the photogrammetry senior product.
Tense Marking in the English Narrative Retells of Dual Language Preschoolers.
Gusewski, Svenja; Rojas, Raúl
2017-07-26
This longitudinal study investigated the emergence of English tense marking in young (Spanish-English) dual language learners (DLLs) over 4 consecutive academic semesters, addressing the need for longitudinal data on typical acquisition trajectories of English in DLL preschoolers. Language sample analysis was conducted on 139 English narrative retells elicited from 39 preschool-age (Spanish-English) DLLs (range = 39-65 months). Growth curve models captured within- and between-individual change in tense-marking accuracy over time. Tense-marking accuracy was indexed by the finite verb morphology composite and by 2 specifically developed adaptations. Individual tense markers were systematically described in terms of overall accuracy and specific error patterns. Tense-marking accuracy exhibited significant growth over time for each composite. Initially, irregular past-tense accuracy was higher than regular past-tense accuracy; over time, however, regular past-tense marking outpaced accuracy on irregular verbs. These findings suggest that young DLLs can achieve high tense-marking accuracy assuming 2 years of immersive exposure to English. Monitoring the growth in tense-marking accuracy over time and considering productive tense-marking errors as partially correct more precisely captured the emergence of English tense marking in this population with highly variable expressive language skills. https://doi.org/10.23641/asha.5176942.
Entropy-based link prediction in weighted networks
NASA Astrophysics Data System (ADS)
Xu, Zhongqi; Pu, Cunlai; Ramiz Sharafat, Rajput; Li, Lunbo; Yang, Jian
2017-01-01
Information entropy has been proved to be an effective tool to quantify the structural importance of complex networks. In the previous work (Xu et al, 2016 \\cite{xu2016}), we measure the contribution of a path in link prediction with information entropy. In this paper, we further quantify the contribution of a path with both path entropy and path weight, and propose a weighted prediction index based on the contributions of paths, namely Weighted Path Entropy (WPE), to improve the prediction accuracy in weighted networks. Empirical experiments on six weighted real-world networks show that WPE achieves higher prediction accuracy than three typical weighted indices.
Vehicle logo recognition using multi-level fusion model
NASA Astrophysics Data System (ADS)
Ming, Wei; Xiao, Jianli
2018-04-01
Vehicle logo recognition plays an important role in manufacturer identification and vehicle recognition. This paper proposes a new vehicle logo recognition algorithm. It has a hierarchical framework, which consists of two fusion levels. At the first level, a feature fusion model is employed to map the original features to a higher dimension feature space. In this space, the vehicle logos become more recognizable. At the second level, a weighted voting strategy is proposed to promote the accuracy and the robustness of the recognition results. To evaluate the performance of the proposed algorithm, extensive experiments are performed, which demonstrate that the proposed algorithm can achieve high recognition accuracy and work robustly.
Prospective memory mediated by interoceptive accuracy: a psychophysiological approach.
Umeda, Satoshi; Tochizawa, Saiko; Shibata, Midori; Terasawa, Yuri
2016-11-19
Previous studies on prospective memory (PM), defined as memory for future intentions, suggest that psychological stress enhances successful PM retrieval. However, the mechanisms underlying this notion remain poorly understood. We hypothesized that PM retrieval is achieved through interaction with autonomic nervous activity, which is mediated by the individual accuracy of interoceptive awareness, as measured by the heartbeat detection task. In this study, the relationship between cardiac reactivity and retrieval of delayed intentions was evaluated using the event-based PM task. Participants were required to detect PM target letters while engaged in an ongoing 2-back working memory task. The results demonstrated that individuals with higher PM task performance had a greater increase in heart rate on PM target presentation. Also, higher interoceptive perceivers showed better PM task performance. This pattern was not observed for working memory task performance. These findings suggest that cardiac afferent signals enhance PM retrieval, which is mediated by individual levels of interoceptive accuracy.This article is part of the themed issue 'Interoception beyond homeostasis: affect, cognition and mental health'. © 2016 The Authors.
Chen, Hu; Yang, Xu; Chen, Litong; Wang, Yong; Sun, Yuchun
2016-01-01
The objective was to establish and evaluate a method for manufacture of custom trays for edentulous jaws using computer aided design and fused deposition modeling (FDM) technologies. A digital method for design the custom trays for edentulous jaws was established. The tissue surface data of ten standard mandibular edentulous plaster models, which was used to design the digital custom tray in a reverse engineering software, were obtained using a 3D scanner. The designed tray was printed by a 3D FDM printing device. Another ten hand-made custom trays were produced as control. The 3-dimentional surface data of models and custom trays was scanned to evaluate the accuracy of reserved impression space, while the difference between digitally made trays and hand-made trays were analyzed. The digitally made custom trays achieved a good matching with the mandibular model, showing higher accuracy than the hand-made ones. There was no significant difference of the reserved space between different models and its matched digitally made trays. With 3D scanning, CAD and FDM technology, an efficient method of custom tray production was established, which achieved a high reproducibility and accuracy. PMID:26763620
NASA Astrophysics Data System (ADS)
Keller, P. E.; Gmitro, A. F.
1993-07-01
A prototype neutral network system of multifaceted, planar interconnection holograms and opto-electronic neurons is analyzed. This analysis shows that a hologram fabricated with electron-beam lithography has the capacity to connect 6700 neuron outputs to 6700 neuron inputs, and that, the encoded synaptic weights have a precision of approximately 5 bits. Higher interconnection densities can be achieved by accepting a lower synaptic weight accuracy. For systems employing laser diodes at the outputs of the neurons, processing rates in the range of 45 to 720 trillion connections per second can potentially be achieved.
ERIC Educational Resources Information Center
Caldwell, Stacy Lynette
2010-01-01
Students served in juvenile correctional school settings often arrive with histories of trauma, aversive educational experiences, low achievement, and other severe risk factors that impeded psychosocial development, educational progress, and occupational outcomes. Schools serving adjudicated youth must address a higher percentage of severe…
Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data
NASA Technical Reports Server (NTRS)
Larden, D. R.; Bender, P. L.
1982-01-01
The improvement in the orbit accuracy if high accuracy tracking data from a substantially larger number of ground stations is available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines by GPS geodetic receivers would be only about 1 cm.
Application of Skylab EREP data for land use management
NASA Technical Reports Server (NTRS)
Simonett, D. S. (Principal Investigator)
1976-01-01
The author has identified the following significant results. The 1.09-1.19 micron band proved to be very valuable for discriminating a variety of land use categories, including agriculture, forest, and urban classes. The 1.55-1.75 micron band proved very useful in combination with the 1.09-1.19 micron band. Misregistration between spectral bands, even by as little as 1/2 pixel, may degrade classification accuracy. Identification accuracy of boundary or border pixels was as much as 13% lower than the accuracy for identifying internal field pixels. The principal conclusion with respect to the S190B camera system is that the higher resolution of the S190B system in comparison to previous space photography (Gemini, Apollo), to the S190A system (Skylab), and to LANDSAT imagery significantly increases the range of additional discrimination achievable.
Slunyaev, A; Pelinovsky, E; Sergeeva, A; Chabchoub, A; Hoffmann, N; Onorato, M; Akhmediev, N
2013-07-01
The rogue wave solutions (rational multibreathers) of the nonlinear Schrödinger equation (NLS) are tested in numerical simulations of weakly nonlinear and fully nonlinear hydrodynamic equations. Only the lowest order solutions from 1 to 5 are considered. A higher accuracy of wave propagation in space is reached using the modified NLS equation, also known as the Dysthe equation. This numerical modeling allowed us to directly compare simulations with recent results of laboratory measurements in Chabchoub et al. [Phys. Rev. E 86, 056601 (2012)]. In order to achieve even higher physical accuracy, we employed fully nonlinear simulations of potential Euler equations. These simulations provided us with basic characteristics of long time evolution of rational solutions of the NLS equation in the case of near-breaking conditions. The analytic NLS solutions are found to describe the actual wave dynamics of steep waves reasonably well.
He, Xiyang; Zhang, Xiaohong; Tang, Long; Liu, Wanke
2015-12-22
Many applications, such as marine navigation, land vehicles location, etc., require real time precise positioning under medium or long baseline conditions. In this contribution, we develop a model of real-time kinematic decimeter-level positioning with BeiDou Navigation Satellite System (BDS) triple-frequency signals over medium distances. The ambiguities of two extra-wide-lane (EWL) combinations are fixed first, and then a wide lane (WL) combination is reformed based on the two EWL combinations for positioning. Theoretical analysis and empirical analysis is given of the ambiguity fixing rate and the positioning accuracy of the presented method. The results indicate that the ambiguity fixing rate can be up to more than 98% when using BDS medium baseline observations, which is much higher than that of dual-frequency Hatch-Melbourne-Wübbena (HMW) method. As for positioning accuracy, decimeter level accuracy can be achieved with this method, which is comparable to that of carrier-smoothed code differential positioning method. Signal interruption simulation experiment indicates that the proposed method can realize fast high-precision positioning whereas the carrier-smoothed code differential positioning method needs several hundreds of seconds for obtaining high precision results. We can conclude that a relatively high accuracy and high fixing rate can be achieved for triple-frequency WL method with single-epoch observations, displaying significant advantage comparing to traditional carrier-smoothed code differential positioning method.
Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner.
Lu, David V; Brown, Randall H; Arumugam, Manimozhiyan; Brent, Michael R
2009-07-01
The most accurate way to determine the intron-exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than heuristics. We present Pairagon, a pair hidden Markov model based cDNA-to-genome alignment program, as the most accurate aligner for sequences with high- and low-identity levels. We conducted a series of experiments testing alignment accuracy with varying sequence identity. We first created 'perfect' simulated cDNA sequences by splicing the sequences of exons in the reference genome sequences of fly and human. The complete reference genome sequences were then mutated to various degrees using a realistic mutation simulator and the perfect cDNAs were aligned to them using Pairagon and 12 other aligners. To validate these results with natural sequences, we performed cross-species alignment using orthologous transcripts from human, mouse and rat. We found that aligner accuracy is heavily dependent on sequence identity. For sequences with 100% identity, Pairagon achieved accuracy levels of >99.6%, with one quarter of the errors of any other aligner. Furthermore, for human/mouse alignments, which are only 85% identical, Pairagon achieved 87% accuracy, higher than any other aligner. Pairagon source and executables are freely available at http://mblab.wustl.edu/software/pairagon/
A novel redundant INS based on triple rotary inertial measurement units
NASA Astrophysics Data System (ADS)
Chen, Gang; Li, Kui; Wang, Wei; Li, Peng
2016-10-01
Accuracy and reliability are two key performances of inertial navigation system (INS). Rotation modulation (RM) can attenuate the bias of inertial sensors and make it possible for INS to achieve higher navigation accuracy with lower-class sensors. Therefore, the conflict between the accuracy and cost of INS can be eased. Traditional system redundancy and recently researched sensor redundancy are two primary means to improve the reliability of INS. However, how to make the best use of the redundant information from redundant sensors hasn’t been studied adequately, especially in rotational INS. This paper proposed a novel triple rotary unit strapdown inertial navigation system (TRUSINS), which combines RM and sensor redundancy design to enhance the accuracy and reliability of rotational INS. Each rotary unit independently rotates to modulate the errors of two gyros and two accelerometers. Three units can provide double sets of measurements along all three axes of body frame to constitute a couple of INSs which make TRUSINS redundant. Experiments and simulations based on a prototype which is made up of six fiber-optic gyros with drift stability of 0.05° h-1 show that TRUSINS can achieve positioning accuracy of about 0.256 n mile h-1, which is ten times better than that of a normal non-rotational INS with the same level inertial sensors. The theoretical analysis and the experimental results show that due to the advantage of the innovative structure, the designed fault detection and isolation (FDI) strategy can tolerate six sensor faults at most, and is proved to be effective and practical. Therefore, TRUSINS is particularly suitable and highly beneficial for the applications where high accuracy and high reliability is required.
Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS.
Yu, Hwanjo; Kim, Taehoon; Oh, Jinoh; Ko, Ilhwan; Kim, Sungchul; Han, Wook-Shin
2010-04-16
Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to the learned relevance function. However, the process of learning and ranking is usually done offline without integrated with the keyword queries, and the users have to provide a large amount of training documents to get a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and a multi-level relevance feedback in real time on PubMed. RefMed supports a multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into RDBMS to support both keyword queries and the multi-level relevance feedback in real time; the tight coupling of the RankSVM and DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. RefMed is the first multi-level relevance feedback system for PubMed, which achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently processes the function to return relevant articles in real time.
Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS
2010-01-01
Background Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to the learned relevance function. However, the process of learning and ranking is usually done offline without integrated with the keyword queries, and the users have to provide a large amount of training documents to get a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and a multi-level relevance feedback in real time on PubMed. Results RefMed supports a multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into RDBMS to support both keyword queries and the multi-level relevance feedback in real time; the tight coupling of the RankSVM and DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. Conclusions RefMed is the first multi-level relevance feedback system for PubMed, which achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user’s feedback and efficiently processes the function to return relevant articles in real time. PMID:20406504
Adequacy of Using a Three-Item Questionnaire to Determine Zygosity in Chinese Young Twins.
Ho, Connie Suk-Han; Zheng, Mo; Chow, Bonnie Wing-Yin; Wong, Simpson W L; Lim, Cadmon K P; Waye, Mary M Y
2017-03-01
The present study examined the adequacy of a three-item parent questionnaire in determining the zygosity of young Chinese twins and whether there was any association between parent response accuracy and some demographic variables. The sample consisted of 334 pairs of same-sex Chinese twins aged from 3 to 11 years. Three scoring methods, namely the summed score, logistic regression, and decision tree, were employed to evaluate parent response accuracy of twin zygosity based on single nucleotide polymorphism (SNP) information. The results showed that all three methods achieved high level of accuracy ranging from 91 to 93 % which was comparable to the accuracy rates in previous Chinese twin studies. Correlation results also showed that the higher the parents' education level or the family income was, the more likely parents were able to tell correctly that their twins are identical or fraternal. The present findings confirmed the validity of using a three-item parent questionnaire to determine twin zygosity in a Chinese school-aged twin sample.
NASA Astrophysics Data System (ADS)
Dou, P.
2017-12-01
Guangzhou has experienced a rapid urbanization period called "small change in three years and big change in five years" since the reform of China, resulting in significant land use/cover changes(LUC). To overcome the disadvantages of single classifier for remote sensing image classification accuracy, a multiple classifier system (MCS) is proposed to improve the quality of remote sensing image classification. The new method combines advantages of different learning algorithms, and achieves higher accuracy (88.12%) than any single classifier did. With the proposed MCS, land use/cover (LUC) on Landsat images from 1987 to 2015 was obtained, and the LUCs were used on three watersheds (Shijing river, Chebei stream, and Shahe stream) to estimate the impact of urbanization on water flood. The results show that with the high accuracy LUC, the uncertainty in flood simulations are reduced effectively (for Shijing river, Chebei stream, and Shahe stream, the uncertainty reduced 15.5%, 17.3% and 19.8% respectively).
Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro
2003-04-15
Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using a specially designed hardware. Four custom arithmetic-processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003
NASA Astrophysics Data System (ADS)
Cavigelli, Lukas; Bernath, Dominic; Magno, Michele; Benini, Luca
2016-10-01
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classifications tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computation effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly with errors occurring only around the border of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, as a high-resolution commercial earth observation satellite, which is launched by Digital Global, provides panchromatic imagery of 0.31 m resolution. The positioning accuracy is less than 3.5 meter CE90 without ground control, which can use for large scale topographic mapping. This paper presented the block adjustment for WorldView-3 based on RPC model and achieved the accuracy of 1 : 2000 scale topographic mapping with few control points. On the base of stereo orientation result, this paper applied two kinds of image matching algorithm for DSM extraction: LQM and SGM. Finally, this paper compared the accuracy of the point cloud generated by the two image matching methods with the reference data which was acquired by an airborne laser scanner. The results showed that the RPC adjustment model of WorldView-3 image with small number of GCPs could satisfy the requirement of Chinese Surveying and Mapping regulations for 1 : 2000 scale topographic maps. And the point cloud result obtained through WorldView-3 stereo image matching had higher elevation accuracy, the RMS error of elevation for bare ground area is 0.45 m, while for buildings the accuracy can almost reach 1 meter.
Zhou, Tao; Li, Zhaofu; Pan, Jianjun
2018-01-27
This paper focuses on evaluating the ability and contribution of using backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification and comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size of the combination of all texture features was 9 × 9, and the optimal window size was different for each individual texture feature. For the four different feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; and the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy up to 91.55% and a kappa coefficient up to 0.8935, respectively. Among all combinations of Sentinel-1A-derived features, the combination of the four features had the best classification result. Multi-sensor urban land cover mapping obtained higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy compared to the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient up to 0.9889. When Sentinel-1A data was added to Hyperion images, the overall accuracy and kappa coefficient were increased by 4.01% and 0.0519, respectively.
Misawa, Masashi; Kudo, Shin-Ei; Mori, Yuichi; Takeda, Kenichi; Maeda, Yasuharu; Kataoka, Shinichi; Nakamura, Hiroki; Kudo, Toyoki; Wakamura, Kunihiko; Hayashi, Takemasa; Katagiri, Atsushi; Baba, Toshiyuki; Ishida, Fumio; Inoue, Haruhiro; Nimura, Yukitaka; Oda, Msahiro; Mori, Kensaku
2017-05-01
Real-time characterization of colorectal lesions during colonoscopy is important for reducing medical costs, given that the need for a pathological diagnosis can be omitted if the accuracy of the diagnostic modality is sufficiently high. However, it is sometimes difficult for community-based gastroenterologists to achieve the required level of diagnostic accuracy. In this regard, we developed a computer-aided diagnosis (CAD) system based on endocytoscopy (EC) to evaluate cellular, glandular, and vessel structure atypia in vivo. The purpose of this study was to compare the diagnostic ability and efficacy of this CAD system with the performances of human expert and trainee endoscopists. We developed a CAD system based on EC with narrow-band imaging that allowed microvascular evaluation without dye (ECV-CAD). The CAD algorithm was programmed based on texture analysis and provided a two-class diagnosis of neoplastic or non-neoplastic, with probabilities. We validated the diagnostic ability of the ECV-CAD system using 173 randomly selected EC images (49 non-neoplasms, 124 neoplasms). The images were evaluated by the CAD and by four expert endoscopists and three trainees. The diagnostic accuracies for distinguishing between neoplasms and non-neoplasms were calculated. ECV-CAD had higher overall diagnostic accuracy than trainees (87.8 vs 63.4%; [Formula: see text]), but similar to experts (87.8 vs 84.2%; [Formula: see text]). With regard to high-confidence cases, the overall accuracy of ECV-CAD was also higher than trainees (93.5 vs 71.7%; [Formula: see text]) and comparable to experts (93.5 vs 90.8%; [Formula: see text]). ECV-CAD showed better diagnostic accuracy than trainee endoscopists and was comparable to that of experts. ECV-CAD could thus be a powerful decision-making tool for less-experienced endoscopists.
How to select electrical end-use meters for proper measurement of DSM impact estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, M.
1994-12-31
Does metering actually provide higher accuracy impact estimates? The answer is sometimes yes, sometimes no. It depends on how the metered data will be used. DSM impact estimates can be achieved in a variety of ways, including engineering algorithms, modeling and statistical methods. Yet for all of these methods, impacts can be calculated as the difference in pre- and post-installation annual load shapes. Increasingly, end-use metering is being used to either adjust and calibrate a particular estimate method, or measure load shapes directly. It is therefore not surprising that metering has become synonymous with higher accuracy impact estimates. If meteredmore » data is used as a component in an estimating methodology, its relative contribution to accuracy can be analyzed through propagation of error or {open_quotes}POE{close_quotes} analysis. POE analysis is a framework which can be used to evaluate different metering options and their relative effects on cost and accuracy. If metered data is used to directly measure pre- and post-installation load shapes to calculate energy and demand impacts, then the accuracy of the whole metering process directly affects the accuracy of the impact estimate. This paper is devoted to the latter case, where the decision has been made to collect high-accuracy metered data of electrical energy and demand. The underlying assumption is that all meters can yield good results if applied within the scope of their limitations. The objective is to know the application, understand what meters are actually doing to measure and record power, and decide with confidence when a sophisticated meter is required, and when a less expensive type will suffice.« less
Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data
NASA Technical Reports Server (NTRS)
Larden, D. R.; Bender, P. L.
1983-01-01
The improvement in the orbit accuracy if high accuracy tracking data from a substantially larger number of ground stations is available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines by GPS geodetic receivers would be only about 1 cm. Previously announced in STAR as N83-14605
External validation of scoring systems in risk stratification of upper gastrointestinal bleeding.
Anchu, Anna Cherian; Mohsina, Subair; Sureshkumar, Sathasivam; Mahalakshmy, T; Kate, Vikram
2017-03-01
The aim of this study was to externally validate the four commonly used scoring systems in the risk stratification of patients with upper gastrointestinal bleed (UGIB). Patients of UGIB who underwent endoscopy within 24 h of presentation were stratified prospectively using the pre-endoscopy Rockall score (PRS) >0, complete Rockall score (CRS) >2, Glasgow Blatchford bleeding scores (GBS) >3, and modified GBS (m-GBS) >3 scores. Patients were followed up to 30 days. Prognostic accuracy of the scores was done by comparing areas under curve (AUC) in terms of overall risk stratification, re-bleeding, mortality, need for intervention, and length of hospitalization. One hundred and seventy-five patients were studied. All four scores performed better in the overall risk stratification on AUC [PRS = 0.566 (CI: 0.481-0.651; p-0.043)/CRS = 0.712 (CI: 0.634-0.790); p<0.001)/GBS = 0.810 (CI: 0.744-0.877; p->0.001); m-GBS = 0.802 (CI: 0.734-0.871; p<0.001)], whereas only CRS achieved significance in identifying re-bleed [AUC-0.679 (CI: 0.579-0.780; p = 0.003)]. All the scoring systems except PRS were found to be significantly better in detecting 30-day mortality with a high AUC (CRS = 0.798; p-0.042)/GBS = 0.833; p-0.023); m-GBS = 0.816; p-0.031). All four scores demonstrated significant accuracy in the risk stratification of non-variceal patients; however, only GBS and m-GBS were significant in variceal etiology. Higher cutoff scores achieved better sensitivity/specificity [RS > 0 (50/60.8), CRS > 1 (87.5/50.6), GBS > 7 (88.5/63.3), m-GBS > 7(82.3/72.6)] in the risk stratification. GBS and m-GBS appear to be more valid in risk stratification of UGIB patients in this region. Higher cutoff values achieved better predictive accuracy.
Bending stiffness of catheters and guide wires.
Wünsche, P; Werner, C; Bloss, P
2002-01-01
An important property of catheters and guide wires to assess their pushability behavior is their bending stiffness. To measure bending stiffness, a new bending module with a new clamping device was developed. This module can easily be mounted in commercially available tensile testing equipment, where bending force and deflection due to the bending force can be measured. To achieve high accuracy for the bending stiffness, the bending distance has to be measured with even higher accuracy by using a laser-scan micrometer. Measurement results of angiographic catheters and guide wires were presented and discussed. The bending stiffness shows a significant dependence on the angle of the test specimen's rotation around its length axis.
Training Deep Spiking Neural Networks Using Backpropagation.
Lee, Jun Haeng; Delbruck, Tobi; Pfeiffer, Michael
2016-01-01
Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are considered as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent with conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations.
Wearable technology and ECG processing for fall risk assessment, prevention and detection.
Melillo, Paolo; Castaldo, Rossana; Sannino, Giovanna; Orrico, Ada; de Pietro, Giuseppe; Pecchia, Leandro
2015-01-01
Falls represent one of the most common causes of injury-related morbidity and mortality in later life. Subjects with cardiovascular disorders (e.g., related to autonomic dysfunctions and postural hypotension) are at higher risk of falling. Autonomic dysfunctions increasing the risk of falling in the short and mid-term could be assessed by Heart Rate Variability (HRV) extracted by electrocardiograph (ECG). We developed three trials for assessing the usefulness of ECG monitoring using wearable devices for: risk assessment of falling in the next few weeks; prevention of imminent falls due to standing hypotension; and fall detection. Statistical and data-mining methods are adopted to develop classification and regression models, validated with the cross-validation approach. The first classifier based on HRV features enabled to identify future fallers among hypertensive patients with an accuracy of 72% (sensitivity: 51.1%, specificity: 80.2%). The regression model to predict falls due to orthostatic dropdown from HRV recorded before standing achieved an overall accuracy of 80% (sensitivity: 92%, specificity: 90%). Finally, the classifier to detect simulated falls using ECG achieved an accuracy of 77.3% (sensitivity: 81.8%, specificity: 72.7%). The evidence from these three studies showed that ECG monitoring and processing could achieve satisfactory performances compared to other system for risk assessment, fall prevention and detection. This is interesting as differently from other technologies actually employed to prevent falls, ECG is recommended for many other pathologies of later life and is more accepted by senior citizens.
Hu, Erzhong; Nosato, Hirokazu; Sakanashi, Hidenori; Murakawa, Masahiro
2013-01-01
Capsule endoscopy is a patient-friendly endoscopy broadly utilized in gastrointestinal examination. However, the efficacy of diagnosis is restricted by the large quantity of images. This paper presents a modified anomaly detection method, by which both known and unknown anomalies in capsule endoscopy images of small intestine are expected to be detected. To achieve this goal, this paper introduces feature extraction using a non-linear color conversion and Higher-order Local Auto Correlation (HLAC) Features, and makes use of image partition and subspace method for anomaly detection. Experiments are implemented among several major anomalies with combinations of proposed techniques. As the result, the proposed method achieved 91.7% and 100% detection accuracy for swelling and bleeding respectively, so that the effectiveness of proposed method is demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, T; Sheu, R; Todorov, B
2014-06-15
Purpose: To evaluate initial setup accuracy for stereotactic radiosurgery (SRS) between Brainlab frame-based and frameless immobilization system, also to discern the magnitude frameless system has on setup parameters. Methods: The correction shifts from the original setup were compared for total 157 SRS cranial treatments (69 frame-based vs. 88 frameless). All treatments were performed on a Novalis linac with ExacTrac positioning system. Localization box with isocenter overlay was used for initial setup and correction shift was determined by ExacTrac 6D auto-fusion to achieve submillimeter accuracy for treatment. For frameless treatments, mean time interval between simulation and treatment was 5.7 days (rangemore » 0–13). Pearson Chi-Square was used for univariate analysis. Results: The correctional radial shifts (mean±STD, median) for the frame and frameless system measured by ExacTrac were 1.2±1.2mm, 1.1mm and 3.1±3.3mm, 2.0mm, respectively. Treatments with frameless system had a radial shift >2mm more often than those with frames (51.1% vs. 2.9%; p<.0001). To achieve submillimeter accuracy, 85.5% frame-based treatments did not require shift and only 23.9% frameless treatment could succeed with initial setup. There was no statistical significant system offset observed in any direction for either system. For frameless treatments, those treated ≥ 3 days from simulation had statistically higher rates of radial shifts between 1–2mm and >2mm compared to patients treated in a shorter amount of time from simulation (34.3% and 56.7% vs. 28.6% and 33.3%, respectively; p=0.006). Conclusion: Although image-guided positioning system can also achieve submillimeter accuracy for frameless system, users should be cautious regarding the inherent uncertainty of its capability of immobilization. A proper quality assurance procedure for frameless mask manufacturing and a protocol for intra-fraction imaging verification will be crucial for frameless system. Time interval between simulation and treatment was influential to initial setup accuracy. A shorter time frame for frameless SRS treatment could be helpful in minimizing uncertainties in localization.« less
Park, Jinhee; Javier, Rios Jesus; Moon, Taesup; Kim, Youngwook
2016-11-24
Accurate classification of human aquatic activities using radar has a variety of potential applications such as rescue operations and border patrols. Nevertheless, the classification of activities on water using radar has not been extensively studied, unlike the case on dry ground, due to its unique challenge. Namely, not only is the radar cross section of a human on water small, but the micro-Doppler signatures are much noisier due to water drops and waves. In this paper, we first investigate whether discriminative signatures could be obtained for activities on water through a simulation study. Then, we show how we can effectively achieve high classification accuracy by applying deep convolutional neural networks (DCNN) directly to the spectrogram of real measurement data. From the five-fold cross-validation on our dataset, which consists of five aquatic activities, we report that the conventional feature-based scheme only achieves an accuracy of 45.1%. In contrast, the DCNN trained using only the collected data attains 66.7%, and the transfer learned DCNN, which takes a DCNN pre-trained on a RGB image dataset and fine-tunes the parameters using the collected data, achieves a much higher 80.3%, which is a significant performance boost.
NASA Astrophysics Data System (ADS)
Nemati, Maedeh; Shateri Najaf Abady, Ali Reza; Toghraie, Davood; Karimipour, Arash
2018-01-01
The incorporation of different equations of state into single-component multiphase lattice Boltzmann model is considered in this paper. The original pseudopotential model is first detailed, and several cubic equations of state, the Redlich-Kwong, Redlich-Kwong-Soave, and Peng-Robinson are then incorporated into the lattice Boltzmann model. A comparison of the numerical simulation achievements on the basis of density ratios and spurious currents is used for presentation of the details of phase separation in these non-ideal single-component systems. The paper demonstrates that the scheme for the inter-particle interaction force term as well as the force term incorporation method matters to achieve more accurate and stable results. The velocity shifting method is demonstrated as the force term incorporation method, among many, with accuracy and stability results. Kupershtokh scheme also makes it possible to achieve large density ratio (up to 104) and to reproduce the coexistence curve with high accuracy. Significant reduction of the spurious currents at vapor-liquid interface is another observation. High-density ratio and spurious current reduction resulted from the Redlich-Kwong-Soave and Peng-Robinson EOSs, in higher accordance with the Maxwell construction results.
Tu, Chengjian; Li, Jun; Sheng, Quanhu; Zhang, Ming; Qu, Jun
2014-04-04
Survey-scan-based label-free method have shown no compelling benefit over fragment ion (MS2)-based approaches when low-resolution mass spectrometry (MS) was used, the growing prevalence of high-resolution analyzers may have changed the game. This necessitates an updated, comparative investigation of these approaches for data acquired by high-resolution MS. Here, we compared survey scan-based (ion current, IC) and MS2-based abundance features including spectral-count (SpC) and MS2 total-ion-current (MS2-TIC), for quantitative analysis using various high-resolution LC/MS data sets. Key discoveries include: (i) study with seven different biological data sets revealed only IC achieved high reproducibility for lower-abundance proteins; (ii) evaluation with 5-replicate analyses of a yeast sample showed IC provided much higher quantitative precision and lower missing data; (iii) IC, SpC, and MS2-TIC all showed good quantitative linearity (R(2) > 0.99) over a >1000-fold concentration range; (iv) both MS2-TIC and IC showed good linear response to various protein loading amounts but not SpC; (v) quantification using a well-characterized CPTAC data set showed that IC exhibited markedly higher quantitative accuracy, higher sensitivity, and lower false-positives/false-negatives than both SpC and MS2-TIC. Therefore, IC achieved an overall superior performance than the MS2-based strategies in terms of reproducibility, missing data, quantitative dynamic range, quantitative accuracy, and biomarker discovery.
2015-01-01
Survey-scan-based label-free method have shown no compelling benefit over fragment ion (MS2)-based approaches when low-resolution mass spectrometry (MS) was used, the growing prevalence of high-resolution analyzers may have changed the game. This necessitates an updated, comparative investigation of these approaches for data acquired by high-resolution MS. Here, we compared survey scan-based (ion current, IC) and MS2-based abundance features including spectral-count (SpC) and MS2 total-ion-current (MS2-TIC), for quantitative analysis using various high-resolution LC/MS data sets. Key discoveries include: (i) study with seven different biological data sets revealed only IC achieved high reproducibility for lower-abundance proteins; (ii) evaluation with 5-replicate analyses of a yeast sample showed IC provided much higher quantitative precision and lower missing data; (iii) IC, SpC, and MS2-TIC all showed good quantitative linearity (R2 > 0.99) over a >1000-fold concentration range; (iv) both MS2-TIC and IC showed good linear response to various protein loading amounts but not SpC; (v) quantification using a well-characterized CPTAC data set showed that IC exhibited markedly higher quantitative accuracy, higher sensitivity, and lower false-positives/false-negatives than both SpC and MS2-TIC. Therefore, IC achieved an overall superior performance than the MS2-based strategies in terms of reproducibility, missing data, quantitative dynamic range, quantitative accuracy, and biomarker discovery. PMID:24635752
Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit
2017-06-01
Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we evaluate imputation accuracy achieved with reference panels from diverse populations with a population-specific high-coverage (30 ×) whole-genome sequencing (WGS) based reference panel, comprising of 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants was significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.
Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit
2017-01-01
Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we evaluate imputation accuracy achieved with reference panels from diverse populations with a population-specific high-coverage (30 ×) whole-genome sequencing (WGS) based reference panel, comprising of 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants was significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies. PMID:28401899
On-chip magnetically actuated robot with ultrasonic vibration for single cell manipulations.
Hagiwara, Masaya; Kawahara, Tomohiro; Yamanishi, Yoko; Masuda, Taisuke; Feng, Lin; Arai, Fumihito
2011-06-21
This paper presents an innovative driving method for an on-chip robot actuated by permanent magnets in a microfluidic chip. A piezoelectric ceramic is applied to induce ultrasonic vibration to the microfluidic chip and the high-frequency vibration reduces the effective friction on the MMT significantly. As a result, we achieved 1.1 micrometre positioning accuracy of the microrobot, which is 100 times higher accuracy than without vibration. The response speed is also improved and the microrobot can be actuated with a speed of 5.5 mm s(-1) in 3 degrees of freedom. The novelty of the ultrasonic vibration appears in the output force as well. Contrary to the reduction of friction on the microrobot, the output force increased twice as much by the ultrasonic vibration. Using this high accuracy, high speed, and high power microrobot, swine oocyte manipulations are presented in a microfluidic chip.
NASA Technical Reports Server (NTRS)
Fatemi, Emad; Jerome, Joseph; Osher, Stanley
1989-01-01
A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.
High-precision GNSS ocean positioning with BeiDou short-message communication
NASA Astrophysics Data System (ADS)
Li, Bofeng; Zhang, Zhiteng; Zang, Nan; Wang, Siyao
2018-04-01
The current popular GNSS RTK technique would be not applicable on ocean due to the limited communication access for transmitting differential corrections. A new technique is proposed for high-precision ocean RTK, referred to as ORTK, where the corrections are transmitted by employing the function of BeiDou satellite short-message communication (SMC). To overcome the limitation of narrow bandwidth of BeiDou SMC, a new strategy of simplifying and encoding corrections is proposed instead of standard differential corrections, which reduces the single-epoch corrections from more than 1000 to less than 300 bytes. To solve the problems of correction delays, cycle slips, blunders and abnormal epochs over ultra-long baseline ORTK, a series of powerful algorithms were designed at the user-end software for achieving the stable and precise kinematic solutions on far ocean applications. The results from two long baselines of 240 and 420 km and real ocean experiments reveal that the kinematic solutions with horizontal accuracy of 5 cm and vertical accuracy of better than 15 cm are achievable by convergence time of 3-10 min. Compared to commercial ocean PPP with satellite telecommunication, ORTK is of much cheaper expense, higher accuracy and shorter convergence. It will be very prospective in many location-based ocean services.
NASA Astrophysics Data System (ADS)
Jayasekare, Ajith S.; Wickramasuriya, Rohan; Namazi-Rad, Mohammad-Reza; Perez, Pascal; Singh, Gaurav
2017-07-01
A continuous update of building information is necessary in today's urban planning. Digital images acquired by remote sensing platforms at appropriate spatial and temporal resolutions provide an excellent data source to achieve this. In particular, high-resolution satellite images are often used to retrieve objects such as rooftops using feature extraction. However, high-resolution images acquired over built-up areas are associated with noises such as shadows that reduce the accuracy of feature extraction. Feature extraction heavily relies on the reflectance purity of objects, which is difficult to perfect in complex urban landscapes. An attempt was made to increase the reflectance purity of building rooftops affected by shadows. In addition to the multispectral (MS) image, derivatives thereof namely, normalized difference vegetation index and principle component (PC) images were incorporated in generating the probability image. This hybrid probability image generation ensured that the effect of shadows on rooftop extraction, particularly on light-colored roofs, is largely eliminated. The PC image was also used for image segmentation, which further increased the accuracy compared to segmentation performed on an MS image. Results show that the presented method can achieve higher rooftop extraction accuracy (70.4%) in vegetation-rich urban areas compared to traditional methods.
A Hybrid Brain-Computer Interface Based on the Fusion of P300 and SSVEP Scores.
Yin, Erwei; Zeyl, Timothy; Saab, Rami; Chau, Tom; Hu, Dewen; Zhou, Zongtan
2015-07-01
The present study proposes a hybrid brain-computer interface (BCI) with 64 selectable items based on the fusion of P300 and steady-state visually evoked potential (SSVEP) brain signals. With this approach, row/column (RC) P300 and two-step SSVEP paradigms were integrated to create two hybrid paradigms, which we denote as the double RC (DRC) and 4-D spellers. In each hybrid paradigm, the target is simultaneously detected based on both P300 and SSVEP potentials as measured by the electroencephalogram. We further proposed a maximum-probability estimation (MPE) fusion approach to combine the P300 and SSVEP on a score level and compared this approach to other approaches based on linear discriminant analysis, a naïve Bayes classifier, and support vector machines. The experimental results obtained from thirteen participants indicated that the 4-D hybrid paradigm outperformed the DRC paradigm and that the MPE fusion achieved higher accuracy compared with the other approaches. Importantly, 12 of the 13 participants, using the 4-D paradigm achieved an accuracy of over 90% and the average accuracy was 95.18%. These promising results suggest that the proposed hybrid BCI system could be used in the design of a high-performance BCI-based keyboard.
IEEE 802.15.4 ZigBee-Based Time-of-Arrival Estimation for Wireless Sensor Networks.
Cheon, Jeonghyeon; Hwang, Hyunsu; Kim, Dongsun; Jung, Yunho
2016-02-05
Precise time-of-arrival (TOA) estimation is one of the most important techniques in RF-based positioning systems that use wireless sensor networks (WSNs). Because the accuracy of TOA estimation is proportional to the RF signal bandwidth, using broad bandwidth is the most fundamental approach for achieving higher accuracy. Hence, ultra-wide-band (UWB) systems with a bandwidth of 500 MHz are commonly used. However, wireless systems with broad bandwidth suffer from the disadvantages of high complexity and high power consumption. Therefore, it is difficult to employ such systems in various WSN applications. In this paper, we present a precise time-of-arrival (TOA) estimation algorithm using an IEEE 802.15.4 ZigBee system with a narrow bandwidth of 2 MHz. In order to overcome the lack of bandwidth, the proposed algorithm estimates the fractional TOA within the sampling interval. Simulation results show that the proposed TOA estimation algorithm provides an accuracy of 0.5 m at a signal-to-noise ratio (SNR) of 8 dB and achieves an SNR gain of 5 dB as compared with the existing algorithm. In addition, experimental results indicate that the proposed algorithm provides accurate TOA estimation in a real indoor environment.
Prediction of drug synergy in cancer using ensemble-based machine learning techniques
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder
2018-04-01
Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic successes. Examination of different drug-drug interaction can be done by drug synergy score. It needs efficient regression-based machine learning approaches to minimize the prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to realize requirement as mentioned above. However, these techniques individually do not provide significant accuracy in drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented by considering the drug synergy data. Based on the accuracy of each model, four techniques with high accuracy are selected to develop ensemble-based machine learning model. These models are Random forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning method (GFS.GCCL), Adaptive-Network-Based Fuzzy Inference System (ANFIS) and Dynamic Evolving Neural-Fuzzy Inference System method (DENFIS). Ensembling is achieved by evaluating the biased weighted aggregation (i.e. adding more weights to the model with a higher prediction score) of predicted data by selected models. The proposed and existing machine learning techniques have been evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms others in terms of accuracy, root mean square error and coefficient of correlation.
Localization accuracy of sphere fiducials in computed tomography images
NASA Astrophysics Data System (ADS)
Kobler, Jan-Philipp; Díaz Díaz, Jesus; Fitzpatrick, J. Michael; Lexow, G. Jakob; Majdani, Omid; Ortmaier, Tobias
2014-03-01
In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are used as landmarks during intervention planning as well. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. Therefore, it is desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms as well as two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that, generally, the achievable localization accuracy in CBCT image data is significantly higher compared to MSCT imaging. The lowest FLEs (approx. 40 μm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross correlation for localization, and interpolating the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable considering the optimization of future microstereotactic frame prototypes as well as the operative workflow.
1992-09-01
deformations in underground mines has been developed in Canada in cooperation with the Canada Centre for Mineral and Energy Technology ( CANMET ). The... technological developments in both geodetic and geotechnical instrumentation, at a cost one may achieve almost any, practically needed, instrumental...Due to the ever growing technological progress in all fields of engineering and, connected with it, the growing demand for higher accuracy, efficiency
NASA Astrophysics Data System (ADS)
Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi
2015-02-01
Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a higher accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under the strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate angular rate using star blurred images by employing a mission telescope to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization given the strict constraints possessed by small satellites. The research studied the relationship between estimation accuracy and parameters used to achieve an attitude rate estimation, which has a precision greater than 1 × 10-6 rad/s. The method can be applied to all attitude sensors, which use optics systems such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems that are expected to arise with real small satellites by performing numerical simulations.
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor levels of accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that produces an excellent balance between cost, matching accuracy and real-time performance, for power line inspection using UAV. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms were implemented using the Spartan 6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
NASA Astrophysics Data System (ADS)
Mao, Chao; Chen, Shou
2017-01-01
According to the traditional entropy value method still have low evaluation accuracy when evaluating the performance of mining projects, a performance evaluation model of mineral project founded on improved entropy is proposed. First establish a new weight assignment model founded on compatible matrix analysis of analytic hierarchy process (AHP) and entropy value method, when the compatibility matrix analysis to achieve consistency requirements, if it has differences between subjective weights and objective weights, moderately adjust both proportions, then on this basis, the fuzzy evaluation matrix for performance evaluation. The simulation experiments show that, compared with traditional entropy and compatible matrix analysis method, the proposed performance evaluation model of mining project based on improved entropy value method has higher accuracy assessment.
Comparison of Classification Methods for P300 Brain-Computer Interface on Disabled Subjects
Manyakov, Nikolay V.; Chumerin, Nikolay; Combaz, Adrien; Van Hulle, Marc M.
2011-01-01
We report on tests with a mind typing paradigm based on a P300 brain-computer interface (BCI) on a group of amyotrophic lateral sclerosis (ALS), middle cerebral artery (MCA) stroke, and subarachnoid hemorrhage (SAH) patients, suffering from motor and speech disabilities. We investigate the achieved typing accuracy given the individual patient's disorder, and how it correlates with the type of classifier used. We considered 7 types of classifiers, linear as well as nonlinear ones, and found that, overall, one type of linear classifier yielded a higher classification accuracy. In addition to the selection of the classifier, we also suggest and discuss a number of recommendations to be considered when building a P300-based typing system for disabled subjects. PMID:21941530
Generic decoding of seen and imagined objects using hierarchical visual features.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-05-22
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
NASA Technical Reports Server (NTRS)
Fagan, Matthew E.; Defries, Ruth S.; Sesnie, Steven E.; Arroyo-Mora, J. Pablo; Soto, Carlomagno; Singh, Aditya; Townsend, Philip A.; Chazdon, Robin L.
2015-01-01
An efficient means to map tree plantations is needed to detect tropical land use change and evaluate reforestation projects. To analyze recent tree plantation expansion in northeastern Costa Rica, we examined the potential of combining moderate-resolution hyperspectral imagery (2005 HyMap mosaic) with multitemporal, multispectral data (Landsat) to accurately classify (1) general forest types and (2) tree plantations by species composition. Following a linear discriminant analysis to reduce data dimensionality, we compared four Random Forest classification models: hyperspectral data (HD) alone; HD plus interannual spectral metrics; HD plus a multitemporal forest regrowth classification; and all three models combined. The fourth, combined model achieved overall accuracy of 88.5%. Adding multitemporal data significantly improved classification accuracy (p less than 0.0001) of all forest types, although the effect on tree plantation accuracy was modest. The hyperspectral data alone classified six species of tree plantations with 75% to 93% producer's accuracy; adding multitemporal spectral data increased accuracy only for two species with dense canopies. Non-native tree species had higher classification accuracy overall and made up the majority of tree plantations in this landscape. Our results indicate that combining occasionally acquired hyperspectral data with widely available multitemporal satellite imagery enhances mapping and monitoring of reforestation in tropical landscapes.
Zhe, Shandian; Xu, Zenglin; Qi, Yuan; Yu, Peng
2014-01-01
A key step for Alzheimer's disease (AD) study is to identify associations between genetic variations and intermediate phenotypes (e.g., brain structures). At the same time, it is crucial to develop a noninvasive means for AD diagnosis. Although these two tasks-association discovery and disease diagnosis-have been treated separately by a variety of approaches, they are tightly coupled due to their common biological basis. We hypothesize that the two tasks can potentially benefit each other by a joint analysis, because (i) the association study discovers correlated biomarkers from different data sources, which may help improve diagnosis accuracy, and (ii) the disease status may help identify disease-sensitive associations between genetic variations and MRI features. Based on this hypothesis, we present a new sparse Bayesian approach for joint association study and disease diagnosis. In this approach, common latent features are extracted from different data sources based on sparse projection matrices and used to predict multiple disease severity levels based on Gaussian process ordinal regression; in return, the disease status is used to guide the discovery of relationships between the data sources. The sparse projection matrices not only reveal the associations but also select groups of biomarkers related to AD. To learn the model from data, we develop an efficient variational expectation maximization algorithm. Simulation results demonstrate that our approach achieves higher accuracy in both predicting ordinal labels and discovering associations between data sources than alternative methods. We apply our approach to an imaging genetics dataset of AD. Our joint analysis approach not only identifies meaningful and interesting associations between genetic variations, brain structures, and AD status, but also achieves significantly higher accuracy for predicting ordinal AD stages than the competing methods.
ERIC Educational Resources Information Center
Caskie, Grace I. L.; Sutton, MaryAnn C.; Eckhardt, Amanda G.
2014-01-01
Assessments of college academic achievement tend to rely on self-reported GPA values, yet evidence is limited regarding the accuracy of those values. With a sample of 194 undergraduate college students, the present study examined whether accuracy of self-reported GPA differed based on level of academic performance or level of academic…
Wong, Puisan; Tsz-Tin Leung, Carrie
2018-05-17
Previous studies reported that children acquire Cantonese tones before 3 years of age, supporting the assumption in models of phonological development that suprasegmental features are acquired rapidly and early in children. Yet, recent research found a large disparity in the age of Cantonese tone acquisition. This study investigated Cantonese tone development in 4- to 6-year-old children. Forty-eight 4- to 6-year-old Cantonese-speaking children and 28 mothers of the children labeled 30 pictures representing familiar words in the 6 tones in a picture-naming task and identified pictures representing words in different Cantonese tones in a picture-pointing task. To control for lexical biases in tone assessment, tone productions were low-pass filtered to eliminate lexical information. Five judges categorized the tones in filtered stimuli. Tone production accuracy, tone perception accuracy, and correlation between tone production and perception accuracy were examined. Children did not start to produce adultlike tones until 5 and 6 years of age. Four-year-olds produced none of the tones with adultlike accuracy. Five- and 6-year-olds attained adultlike productions in 2 (T5 and T6) to 3 (T4, T5, and T6) tones, respectively. Children made better progress in tone perception and achieved higher accuracy in perception than in production. However, children in all age groups perceived none of the tones as accurately as adults, except that T1 was perceived with adultlike accuracy by 6-year-olds. Only weak association was found between children's tone perception and production accuracy. Contradicting to the long-held assumption that children acquire lexical tone rapidly and early before the mastery of segmentals, this study found that 4- to 6-year-old children have not mastered the perception or production of the full set of Cantonese tones in familiar monosyllabic words. Larger development was found in children's tone perception than tone production. The higher tone perception accuracy but weak correlation between tone perception and production abilities in children suggested that tone perception accuracy is not sufficient for children's tone production accuracy. The findings have clinical and theoretical implications.
Seebauer, Sebastian; Fleiß, Jürgen; Schweighart, Markus
2016-01-01
Studies on environmental behavior commonly assume single respondents to represent their entire household or employ proxy-reporting, where participants answer for other household members. It is contested whether these practices yield valid results. Therefore, we interviewed 84 couples, wherein both household members provided self- and proxy-reports for their partner. For use of electrical household appliances, consumption of hot water, space heating, everyday mobility, and environmental values, many variables fail to achieve criteria for validity. Consistency (agreement between self-reports of household members) is higher if behaviors are undertaken jointly or negotiated between partners. Accuracy (agreement of proxy-reports with corresponding self-reports) is higher for routine behaviors and for behaviors easily observable by the partner. Overall, indices perform better than items on single behaviors. We caution against employing individual responses in place of the entire household. Interventions for energy conservation should approach the specific person undertaking the target behavior. PMID:28670000
Classifying epileptic EEG signals with delay permutation entropy and Multi-Scale K-means.
Zhu, Guohun; Li, Yan; Wen, Peng Paul; Wang, Shuaifang
2015-01-01
Most epileptic EEG classification algorithms are supervised and require large training datasets, that hinder their use in real time applications. This chapter proposes an unsupervised Multi-Scale K-means (MSK-means) MSK-means algorithm to distinguish epileptic EEG signals and identify epileptic zones. The random initialization of the K-means algorithm can lead to wrong clusters. Based on the characteristics of EEGs, the MSK-means MSK-means algorithm initializes the coarse-scale centroid of a cluster with a suitable scale factor. In this chapter, the MSK-means algorithm is proved theoretically superior to the K-means algorithm on efficiency. In addition, three classifiers: the K-means, MSK-means MSK-means and support vector machine (SVM), are used to identify seizure and localize epileptogenic zone using delay permutation entropy features. The experimental results demonstrate that identifying seizure with the MSK-means algorithm and delay permutation entropy achieves 4. 7 % higher accuracy than that of K-means, and 0. 7 % higher accuracy than that of the SVM.
Wu, Jianfa; Peng, Dahao; Li, Zhuping; Zhao, Li; Ling, Huanzhang
2015-01-01
To effectively and accurately detect and classify network intrusion data, this paper introduces a general regression neural network (GRNN) based on the artificial immune algorithm with elitist strategies (AIAE). The elitist archive and elitist crossover were combined with the artificial immune algorithm (AIA) to produce the AIAE-GRNN algorithm, with the aim of improving its adaptivity and accuracy. In this paper, the mean square errors (MSEs) were considered the affinity function. The AIAE was used to optimize the smooth factors of the GRNN; then, the optimal smooth factor was solved and substituted into the trained GRNN. Thus, the intrusive data were classified. The paper selected a GRNN that was separately optimized using a genetic algorithm (GA), particle swarm optimization (PSO), and fuzzy C-mean clustering (FCM) to enable a comparison of these approaches. As shown in the results, the AIAE-GRNN achieves a higher classification accuracy than PSO-GRNN, but the running time of AIAE-GRNN is long, which was proved first. FCM and GA-GRNN were eliminated because of their deficiencies in terms of accuracy and convergence. To improve the running speed, the paper adopted principal component analysis (PCA) to reduce the dimensions of the intrusive data. With the reduction in dimensionality, the PCA-AIAE-GRNN decreases in accuracy less and has better convergence than the PCA-PSO-GRNN, and the running speed of the PCA-AIAE-GRNN was relatively improved. The experimental results show that the AIAE-GRNN has a higher robustness and accuracy than the other algorithms considered and can thus be used to classify the intrusive data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Ke; Li Yanqiu; Wang Hai
Characterization of measurement accuracy of the phase-shifting point diffraction interferometer (PS/PDI) is usually performed by two-pinhole null test. In this procedure, the geometrical coma and detector tilt astigmatism systematic errors are almost one or two magnitude higher than the desired accuracy of PS/PDI. These errors must be accurately removed from the null test result to achieve high accuracy. Published calibration methods, which can remove the geometrical coma error successfully, have some limitations in calibrating the astigmatism error. In this paper, we propose a method to simultaneously calibrate the geometrical coma and detector tilt astigmatism errors in PS/PDI null test. Basedmore » on the measurement results obtained from two pinhole pairs in orthogonal directions, the method utilizes the orthogonal and rotational symmetry properties of Zernike polynomials over unit circle to calculate the systematic errors introduced in null test of PS/PDI. The experiment using PS/PDI operated at visible light is performed to verify the method. The results show that the method is effective in isolating the systematic errors of PS/PDI and the measurement accuracy of the calibrated PS/PDI is 0.0088{lambda} rms ({lambda}= 632.8 nm).« less
Zourmand, Alireza; Ting, Hua-Nong; Mirhassani, Seyed Mostafa
2013-03-01
Speech is one of the prevalent communication mediums for humans. Identifying the gender of a child speaker based on his/her speech is crucial in telecommunication and speech therapy. This article investigates the use of fundamental and formant frequencies from sustained vowel phonation to distinguish the gender of Malay children aged between 7 and 12 years. The Euclidean minimum distance and multilayer perceptron were used to classify the gender of 360 Malay children based on different combinations of fundamental and formant frequencies (F0, F1, F2, and F3). The Euclidean minimum distance with normalized frequency data achieved a classification accuracy of 79.44%, which was higher than that of the nonnormalized frequency data. Age-dependent modeling was used to improve the accuracy of gender classification. The Euclidean distance method obtained 84.17% based on the optimal classification accuracy for all age groups. The accuracy was further increased to 99.81% using multilayer perceptron based on mel-frequency cepstral coefficients. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
A structural SVM approach for reference parsing.
Zhang, Xiaoli; Zou, Jie; Le, Daniel X; Thoma, George R
2011-06-09
Automated extraction of bibliographic data, such as article titles, author names, abstracts, and references is essential to the affordable creation of large citation databases. References, typically appearing at the end of journal articles, can also provide valuable information for extracting other bibliographic data. Therefore, parsing individual reference to extract author, title, journal, year, etc. is sometimes a necessary preprocessing step in building citation-indexing systems. The regular structure in references enables us to consider reference parsing a sequence learning problem and to study structural Support Vector Machine (structural SVM), a newly developed structured learning algorithm on parsing references. In this study, we implemented structural SVM and used two types of contextual features to compare structural SVM with conventional SVM. Both methods achieve above 98% token classification accuracy and above 95% overall chunk-level accuracy for reference parsing. We also compared SVM and structural SVM to Conditional Random Field (CRF). The experimental results show that structural SVM and CRF achieve similar accuracies at token- and chunk-levels. When only basic observation features are used for each token, structural SVM achieves higher performance compared to SVM since it utilizes the contextual label features. However, when the contextual observation features from neighboring tokens are combined, SVM performance improves greatly, and is close to that of structural SVM after adding the second order contextual observation features. The comparison of these two methods with CRF using the same set of binary features show that both structural SVM and CRF perform better than SVM, indicating their stronger sequence learning ability in reference parsing.
Shcherbina, Anna; Mattsson, C. Mikael; Waggott, Daryl; Salisbury, Heidi; Christle, Jeffrey W.; Hastie, Trevor; Wheeler, Matthew T.; Ashley, Euan A.
2017-01-01
The ability to measure physical activity through wrist-worn devices provides an opportunity for cardiovascular medicine. However, the accuracy of commercial devices is largely unknown. The aim of this work is to assess the accuracy of seven commercially available wrist-worn devices in estimating heart rate (HR) and energy expenditure (EE) and to propose a wearable sensor evaluation framework. We evaluated the Apple Watch, Basis Peak, Fitbit Surge, Microsoft Band, Mio Alpha 2, PulseOn, and Samsung Gear S2. Participants wore devices while being simultaneously assessed with continuous telemetry and indirect calorimetry while sitting, walking, running, and cycling. Sixty volunteers (29 male, 31 female, age 38 ± 11 years) of diverse age, height, weight, skin tone, and fitness level were selected. Error in HR and EE was computed for each subject/device/activity combination. Devices reported the lowest error for cycling and the highest for walking. Device error was higher for males, greater body mass index, darker skin tone, and walking. Six of the devices achieved a median error for HR below 5% during cycling. No device achieved an error in EE below 20 percent. The Apple Watch achieved the lowest overall error in both HR and EE, while the Samsung Gear S2 reported the highest. In conclusion, most wrist-worn devices adequately measure HR in laboratory-based activities, but poorly estimate EE, suggesting caution in the use of EE measurements as part of health improvement programs. We propose reference standards for the validation of consumer health devices (http://precision.stanford.edu/). PMID:28538708
Shcherbina, Anna; Mattsson, C Mikael; Waggott, Daryl; Salisbury, Heidi; Christle, Jeffrey W; Hastie, Trevor; Wheeler, Matthew T; Ashley, Euan A
2017-05-24
The ability to measure physical activity through wrist-worn devices provides an opportunity for cardiovascular medicine. However, the accuracy of commercial devices is largely unknown. The aim of this work is to assess the accuracy of seven commercially available wrist-worn devices in estimating heart rate (HR) and energy expenditure (EE) and to propose a wearable sensor evaluation framework. We evaluated the Apple Watch, Basis Peak, Fitbit Surge, Microsoft Band, Mio Alpha 2, PulseOn, and Samsung Gear S2. Participants wore devices while being simultaneously assessed with continuous telemetry and indirect calorimetry while sitting, walking, running, and cycling. Sixty volunteers (29 male, 31 female, age 38 ± 11 years) of diverse age, height, weight, skin tone, and fitness level were selected. Error in HR and EE was computed for each subject/device/activity combination. Devices reported the lowest error for cycling and the highest for walking. Device error was higher for males, greater body mass index, darker skin tone, and walking. Six of the devices achieved a median error for HR below 5% during cycling. No device achieved an error in EE below 20 percent. The Apple Watch achieved the lowest overall error in both HR and EE, while the Samsung Gear S2 reported the highest. In conclusion, most wrist-worn devices adequately measure HR in laboratory-based activities, but poorly estimate EE, suggesting caution in the use of EE measurements as part of health improvement programs. We propose reference standards for the validation of consumer health devices (http://precision.stanford.edu/).
Solution algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Whitaker, D. L.; Slack, David C.; Walters, Robert W.
1990-01-01
The objective of the study was to analyze implicit techniques employed in structured grid algorithms for solving two-dimensional Euler equations and extend them to unstructured solvers in order to accelerate convergence rates. A comparison is made between nine different algorithms for both first-order and second-order accurate solutions. Higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The discussion is illustrated by results for flow over a transonic circular arc.
A Handheld Open-Field Infant Keratometer (An American Ophthalmological Society Thesis)
Miller, Joseph M.
2010-01-01
Purpose: To design and evaluate a new infant keratometer that incorporates an unobstructed view of the infant with both eyes (open-field design). Methods: The design of the open-field infant keratometer is presented, and details of its construction are given. The design incorporates a single-ring keratoscope for measurement of corneal astigmatism over a 4-mm region of the cornea and includes a rectangular grid target concentric within the ring to allow for the study of higher-order aberrations of the eye. In order to calibrate the lens and imaging system, a novel telecentric test object was constructed and used. The system was bench calibrated against steel ball bearings of known dimensions and evaluated for accuracy while being used in handheld mode in a group of 16 adult cooperative subjects. It was then evaluated for testability in a group of 10 infants and toddlers. Results: Results indicate that while the device achieved the goal of creating an open-field instrument containing a single-ring keratoscope with a concentric grid array for the study of higher-order aberrations, additional work is required to establish better control of the vertex distance. Conclusion: The handheld open-field infant keratometer demonstrates testability suitable for the study of infant corneal astigmatism. Use of collimated light sources in future iterations of the design must be incorporated in order to achieve the accuracy required for clinical investigation. PMID:21212850
A handheld open-field infant keratometer (an american ophthalmological society thesis).
Miller, Joseph M
2010-12-01
To design and evaluate a new infant keratometer that incorporates an unobstructed view of the infant with both eyes (open-field design). The design of the open-field infant keratometer is presented, and details of its construction are given. The design incorporates a single-ring keratoscope for measurement of corneal astigmatism over a 4-mm region of the cornea and includes a rectangular grid target concentric within the ring to allow for the study of higher-order aberrations of the eye. In order to calibrate the lens and imaging system, a novel telecentric test object was constructed and used. The system was bench calibrated against steel ball bearings of known dimensions and evaluated for accuracy while being used in handheld mode in a group of 16 adult cooperative subjects. It was then evaluated for testability in a group of 10 infants and toddlers. Results indicate that while the device achieved the goal of creating an open-field instrument containing a single-ring keratoscope with a concentric grid array for the study of higher-order aberrations, additional work is required to establish better control of the vertex distance. The handheld open-field infant keratometer demonstrates testability suitable for the study of infant corneal astigmatism. Use of collimated light sources in future iterations of the design must be incorporated in order to achieve the accuracy required for clinical investigation.
Recent developments in heterodyne laser interferometry at Harbin Institute of Technology
NASA Astrophysics Data System (ADS)
Hu, P. C.; Tan, J. B. B.; Yang, H. X. X.; Fu, H. J. J.; Wang, Q.
2013-01-01
In order to fulfill the requirements for high-resolution and high-precision heterodyne interferometric technologies and instruments, the laser interferometry group of HIT has developed some novel techniques for high-resolution and high-precision heterodyne interferometers, such as high accuracy laser frequency stabilization, dynamic sub-nanometer resolution phase interpolation and dynamic nonlinearity measurement. Based on a novel lock point correction method and an asymmetric thermal structure, the frequency stabilized laser achieves a long term stability of 1.2×10-8, and it can be steadily stabilized even in the air flowing up to 1 m/s. In order to achieve dynamic sub-nanometer resolution of laser heterodyne interferometers, a novel phase interpolation method based on digital delay line is proposed. Experimental results show that, the proposed 0.62 nm, phase interpolator built with a 64 multiple PLL and an 8-tap digital delay line achieves a static accuracy better than 0.31nm and a dynamic accuracy better than 0.62 nm over the velocity ranging from -2 m/s to 2 m/s. Meanwhile, an accuracy beam polarization measuring setup is proposed to check and ensure the light's polarization state of the dual frequency laser head, and a dynamic optical nonlinearity measuring setup is built to measure the optical nonlinearity of the heterodyne system accurately and quickly. Analysis and experimental results show that, the beam polarization measuring setup can achieve an accuracy of 0.03° in ellipticity angles and an accuracy of 0.04° in the non-orthogonality angle respectively, and the optical nonlinearity measuring setup can achieve an accuracy of 0.13°.
800 C Silicon Carbide (SiC) Pressure Sensors for Engine Ground Testing
NASA Technical Reports Server (NTRS)
Okojie, Robert S.
2016-01-01
MEMS-based 4H-SiC piezoresistive pressure sensors have been demonstrated at 800 C, leading to the discovery of strain sensitivity recovery with increasing temperatures above 400 C, eventually achieving up to, or near, 100 recovery of the room temperature values at 800 C. This result will allow the insertion of highly sensitive pressure sensors closer to jet, rocket, and hypersonic engine combustion chambers to improve the quantification accuracy of combustor dynamics, performance, and increase safety margin. Also, by operating at higher temperature and locating closer to the combustion chamber, reduction of the length (weight) of pressure tubes that are currently used will be achieved. This will result in reduced costlb to access space.
Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output
NASA Astrophysics Data System (ADS)
Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu
2015-07-01
Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) can be applied in high-speed scenarios such as aircraft automatic landing, precise agriculture and intelligent vehicle. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, is not able to obtain a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier phase observation equations of the two receivers at different epochs with short baseline. The ephemeris error and atmosphere delay are the possible main error sources on positioning accuracy in this model, and they are analyzed theoretically. In a short DLTTD and during a period of quiet ionosphere activity, the main error sources decreasing positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integration of satellite velocity error which increase linearly along with DLTTD. The cycle slip of asynchronous double-differencing carrier phase is detected by TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can deal with synchronous observation model (SOM) and achieve precise positioning solution with synchronous observations as well, since the SOM is only a specific case of AOM. The proposed method not only can reduce the cost of data collection and transmission, but can also support the mobile phone network data link transfer mode for the data of the reference receiver. This method can avoid data synchronizing process besides ambiguity initialization step, which is very convenient for real-time navigation of vehicles. The static and kinematic experiment results show that this method achieves 20 Hz or even higher rate output in real time. The ARTK positioning accuracy is better and more robust than the combination of phase difference over time (PDOT) and SRTK method at a high rate. The ARTK positioning accuracy is equivalent to SRTK solution when the DLTTD is 0.5 s, and centimeter level accuracy can be achieved even when DLTTD is 15 s.
Pan, Jianjun
2018-01-01
This paper focuses on evaluating the ability and contribution of using backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification and comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size of the combination of all texture features was 9 × 9, and the optimal window size was different for each individual texture feature. For the four different feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; and the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy up to 91.55% and a kappa coefficient up to 0.8935, respectively. Among all combinations of Sentinel-1A-derived features, the combination of the four features had the best classification result. Multi-sensor urban land cover mapping obtained higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy compared to the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient up to 0.9889. When Sentinel-1A data was added to Hyperion images, the overall accuracy and kappa coefficient were increased by 4.01% and 0.0519, respectively. PMID:29382073
Continuous decoding of human grasp kinematics using epidural and subdural signals
NASA Astrophysics Data System (ADS)
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-02-01
Objective. Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces. Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials (EFPs). Approach. We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as with EFPs, with both standard- and high-resolution electrode arrays. Main results. In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either EGoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean ± SD grasp aperture variance accounted for was 0.54 ± 0.05 across all subjects, 0.75 ± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7-20 Hz and 70-115 Hz spectral bands contained the most information about grasp kinematics, with the 70-115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance. To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes. Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface.
Continuous decoding of human grasp kinematics using epidural and subdural signals
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-01-01
Objective Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces (BMIs). Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are: accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials. Approach We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as with epidural field potentials (EFPs), with both standard- and high-resolution electrode arrays. Main results In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either EGoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean± SD grasp aperture variance accounted for was 0.54± 0.05 across all subjects, 0.75± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7–20 Hz and 70–115 Hz spectral bands contained the most information about grasp kinematics, with the 70–115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes. Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface. PMID:27900947
Wong, Chung-Ki; Luo, Qingfei; Zotev, Vadim; Phillips, Raquel; Chan, Kam Wai Clifford; Bodurka, Jerzy
2018-03-31
In simultaneous EEG-fMRI, identification of the period of cardioballistic artifact (BCG) in EEG is required for the artifact removal. Recording the electrocardiogram (ECG) waveform during fMRI is difficult, often causing inaccurate period detection. Since the waveform of the BCG extracted by independent component analysis (ICA) is relatively invariable compared to the ECG waveform, we propose a multiple-scale peak-detection algorithm to determine the BCG cycle directly from the EEG data. The algorithm first extracts the high contrast BCG component from the EEG data by ICA. The BCG cycle is then estimated by band-pass filtering the component around the fundamental frequency identified from its energy spectral density, and the peak of BCG artifact occurrence is selected from each of the estimated cycle. The algorithm is shown to achieve a high accuracy on a large EEG-fMRI dataset. It is also adaptive to various heart rates without the needs of adjusting the threshold parameters. The cycle detection remains accurate with the scan duration reduced to half a minute. Additionally, the algorithm gives a figure of merit to evaluate the reliability of the detection accuracy. The algorithm is shown to give a higher detection accuracy than the commonly used cycle detection algorithm fmrib_qrsdetect implemented in EEGLAB. The achieved high cycle detection accuracy of our algorithm without using the ECG waveforms makes possible to create and automate pipelines for processing large EEG-fMRI datasets, and virtually eliminates the need for ECG recordings for BCG artifact removal. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Tarasov, D. A.; Buevich, A. G.; Sergeev, A. P.; Shichkin, A. V.; Baglaeva, E. M.
2017-06-01
Forecasting the soil pollution is a considerable field of study in the light of the general concern of environmental protection issues. Due to the variation of content and spatial heterogeneity of pollutants distribution at urban areas, the conventional spatial interpolation models implemented in many GIS packages mostly cannot provide appreciate interpolation accuracy. Moreover, the problem of prediction the distribution of the element with high variability in the concentration at the study site is particularly difficult. The work presents two neural networks models forecasting a spatial content of the abnormally distributed soil pollutant (Cr) at a particular location of the subarctic Novy Urengoy, Russia. A method of generalized regression neural network (GRNN) was compared to a common multilayer perceptron (MLP) model. The proposed techniques have been built, implemented and tested using ArcGIS and MATLAB. To verify the models performances, 150 scattered input data points (pollutant concentrations) have been selected from 8.5 km2 area and then split into independent training data set (105 points) and validation data set (45 points). The training data set was generated for the interpolation using ordinary kriging while the validation data set was used to test their accuracies. The networks structures have been chosen during a computer simulation based on the minimization of the RMSE. The predictive accuracy of both models was confirmed to be significantly higher than those achieved by the geostatistical approach (kriging). It is shown that MLP could achieve better accuracy than both kriging and even GRNN for interpolating surfaces.
Design of a real-time system of moving ship tracking on-board based on FPGA in remote sensing images
NASA Astrophysics Data System (ADS)
Yang, Tie-jun; Zhang, Shen; Zhou, Guo-qing; Jiang, Chuan-xian
2015-12-01
With the broad attention of countries in the areas of sea transportation and trade safety, the requirements of efficiency and accuracy of moving ship tracking are becoming higher. Therefore, a systematic design of moving ship tracking onboard based on FPGA is proposed, which uses the Adaptive Inter Frame Difference (AIFD) method to track a ship with different speed. For the Frame Difference method (FD) is simple but the amount of computation is very large, it is suitable for the use of FPGA to implement in parallel. But Frame Intervals (FIs) of the traditional FD method are fixed, and in remote sensing images, a ship looks very small (depicted by only dozens of pixels) and moves slowly. By applying invariant FIs, the accuracy of FD for moving ship tracking is not satisfactory and the calculation is highly redundant. So we use the adaptation of FD based on adaptive extraction of key frames for moving ship tracking. A FPGA development board of Xilinx Kintex-7 series is used for simulation. The experiments show that compared with the traditional FD method, the proposed one can achieve higher accuracy of moving ship tracking, and can meet the requirement of real-time tracking in high image resolution.
Robust coordinated control of a dual-arm space robot
NASA Astrophysics Data System (ADS)
Shi, Lingling; Kayastha, Sharmila; Katupitiya, Jay
2017-09-01
Dual-arm space robots are more capable of implementing complex space tasks compared with single arm space robots. However, the dynamic coupling between the arms and the base will have a serious impact on the spacecraft attitude and the hand motion of each arm. Instead of considering one arm as the mission arm and the other as the balance arm, in this work two arms of the space robot perform as mission arms aimed at accomplishing secure capture of a floating target. The paper investigates coordinated control of the base's attitude and the arms' motion in the task space in the presence of system uncertainties. Two types of controllers, i.e. a Sliding Mode Controller (SMC) and a nonlinear Model Predictive Controller (MPC) are verified and compared with a conventional Computed-Torque Controller (CTC) through numerical simulations in terms of control accuracy and system robustness. Both controllers eliminate the need to linearly parameterize the dynamic equations. The MPC has been shown to achieve performance with higher accuracy than CTC and SMC in the absence of system uncertainties under the condition that they consume comparable energy. When the system uncertainties are included, SMC and CTC present advantageous robustness than MPC. Specifically, in a case where system inertia increases, SMC delivers higher accuracy than CTC and costs the least amount of energy.
Impact of confidence number on accuracy of the SureSight Vision Screener.
2010-02-01
To assess the relation between the confidence number provided by the Welch Allyn SureSight Vision Screener and screening accuracy, and to determine whether repeated testing to achieve a higher confidence number improves screening accuracy in pre-school children. Lay and nurse screeners screened 1452 children enrolled in the Vision in Preschoolers (VIP) Phase II Study. All children also underwent a comprehensive eye examination. By using statistical comparison of proportions, we examined sensitivity and specificity for detecting any ocular condition targeted for detection in the VIP study and conditions grouped by severity and by type (amblyopia, strabismus, significant refractive error, and unexplained decreased visual acuity) among children who had confidence numbers < or =4 (retest necessary), 5 (retest if possible), > or =6 (acceptable). Among the 687 (47.3%) children who had repeated testing by either lay or nurse screeners because of a low confidence number (<6) for one or both eyes in the initial testing, the same analyses were also conducted to compare results between the initial reading and repeated test reading with the highest confidence number in the same child. These analyses were based on the failure criteria associated with 90% specificity for detecting any VIP condition in VIP Phase II. A lower confidence number category were associated with higher sensitivity (0.71, 0.65, and 0.59 for < or =4, 5, and > or =6, respectively, p = 0.04) but no statistical difference in specificity (0.85, 0.85, and 0.91, p = 0.07) of detecting any VIP-targeted condition. Children with any VIP-targeted condition were as likely to be detected using the initial confidence number reading as using the higher confidence number reading from repeated testing. A higher confidence number obtained during screening with the SureSight Vision Screener is not associated with better screening accuracy. Repeated testing to reach the manufacturer's recommended minimum value is not helpful in pre-school vision screening.
Noise Robust Speech Recognition Applied to Voice-Driven Wheelchair
NASA Astrophysics Data System (ADS)
Sasou, Akira; Kojima, Hiroaki
2009-12-01
Conventional voice-driven wheelchairs usually employ headset microphones that are capable of achieving sufficient recognition accuracy, even in the presence of surrounding noise. However, such interfaces require users to wear sensors such as a headset microphone, which can be an impediment, especially for the hand disabled. Conversely, it is also well known that the speech recognition accuracy drastically degrades when the microphone is placed far from the user. In this paper, we develop a noise robust speech recognition system for a voice-driven wheelchair. This system can achieve almost the same recognition accuracy as the headset microphone without wearing sensors. We verified the effectiveness of our system in experiments in different environments, and confirmed that our system can achieve almost the same recognition accuracy as the headset microphone without wearing sensors.
SPHINX--an algorithm for taxonomic binning of metagenomic sequences.
Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Singh, Nitin Kumar; Mande, Sharmila S
2011-01-01
Compared with composition-based binning algorithms, the binning accuracy and specificity of alignment-based binning algorithms is significantly higher. However, being alignment-based, the latter class of algorithms require enormous amount of time and computing resources for binning huge metagenomic datasets. The motivation was to develop a binning approach that can analyze metagenomic datasets as rapidly as composition-based approaches, but nevertheless has the accuracy and specificity of alignment-based algorithms. This article describes a hybrid binning approach (SPHINX) that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. Validation results with simulated sequence datasets indicate that SPHINX is able to analyze metagenomic sequences as rapidly as composition-based algorithms. Furthermore, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX is observed to be comparable with results obtained using alignment-based algorithms. A web server for the SPHINX algorithm is available at http://metagenomics.atc.tcs.com/SPHINX/.
Competitive Deep-Belief Networks for Underwater Acoustic Target Recognition
Shen, Sheng; Yao, Xiaohui; Sheng, Meiping; Wang, Chen
2018-01-01
Underwater acoustic target recognition based on ship-radiated noise belongs to the small-sample-size recognition problems. A competitive deep-belief network is proposed to learn features with more discriminative information from labeled and unlabeled samples. The proposed model consists of four stages: (1) A standard restricted Boltzmann machine is pretrained using a large number of unlabeled data to initialize its parameters; (2) the hidden units are grouped according to categories, which provides an initial clustering model for competitive learning; (3) competitive training and back-propagation algorithms are used to update the parameters to accomplish the task of clustering; (4) by applying layer-wise training and supervised fine-tuning, a deep neural network is built to obtain features. Experimental results show that the proposed method can achieve classification accuracy of 90.89%, which is 8.95% higher than the accuracy obtained by the compared methods. In addition, the highest accuracy of our method is obtained with fewer features than other methods. PMID:29570642
Automatic Structural Parcellation of Mouse Brain MRI Using Multi-Atlas Label Fusion
Ma, Da; Cardoso, Manuel J.; Modat, Marc; Powell, Nick; Wells, Jack; Holmes, Holly; Wiseman, Frances; Tybulewicz, Victor; Fisher, Elizabeth; Lythgoe, Mark F.; Ourselin, Sébastien
2014-01-01
Multi-atlas segmentation propagation has evolved quickly in recent years, becoming a state-of-the-art methodology for automatic parcellation of structural images. However, few studies have applied these methods to preclinical research. In this study, we present a fully automatic framework for mouse brain MRI structural parcellation using multi-atlas segmentation propagation. The framework adopts the similarity and truth estimation for propagated segmentations (STEPS) algorithm, which utilises a locally normalised cross correlation similarity metric for atlas selection and an extended simultaneous truth and performance level estimation (STAPLE) framework for multi-label fusion. The segmentation accuracy of the multi-atlas framework was evaluated using publicly available mouse brain atlas databases with pre-segmented manually labelled anatomical structures as the gold standard, and optimised parameters were obtained for the STEPS algorithm in the label fusion to achieve the best segmentation accuracy. We showed that our multi-atlas framework resulted in significantly higher segmentation accuracy compared to single-atlas based segmentation, as well as to the original STAPLE framework. PMID:24475148
Cuff-less blood pressure measurement using pulse arrival time and a Kalman filter
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Chen, Xianxiang; Fang, Zhen; Xue, Yongjiao; Zhan, Qingyuan; Yang, Ting; Xia, Shanhong
2017-02-01
The present study designs an algorithm to increase the accuracy of continuous blood pressure (BP) estimation. Pulse arrival time (PAT) has been widely used for continuous BP estimation. However, because of motion artifact and physiological activities, PAT-based methods are often troubled with low BP estimation accuracy. This paper used a signal quality modified Kalman filter to track blood pressure changes. A Kalman filter guarantees that BP estimation value is optimal in the sense of minimizing the mean square error. We propose a joint signal quality indice to adjust the measurement noise covariance, pushing the Kalman filter to weigh more heavily on measurements from cleaner data. Twenty 2 h physiological data segments selected from the MIMIC II database were used to evaluate the performance. Compared with straightforward use of the PAT-based linear regression model, the proposed model achieved higher measurement accuracy. Due to low computation complexity, the proposed algorithm can be easily transplanted into wearable sensor devices.
Mapping of sea ice and measurement of its drift using aircraft synthetic aperture radar images
NASA Technical Reports Server (NTRS)
Leberl, F.; Bryan, M. L.; Elachi, C.; Farr, T.; Campbell, W.
1979-01-01
Side-looking radar images of Arctic sea ice were obtained as part of the Arctic Ice Dynamics Joint Experiment. Repetitive coverages of a test site in the Arctic were used to measure sea ice drift, employing single images and blocks of overlapping radar image strips; the images were used in conjunction with data from the aircraft inertial navigation and altimeter. Also, independently measured, accurate positions of a number of ground control points were available. Initial tests of the method were carried out with repeated coverages of a land area on the Alaska coast (Prudhoe). Absolute accuracies achieved were essentially limited by the accuracy of the inertial navigation data. Errors of drift measurements were found to be about + or - 2.5 km. Relative accuracy is higher; its limits are set by the radar image geometry and the definition of identical features in sequential images. The drift of adjacent ice features with respect to one another could be determined with errors of less than + or - 0.2 km.
NASA Technical Reports Server (NTRS)
Fatemi, Emad; Osher, Stanley; Jerome, Joseph
1991-01-01
A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially nonoscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.
Performance and stability of mask process correction for EBM-7000
NASA Astrophysics Data System (ADS)
Saito, Yasuko; Chen, George; Wang, Jen-Shiang; Bai, Shufeng; Howell, Rafael; Li, Jiangwei; Tao, Jun; VanDenBroeke, Doug; Wiley, Jim; Takigawa, Tadahiro; Ohnishi, Takayuki; Kamikubo, Takashi; Hara, Shigehiro; Anze, Hirohito; Hattori, Yoshiaki; Tamamushi, Shuichi
2010-05-01
In order to support complex optical masks today and EUV masks in the near future, it is critical to correct mask patterning errors with a magnitude of up to 20nm over a range of 2000nm at mask scale caused by short range mask process proximity effects. A new mask process correction technology, MPC+, has been developed to achieve the target requirements for the next generation node. In this paper, the accuracy and throughput performance of MPC+ technology is evaluated using the most advanced mask writing tool, the EBM-70001), and high quality mask metrology . The accuracy of MPC+ is achieved by using a new comprehensive mask model. The results of through-pitch and through-linewidth linearity curves and error statistics for multiple pattern layouts (including both 1D and 2D patterns) are demonstrated and show post-correction accuracy of 2.34nm 3σ for through-pitch/through-linewidth linearity. Implementing faster mask model simulation and more efficient correction recipes; full mask area (100cm2) processing run time is less than 7 hours for 32nm half-pitch technology node. From these results, it can be concluded that MPC+ with its higher precision and speed is a practical technology for the 32nm node and future technology generations, including EUV, when used with advance mask writing processes like the EBM-7000.
Study on validation method for femur finite element model under multiple loading conditions
NASA Astrophysics Data System (ADS)
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu
2018-03-01
Acquisition of accurate and reliable constitutive parameters related to bio-tissue materials was beneficial to improve biological fidelity of a Finite Element (FE) model and predict impact damages more effectively. In this paper, a femur FE model was established under multiple loading conditions with diverse impact positions. Then, based on sequential response surface method and genetic algorithms, the material parameters identification was transformed to a multi-response optimization problem. Finally, the simulation results successfully coincided with force-displacement curves obtained by numerous experiments. Thus, computational accuracy and efficiency of the entire inverse calculation process were enhanced. This method was able to effectively reduce the computation time in the inverse process of material parameters. Meanwhile, the material parameters obtained by the proposed method achieved higher accuracy.
Leverage effect, economic policy uncertainty and realized volatility with regime switching
NASA Astrophysics Data System (ADS)
Duan, Yinying; Chen, Wang; Zeng, Qing; Liu, Zhicao
2018-03-01
In this study, we first investigate the impacts of leverage effect and economic policy uncertainty (EPU) on future volatility in the framework of regime switching. Out-of-sample results show that the HAR-RV including the leverage effect and economic policy uncertainty with regimes can achieve higher forecast accuracy than RV-type and GARCH-class models. Our robustness results further imply that these factors in the framework of regime switching can substantially improve the HAR-RV's forecast performance.
NASA Astrophysics Data System (ADS)
Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans
The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved with correlation values>0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, resulting in correlation values<0.5. Although top-down disaggregation of traffic emissions generally exhibits low accuracy, the accuracy is significantly higher in compact cities and might be further improved by applying a correction factor for the city center. Therefore, the method can be used by local environmental authorities in cities with limited resources and with little knowledge on the pollution situation to get an overview on the spatial distribution of the emissions generated by traffic activities.
Dyrba, Martin; Barkhof, Frederik; Fellgiebel, Andreas; Filippi, Massimo; Hausner, Lucrezia; Hauenstein, Karlheinz; Kirste, Thomas; Teipel, Stefan J
2015-01-01
Alzheimer's disease (AD) patients show early changes in white matter (WM) structural integrity. We studied the use of diffusion tensor imaging (DTI) in assessing WM alterations in the predementia stage of mild cognitive impairment (MCI). We applied a Support Vector Machine (SVM) classifier to DTI and volumetric magnetic resonance imaging data from 35 amyloid-β42 negative MCI subjects (MCI-Aβ42-), 35 positive MCI subjects (MCI-Aβ42+), and 25 healthy controls (HC) retrieved from the European DTI Study on Dementia. The SVM was applied to DTI-derived fractional anisotropy, mean diffusivity (MD), and mode of anisotropy (MO) maps. For comparison, we studied classification based on gray matter (GM) and WM volume. We obtained accuracies of up to 68% for MO and 63% for GM volume when it came to distinguishing between MCI-Aβ42- and MCI-Aβ42+. When it came to separating MCI-Aβ42+ from HC we achieved an accuracy of up to 77% for MD and a significantly lower accuracy of 68% for GM volume. The accuracy of multimodal classification was not higher than the accuracy of the best single modality. Our results suggest that DTI data provide better prediction accuracy than GM volume in predementia AD. Copyright © 2015 by the American Society of Neuroimaging.
Classification of EEG Signals Based on Pattern Recognition Approach.
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet based feature extraction such as, multi-resolution decompositions into detailed and approximate coefficients as well as relative wavelet energy were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). A high density EEG dataset validated the proposed method (128-channels) by identifying two classifications: (1) EEG signals recorded during complex cognitive tasks using Raven's Advance Progressive Metric (RAPM) test; (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as, K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. Outcomes yielded 99.11% accuracy via SVM classifier for coefficient approximations (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for detailed coefficients were 98.57 and 98.39% for SVM and KNN, respectively; and for detailed coefficients (D5) deriving from the sub-band range (3.90-7.81 Hz). Accuracy rates for MLP and NB classifiers were comparable at 97.11-89.63% and 91.60-81.07% for A5 and D5 coefficients, respectively. In addition, the proposed approach was also applied on public dataset for classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performances using machine learning classifiers compared to extant quantitative feature extraction. These results suggest the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a higher degree of accuracy.
Classification of EEG Signals Based on Pattern Recognition Approach
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a “pattern recognition” approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet based feature extraction such as, multi-resolution decompositions into detailed and approximate coefficients as well as relative wavelet energy were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). A high density EEG dataset validated the proposed method (128-channels) by identifying two classifications: (1) EEG signals recorded during complex cognitive tasks using Raven's Advance Progressive Metric (RAPM) test; (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as, K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. Outcomes yielded 99.11% accuracy via SVM classifier for coefficient approximations (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for detailed coefficients were 98.57 and 98.39% for SVM and KNN, respectively; and for detailed coefficients (D5) deriving from the sub-band range (3.90–7.81 Hz). Accuracy rates for MLP and NB classifiers were comparable at 97.11–89.63% and 91.60–81.07% for A5 and D5 coefficients, respectively. In addition, the proposed approach was also applied on public dataset for classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performances using machine learning classifiers compared to extant quantitative feature extraction. These results suggest the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a higher degree of accuracy. PMID:29209190
Comparison of three optical tracking systems in a complex navigation scenario.
Rudolph, Tobias; Ebert, Lars; Kowal, Jens
2010-01-01
Three-dimensional rotational X-ray imaging with the SIREMOBIL Iso-C3D (Siemens AG, Medical Solutions, Erlangen, Germany) has become a well-established intra-operative imaging modality. In combination with a tracking system, the Iso-C3D provides inherently registered image volumes ready for direct navigation. This is achieved by means of a pre-calibration procedure. The aim of this study was to investigate the influence of the tracking system used on the overall navigation accuracy of direct Iso-C3D navigation. Three models of tracking system were used in the study: Two Optotrak 3020s, a Polaris P4 and a Polaris Spectra system, with both Polaris systems being in the passive operation mode. The evaluation was carried out at two different sites using two Iso-C3D devices. To measure the navigation accuracy, a number of phantom experiments were conducted using an acrylic phantom equipped with titanium spheres. After scanning, a special pointer was used to pinpoint these markers. The difference between the digitized and navigated positions served as the accuracy measure. Up to 20 phantom scans were performed for each tracking system. The average accuracy measured was 0.86 mm and 0.96 mm for the two Optotrak 3020 systems, 1.15 mm for the Polaris P4, and 1.04 mm for the Polaris Spectra system. For the Polaris systems a higher maximal error was found, but all three systems yielded similar minimal errors. On average, all tracking systems used in this study could deliver similar navigation accuracy. The passive Polaris system showed – as expected – higher maximal errors; however, depending on the application constraints, this might be negligible.
Wu, Jianfa; Peng, Dahao; Li, Zhuping; Zhao, Li; Ling, Huanzhang
2015-01-01
To effectively and accurately detect and classify network intrusion data, this paper introduces a general regression neural network (GRNN) based on the artificial immune algorithm with elitist strategies (AIAE). The elitist archive and elitist crossover were combined with the artificial immune algorithm (AIA) to produce the AIAE-GRNN algorithm, with the aim of improving its adaptivity and accuracy. In this paper, the mean square errors (MSEs) were considered the affinity function. The AIAE was used to optimize the smooth factors of the GRNN; then, the optimal smooth factor was solved and substituted into the trained GRNN. Thus, the intrusive data were classified. The paper selected a GRNN that was separately optimized using a genetic algorithm (GA), particle swarm optimization (PSO), and fuzzy C-mean clustering (FCM) to enable a comparison of these approaches. As shown in the results, the AIAE-GRNN achieves a higher classification accuracy than PSO-GRNN, but the running time of AIAE-GRNN is long, which was proved first. FCM and GA-GRNN were eliminated because of their deficiencies in terms of accuracy and convergence. To improve the running speed, the paper adopted principal component analysis (PCA) to reduce the dimensions of the intrusive data. With the reduction in dimensionality, the PCA-AIAE-GRNN decreases in accuracy less and has better convergence than the PCA-PSO-GRNN, and the running speed of the PCA-AIAE-GRNN was relatively improved. The experimental results show that the AIAE-GRNN has a higher robustness and accuracy than the other algorithms considered and can thus be used to classify the intrusive data. PMID:25807466
Forest tree species discrimination in western Himalaya using EO-1 Hyperion
NASA Astrophysics Data System (ADS)
George, Rajee; Padalia, Hitendra; Kushwaha, S. P. S.
2014-05-01
The information acquired in the narrow bands of hyperspectral remote sensing data has potential to capture plant species spectral variability, thereby improving forest tree species mapping. This study assessed the utility of spaceborne EO-1 Hyperion data in discrimination and classification of broadleaved evergreen and conifer forest tree species in western Himalaya. The pre-processing of 242 bands of Hyperion data resulted into 160 noise-free and vertical stripe corrected reflectance bands. Of these, 29 bands were selected through step-wise exclusion of bands (Wilk's Lambda). Spectral Angle Mapper (SAM) and Support Vector Machine (SVM) algorithms were applied to the selected bands to assess their effectiveness in classification. SVM was also applied to broadband data (Landsat TM) to compare the variation in classification accuracy. All commonly occurring six gregarious tree species, viz., white oak, brown oak, chir pine, blue pine, cedar and fir in western Himalaya could be effectively discriminated. SVM produced a better species classification (overall accuracy 82.27%, kappa statistic 0.79) than SAM (overall accuracy 74.68%, kappa statistic 0.70). It was noticed that classification accuracy achieved with Hyperion bands was significantly higher than Landsat TM bands (overall accuracy 69.62%, kappa statistic 0.65). Study demonstrated the potential utility of narrow spectral bands of Hyperion data in discriminating tree species in a hilly terrain.
Niki, Yasuo; Takeda, Yuki; Harato, Kengo; Suda, Yasunori
2015-11-01
Achievement of very deep knee flexion after total knee arthroplasty (TKA) can play a critical role in the satisfaction of patients who demand a floor-sitting lifestyle and engage in high-flexion daily activities (e.g., seiza-sitting). Seiza-sitting is characterized by the knees flexed >145º and feet turned sole upwards underneath the buttocks with the tibia internally rotated. The present study investigated factors affecting the achievement of seiza-sitting after TKA using posterior-stabilized total knee prosthesis with high-flex knee design. Subjects comprised 32 patients who underwent TKA with high-flex knee prosthesis and achieved seiza-sitting (knee flexion >145º) postoperatively. Another 32 patients served as controls who were capable of knee flexion >145º preoperatively, but failed to achieve seiza-sitting postoperatively. Accuracy of femoral and tibial component positions was assessed in terms of deviation from the ideal position using a two-dimensional to three-dimensional matching technique. Accuracies of the component position, posterior condylar offset ratio and intraoperative gap length were compared between the two groups. The proportion of patients with >3º internally rotated tibial component was significantly higher in patients who failed at seiza-sitting (41 %) than among patients who achieved it (13 %, p = 0.021). Comparison of intraoperative gap length between patient groups revealed that gap length at 135º flexion was significantly larger in patients who achieved seiza-sitting (4.2 ± 0.4 mm) than in patients who failed at it (2.7 ± 0.4 mm, p = 0.007). Conversely, no significant differences in gap inclination were seen between the groups. From the perspective of surgical factors, accurate implant positioning, particularly rotational alignment of the tibial component, and maintenance of a sufficient joint gap at 135º flexion appear to represent critical factors for achieving >145º of deep knee flexion after TKA.
Teachers' Judgements of Students' Foreign-Language Achievement
ERIC Educational Resources Information Center
Zhu, Mingjing; Urhahne, Detlef
2015-01-01
Numerous studies have been conducted on the accuracy of teacher judgement in different educational areas such as mathematics, language arts and reading. Teacher judgement of students' foreign-language achievement, however, has been rarely investigated. The study aimed to examine the accuracy of teacher judgement of students' foreign-language…
Leigh, S; Idris, I; Collins, B; Granby, P; Noble, M; Parker, M
2016-05-01
To determine the cost-effectiveness of all options for the self-monitoring of blood glucose funded by the National Health Service, providing guidance for disinvestment and testing the hypothesis that advanced meter features may justify higher prices. Using data from the Health and Social Care Information Centre concerning all 8 340 700 self-monitoring of blood glucose-related prescriptions during 2013/2014, we conducted a cost-minimization analysis, considering both strip and lancet costs, including all clinically equivalent technologies for self-monitoring of blood glucose, as determined by the ability to meet ISO-15197:2013 guidelines for meter accuracy. A total of 56 glucose monitor, test strip and lancet combinations were identified, of which 38 met the required accuracy standards. Of these, the mean (range) net ingredient costs for test strips and lancets were £0.27 (£0.14-£0.32) and £0.04 (£0.02-£0.05), respectively, resulting in a weighted average of £0.28 (£0.18-£0.37) per test. Systems providing four or more advanced features were priced equal to those providing just one feature. A total of £12 m was invested in providing 42 million self-monitoring of blood glucose tests with systems that fail to meet acceptable accuracy standards, and efficiency savings of £23.2 m per annum are achievable if the National Health Service were to disinvest from technologies providing lesser functionality than available alternatives, but at a much higher price. The study uncovered considerable variation in the price paid by the National Health Service for self-monitoring of blood glucose, which could not be explained by the availability of advanced meter features. A standardized approach to self-monitoring of blood glucose prescribing could achieve significant efficiency savings for the National Health Service, whilst increasing overall utilisation and improving safety for those currently using systems that fail to meet acceptable standards for measurement accuracy. © 2015 Diabetes UK.
Subject-Adaptive Real-Time Sleep Stage Classification Based on Conditional Random Field
Luo, Gang; Min, Wanli
2007-01-01
Sleep staging is the pattern recognition task of classifying sleep recordings into sleep stages. This task is one of the most important steps in sleep analysis. It is crucial for the diagnosis and treatment of various sleep disorders, and also relates closely to brain-machine interfaces. We report an automatic, online sleep stager using electroencephalogram (EEG) signal based on a recently-developed statistical pattern recognition method, conditional random field, and novel potential functions that have explicit physical meanings. Using sleep recordings from human subjects, we show that the average classification accuracy of our sleep stager almost approaches the theoretical limit and is about 8% higher than that of existing systems. Moreover, for a new subject snew with limited training data Dnew, we perform subject adaptation to improve classification accuracy. Our idea is to use the knowledge learned from old subjects to obtain from Dnew a regulated estimate of CRF’s parameters. Using sleep recordings from human subjects, we show that even without any Dnew, our sleep stager can achieve an average classification accuracy of 70% on snew. This accuracy increases with the size of Dnew and eventually becomes close to the theoretical limit. PMID:18693884
Silva, Richardson Augusto Rosendo da; Costa, Mayara Mirna do Nascimento; Souza, Vinicius Lino de; Silva, Bárbara Coeli Oliveira da; Costa, Cristiane da Silva; Andrade, Itaísa Fernandes Cardoso de
2017-10-30
to evaluate the accuracy of the defining characteristics of the NANDA International nursing diagnosis, noncompliance, in people with HIV. study of diagnostic accuracy, performed in two stages. In the first stage, 113 people with HIV from a hospital of infectious diseases in the Northeast of Brazil were assessed for identification of clinical indicators of noncompliance. In the second, the defining characteristics were evaluated by six specialist nurses, analyzing the presence or absence of the diagnosis. For accuracy of the clinical indicators, the specificity, sensitivity, predictive values and likelihood ratios were measured. the presence of the noncompliance diagnosis was shown in 69% (n=78) of people with HIV. The most sensitive indicator was, missing of appointments (OR: 28.93, 95% CI: 1.112-2.126, p = 0.002). On the other hand, nonadherence behavior (OR: 15.00, 95% CI: 1.829-3.981, p = 0.001) and failure to meet outcomes (OR: 13.41; 95% CI: 1.272-2.508; P = 0.003) achieved higher specificity. the most accurate defining characteristics were nonadherence behavior, missing of appointments, and failure to meet outcomes. Thus, in the presence of these, the nurse can identify, with greater security, the diagnosis studied.
NASA Astrophysics Data System (ADS)
Samsudin, Sarah Hanim; Shafri, Helmi Z. M.; Hamedianfar, Alireza
2016-04-01
Status observations of roofing material degradation are constantly evolving due to urban feature heterogeneities. Although advanced classification techniques have been introduced to improve within-class impervious surface classifications, these techniques involve complex processing and high computation times. This study integrates field spectroscopy and satellite multispectral remote sensing data to generate degradation status maps of concrete and metal roofing materials. Field spectroscopy data were used as bases for selecting suitable bands for spectral index development because of the limited number of multispectral bands. Mapping methods for roof degradation status were established for metal and concrete roofing materials by developing the normalized difference concrete condition index (NDCCI) and the normalized difference metal condition index (NDMCI). Results indicate that the accuracies achieved using the spectral indices are higher than those obtained using supervised pixel-based classification. The NDCCI generated an accuracy of 84.44%, whereas the support vector machine (SVM) approach yielded an accuracy of 73.06%. The NDMCI obtained an accuracy of 94.17% compared with 62.5% for the SVM approach. These findings support the suitability of the developed spectral index methods for determining roof degradation statuses from satellite observations in heterogeneous urban environments.
Two high accuracy digital integrators for Rogowski current transducers.
Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua
2014-01-01
The Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly subject to the analog integrators, which have typical problems such as poor long-term stability and being susceptible to environmental conditions. The digital integrators can be another choice, but they cannot obtain a stable and accurate output for the reason that the DC component in original signal can be accumulated, which will lead to output DC drift. Unknown initial conditions can also result in integral output DC offset. This paper proposes two improved digital integrators used in Rogowski current transducers instead of traditional analog integrators for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient have been applied in improving the Al-Alaoui integrator to change its DC response and get an ideal frequency response. For the special design in the field of digital signal processing, the improved digital integrators have better performance than analog integrators. Simulation models are built for the purpose of verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in steady-state response, transient-state response, and temperature changing condition.
Two high accuracy digital integrators for Rogowski current transducers
NASA Astrophysics Data System (ADS)
Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua
2014-01-01
The Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly subject to the analog integrators, which have typical problems such as poor long-term stability and being susceptible to environmental conditions. The digital integrators can be another choice, but they cannot obtain a stable and accurate output for the reason that the DC component in original signal can be accumulated, which will lead to output DC drift. Unknown initial conditions can also result in integral output DC offset. This paper proposes two improved digital integrators used in Rogowski current transducers instead of traditional analog integrators for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient have been applied in improving the Al-Alaoui integrator to change its DC response and get an ideal frequency response. For the special design in the field of digital signal processing, the improved digital integrators have better performance than analog integrators. Simulation models are built for the purpose of verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in steady-state response, transient-state response, and temperature changing condition.
A hybrid three-class brain-computer interface system utilizing SSSEPs and transient ERPs
NASA Astrophysics Data System (ADS)
Breitwieser, Christian; Pokorny, Christoph; Müller-Putz, Gernot R.
2016-12-01
Objective. This paper investigates the fusion of steady-state somatosensory evoked potentials (SSSEPs) and transient event-related potentials (tERPs), evoked through tactile simulation on the left and right-hand fingertips, in a three-class EEG based hybrid brain-computer interface. It was hypothesized, that fusing the input signals leads to higher classification rates than classifying tERP and SSSEP individually. Approach. Fourteen subjects participated in the studies, consisting of a screening paradigm to determine person dependent resonance-like frequencies and a subsequent online paradigm. The whole setup of the BCI system was based on open interfaces, following suggestions for a common implementation platform. During the online experiment, subjects were instructed to focus their attention on the stimulated fingertips as indicated by a visual cue. The recorded data were classified during runtime using a multi-class shrinkage LDA classifier and the outputs were fused together applying a posterior probability based fusion. Data were further analyzed offline, involving a combined classification of SSSEP and tERP features as a second fusion principle. The final results were tested for statistical significance applying a repeated measures ANOVA. Main results. A significant classification increase was achieved when fusing the results with a combined classification compared to performing an individual classification. Furthermore, the SSSEP classifier was significantly better in detecting a non-control state, whereas the tERP classifier was significantly better in detecting control states. Subjects who had a higher relative band power increase during the screening session also achieved significantly higher classification results than subjects with lower relative band power increase. Significance. It could be shown that utilizing SSSEP and tERP for hBCIs increases the classification accuracy and also that tERP and SSSEP are not classifying control- and non-control states with the same level of accuracy.
How a GNSS Receiver Is Held May Affect Static Horizontal Position Accuracy
Weaver, Steven A.; Ucar, Zennure; Bettinger, Pete; Merry, Krista
2015-01-01
The static horizontal position accuracy of a mapping-grade GNSS receiver was tested in two forest types over two seasons, and subsequently was tested in one forest type against open sky conditions in the winter season. The main objective was to determine whether the holding position during data collection would result in significantly different static horizontal position accuracy. Additionally, we wanted to determine whether the time of year (season), forest type, or environmental variables had an influence on accuracy. In general, the F4Devices Flint GNSS receiver was found to have mean static horizontal position accuracy levels within the ranges typically expected for this general type of receiver (3 to 5 m) when differential correction was not employed. When used under forest cover, in some cases the GNSS receiver provided a higher level of static horizontal position accuracy when held vertically, as opposed to held at an angle or horizontally (the more natural positions), perhaps due to the orientation of the antenna within the receiver, or in part due to multipath or the inability to use certain satellite signals. Therefore, due to the fact that numerous variables may affect static horizontal position accuracy, we only conclude that there is weak to moderate evidence that the results of holding position are significant. Statistical test results also suggest that the season of data collection had no significant effect on static horizontal position accuracy, and results suggest that atmospheric variables had weak correlation with horizontal position accuracy. Forest type was found to have a significant effect on static horizontal position accuracy in one aspect of one test, yet otherwise there was little evidence that forest type affected horizontal position accuracy. Since the holding position was found in some cases to be significant with regard to the static horizontal position accuracy of positions collected in forests, it may be beneficial to have an understanding of antenna positioning within the receiver to achieve the greatest accuracy during data collection. PMID:25923667
How a GNSS Receiver Is Held May Affect Static Horizontal Position Accuracy.
Weaver, Steven A; Ucar, Zennure; Bettinger, Pete; Merry, Krista
2015-01-01
The static horizontal position accuracy of a mapping-grade GNSS receiver was tested in two forest types over two seasons, and subsequently was tested in one forest type against open sky conditions in the winter season. The main objective was to determine whether the holding position during data collection would result in significantly different static horizontal position accuracy. Additionally, we wanted to determine whether the time of year (season), forest type, or environmental variables had an influence on accuracy. In general, the F4Devices Flint GNSS receiver was found to have mean static horizontal position accuracy levels within the ranges typically expected for this general type of receiver (3 to 5 m) when differential correction was not employed. When used under forest cover, in some cases the GNSS receiver provided a higher level of static horizontal position accuracy when held vertically, as opposed to held at an angle or horizontally (the more natural positions), perhaps due to the orientation of the antenna within the receiver, or in part due to multipath or the inability to use certain satellite signals. Therefore, due to the fact that numerous variables may affect static horizontal position accuracy, we only conclude that there is weak to moderate evidence that the results of holding position are significant. Statistical test results also suggest that the season of data collection had no significant effect on static horizontal position accuracy, and results suggest that atmospheric variables had weak correlation with horizontal position accuracy. Forest type was found to have a significant effect on static horizontal position accuracy in one aspect of one test, yet otherwise there was little evidence that forest type affected horizontal position accuracy. Since the holding position was found in some cases to be significant with regard to the static horizontal position accuracy of positions collected in forests, it may be beneficial to have an understanding of antenna positioning within the receiver to achieve the greatest accuracy during data collection.
Accuracy Analysis of a Low-Cost Platform for Positioning and Navigation
NASA Astrophysics Data System (ADS)
Hofmann, S.; Kuntzsch, C.; Schulze, M. J.; Eggert, D.; Sester, M.
2012-07-01
This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation intended for research and teaching purposes. The proposed platform includes a LEGO MINDSTORMS NXT 2.0 kit, an Android-based Smartphone as well as a compact laser scanner Hokuyo URG-04LX. The robot is used in a small indoor environment, where GNSS is not available. Therefore, a landmark map was produced in advance, with the landmark positions provided to the robot. All steps of procedure to set up the platform are shown. The main focus of this paper is the reachable positioning accuracy, which was analyzed in this type of scenario depending on the accuracy of the reference landmarks and the directional and distance measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner's characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, Arno; Li, Z.; Ng, C.
The Compact Linear Collider (CLIC) provides a path to a multi-TeV accelerator to explore the energy frontier of High Energy Physics. Its novel two-beam accelerator concept envisions rf power transfer to the accelerating structures from a separate high-current decelerator beam line consisting of power extraction and transfer structures (PETS). It is critical to numerically verify the fundamental and higher-order mode properties in and between the two beam lines with high accuracy and confidence. To solve these large-scale problems, SLAC's parallel finite element electromagnetic code suite ACE3P is employed. Using curvilinear conformal meshes and higher-order finite element vector basis functions, unprecedentedmore » accuracy and computational efficiency are achieved, enabling high-fidelity modeling of complex detuned structures such as the CLIC TD24 accelerating structure. In this paper, time-domain simulations of wakefield coupling effects in the combined system of PETS and the TD24 structures are presented. The results will help to identify potential issues and provide new insights on the design, leading to further improvements on the novel CLIC two-beam accelerator scheme.« less
Valenza, Gaetano; Citi, Luca; Gentili, Claudio; Lanata, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2015-01-01
The analysis of cognitive and autonomic responses to emotionally relevant stimuli could provide a viable solution for the automatic recognition of different mood states, both in normal and pathological conditions. In this study, we present a methodological application describing a novel system based on wearable textile technology and instantaneous nonlinear heart rate variability assessment, able to characterize the autonomic status of bipolar patients by considering only electrocardiogram recordings. As a proof of this concept, our study presents results obtained from eight bipolar patients during their normal daily activities and being elicited according to a specific emotional protocol through the presentation of emotionally relevant pictures. Linear and nonlinear features were computed using a novel point-process-based nonlinear autoregressive integrative model and compared with traditional algorithmic methods. The estimated indices were used as the input of a multilayer perceptron to discriminate the depressive from the euthymic status. Results show that our system achieves much higher accuracy than the traditional techniques. Moreover, the inclusion of instantaneous higher order spectra features significantly improves the accuracy in successfully recognizing depression from euthymia.
Is multiple-sequence alignment required for accurate inference of phylogeny?
Höhl, Michael; Ragan, Mark A
2007-04-01
The process of inferring phylogenetic trees from molecular sequences almost always starts with a multiple alignment of these sequences but can also be based on methods that do not involve multiple sequence alignment. Very little is known about the accuracy with which such alignment-free methods recover the correct phylogeny or about the potential for increasing their accuracy. We conducted a large-scale comparison of ten alignment-free methods, among them one new approach that does not calculate distances and a faster variant of our pattern-based approach; all distance-based alignment-free methods are freely available from http://www.bioinformatics.org.au (as Python package decaf+py). We show that most methods exhibit a higher overall reconstruction accuracy in the presence of high among-site rate variation. Under all conditions that we considered, variants of the pattern-based approach were significantly better than the other alignment-free methods. The new pattern-based variant achieved a speed-up of an order of magnitude in the distance calculation step, accompanied by a small loss of tree reconstruction accuracy. A method of Bayesian inference from k-mers did not improve on classical alignment-free (and distance-based) methods but may still offer other advantages due to its Bayesian nature. We found the optimal word length k of word-based methods to be stable across various data sets, and we provide parameter ranges for two different alphabets. The influence of these alphabets was analyzed to reveal a trade-off in reconstruction accuracy between long and short branches. We have mapped the phylogenetic accuracy for many alignment-free methods, among them several recently introduced ones, and increased our understanding of their behavior in response to biologically important parameters. In all experiments, the pattern-based approach emerged as superior, at the expense of higher resource consumption. Nonetheless, no alignment-free method that we examined recovers the correct phylogeny as accurately as does an approach based on maximum-likelihood distance estimates of multiply aligned sequences.
A Low-Visibility Force Multiplier: Assessing China’s Cruise Missile Ambitions
2014-04-01
terminal sensor to achieve 10–15 meter (m) accuracy. • The second-generation DH-10 has a GPS/inertial guidance system but may also use terrain...contour mapping for redundant midcourse guidance and a digital scene-matching sensor to permit an accuracy of 10 m. • Development of the Chinese Beidou...pictures of the target as seen from different perspectives. DSMAC permits LACMs to achieve accuracies of about 1 m. Other (for example, thermal) sensors
Rotational spectroscopy of cold and trapped molecular ions in the Lamb-Dicke regime
NASA Astrophysics Data System (ADS)
Alighanbari, S.; Hansen, M. G.; Korobov, V. I.; Schiller, S.
2018-06-01
Sympathetic cooling of trapped ions has been established as a powerful technique for the manipulation of non-laser-coolable ions1-4. For molecular ions, it promises vastly enhanced spectroscopic resolution and accuracy. However, this potential remains untapped so far, with the best resolution achieved being not better than 5 × 10-8 fractionally, due to residual Doppler broadening being present in ion clusters even at the lowest achievable translational temperatures5. Here we introduce a general and accessible approach that enables Doppler-free rotational spectroscopy. It makes use of the strong radial spatial confinement of molecular ions when trapped and crystallized in a linear quadrupole trap, providing the Lamb-Dicke regime for rotational transitions. We achieve a linewidth of 1 × 10-9 fractionally and 1.3 kHz absolute, an improvement of ≃50-fold over the previous highest resolution in rotational spectroscopy. As an application, we demonstrate the most precise test of ab initio molecular theory and the most accurate (1.3 × 10-9) determination of the proton mass using molecular spectroscopy. The results represent the long overdue extension of Doppler-free microwave spectroscopy of laser-cooled atomic ion clusters6 to higher spectroscopy frequencies and to molecules. This approach enables a wide range of high-accuracy measurements on molecules, both on rotational and, as we project, vibrational transitions.
Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao
2014-01-01
Deformable image registration (DIR) was widely used in radiation therapy, such as in automatic contour generation, dose accumulation, tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm was proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey value gradient similarity term and a transformation error term were added into the demons energy function, and a formula was derived to calculate the update of transformation field. The limited Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function so that the iteration number could be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the registration method proposed achieve a higher precision and a faster convergence speed. Due to the influence of different scanning conditions in fractionated radiation, the density range of the treatment image and the planning image may be different. In such a case, the improved demons algorithm can achieve faster and more accurate radiotherapy.
Translational Imaging Spectroscopy for Proximal Sensing
Rogass, Christian; Koerting, Friederike M.; Mielke, Christian; Brell, Maximilian; Boesche, Nina K.; Bade, Maria; Hohmann, Christian
2017-01-01
Proximal sensing as the near field counterpart of remote sensing offers a broad variety of applications. Imaging spectroscopy in general and translational laboratory imaging spectroscopy in particular can be utilized for a variety of different research topics. Geoscientific applications require a precise pre-processing of hyperspectral data cubes to retrieve at-surface reflectance in order to conduct spectral feature-based comparison of unknown sample spectra to known library spectra. A new pre-processing chain called GeoMAP-Trans for at-surface reflectance retrieval is proposed here as an analogue to other algorithms published by the team of authors. It consists of a radiometric, a geometric and a spectral module. Each module consists of several processing steps that are described in detail. The processing chain was adapted to the broadly used HySPEX VNIR/SWIR imaging spectrometer system and tested using geological mineral samples. The performance was subjectively and objectively evaluated using standard artificial image quality metrics and comparative measurements of mineral and Lambertian diffuser standards with standard field and laboratory spectrometers. The proposed algorithm provides highly qualitative results, offers broad applicability through its generic design and might be the first one of its kind to be published. A high radiometric accuracy is achieved by the incorporation of the Reduction of Miscalibration Effects (ROME) framework. The geometric accuracy is higher than 1 μpixel. The critical spectral accuracy was relatively estimated by comparing spectra of standard field spectrometers to those from HySPEX for a Lambertian diffuser. The achieved spectral accuracy is better than 0.02% for the full spectrum and better than 98% for the absorption features. It was empirically shown that point and imaging spectrometers provide different results for non-Lambertian samples due to their different sensing principles, adjacency scattering impacts on the signal and anisotropic surface reflection properties. PMID:28800111
Analysis of Movement, Orientation and Rotation-Based Sensing for Phone Placement Recognition
Durmaz Incel, Ozlem
2015-01-01
Phone placement, i.e., where the phone is carried/stored, is an important source of information for context-aware applications. Extracting information from the integrated smart phone sensors, such as motion, light and proximity, is a common technique for phone placement detection. In this paper, the efficiency of an accelerometer-only solution is explored, and it is investigated whether the phone position can be detected with high accuracy by analyzing the movement, orientation and rotation changes. The impact of these changes on the performance is analyzed individually and both in combination to explore which features are more efficient, whether they should be fused and, if yes, how they should be fused. Using three different datasets, collected from 35 people from eight different positions, the performance of different classification algorithms is explored. It is shown that while utilizing only motion information can achieve accuracies around 70%, this ratio increases up to 85% by utilizing information also from orientation and rotation changes. The performance of an accelerometer-only solution is compared to solutions where linear acceleration, gyroscope and magnetic field sensors are used, and it is shown that the accelerometer-only solution performs as well as utilizing other sensing information. Hence, it is not necessary to use extra sensing information where battery power consumption may increase. Additionally, I explore the impact of the performed activities on position recognition and show that the accelerometer-only solution can achieve 80% recognition accuracy with stationary activities where movement data are very limited. Finally, other phone placement problems, such as in-pocket and on-body detections, are also investigated, and higher accuracies, ranging from 88% to 93%, are reported, with an accelerometer-only solution. PMID:26445046
NASA Astrophysics Data System (ADS)
Lin, Hsin-Hon; Chang, Hao-Ting; Chao, Tsi-Chian; Chuang, Keh-Shih
2017-08-01
In vivo range verification plays an important role in proton therapy to fully utilize the benefits of the Bragg peak (BP) for delivering high radiation dose to tumor, while sparing the normal tissue. For accurately locating the position of BP, camera equipped with collimators (multi-slit and knife-edge collimator) to image prompt gamma (PG) emitted along the proton tracks in the patient have been proposed for range verification. The aim of the work is to compare the performance of multi-slit collimator and knife-edge collimator for non-invasive proton beam range verification. PG imaging was simulated by a validated GATE/GEANT4 Monte Carlo code to model the spot-scanning proton therapy and cylindrical PMMA phantom in detail. For each spot, 108 protons were simulated. To investigate the correlation between the acquired PG profile and the proton range, the falloff regions of PG profiles were fitted with a 3-line-segment curve function as the range estimate. Factors including the energy window setting, proton energy, phantom size, and phantom shift that may influence the accuracy of detecting range were studied. Results indicated that both collimator systems achieve reasonable accuracy and good response to the phantom shift. The accuracy of range predicted by multi-slit collimator system is less affected by the proton energy, while knife-edge collimator system can achieve higher detection efficiency that lead to a smaller deviation in predicting range. We conclude that both collimator systems have potentials for accurately range monitoring in proton therapy. It is noted that neutron contamination has a marked impact on range prediction of the two systems, especially in multi-slit system. Therefore, a neutron reduction technique for improving the accuracy of range verification of proton therapy is needed.
Analysis of Movement, Orientation and Rotation-Based Sensing for Phone Placement Recognition.
Incel, Ozlem Durmaz
2015-10-05
Phone placement, i.e., where the phone is carried/stored, is an important source of information for context-aware applications. Extracting information from the integrated smart phone sensors, such as motion, light and proximity, is a common technique for phone placement detection. In this paper, the efficiency of an accelerometer-only solution is explored, and it is investigated whether the phone position can be detected with high accuracy by analyzing the movement, orientation and rotation changes. The impact of these changes on the performance is analyzed individually and both in combination to explore which features are more efficient, whether they should be fused and, if yes, how they should be fused. Using three different datasets, collected from 35 people from eight different positions, the performance of different classification algorithms is explored. It is shown that while utilizing only motion information can achieve accuracies around 70%, this ratio increases up to 85% by utilizing information also from orientation and rotation changes. The performance of an accelerometer-only solution is compared to solutions where linear acceleration, gyroscope and magnetic field sensors are used, and it is shown that the accelerometer-only solution performs as well as utilizing other sensing information. Hence, it is not necessary to use extra sensing information where battery power consumption may increase. Additionally, I explore the impact of the performed activities on position recognition and show that the accelerometer-only solution can achieve 80% recognition accuracy with stationary activities where movement data are very limited. Finally, other phone placement problems, such as in-pocket and on-body detections, are also investigated, and higher accuracies, ranging from 88% to 93%, are reported, with an accelerometer-only solution.
D Modelling with the Samsung Gear 360
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Previtali, M.; Roncoroni, F.
2017-02-01
The Samsung Gear 360 is a consumer grade spherical camera able to capture photos and videos. The aim of this work is to test the metric accuracy and the level of detail achievable with the Samsung Gear 360 coupled with digital modelling techniques based on photogrammetry/computer vision algorithms. Results demonstrate that the direct use of the projection generated inside the mobile phone or with Gear 360 Action Direction (the desktop software for post-processing) have a relatively low metric accuracy. As results were in contrast with the accuracy achieved by using the original fisheye images (front and rear facing images) in photogrammetric reconstructions, an alternative solution to generate the equirectangular projections was developed. A calibration aimed at understanding the intrinsic parameters of the two lenses camera, as well as their relative orientation, allowed one to generate new equirectangular projections from which a significant improvement of geometric accuracy has been achieved.
Potential accuracy of translation estimation between radar and optical images
NASA Astrophysics Data System (ADS)
Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.
2015-10-01
This paper investigates the potential accuracy achievable for optical to radar image registration by area-based approach. The analysis is carried out mainly based on the Cramér-Rao Lower Bound (CRLB) on translation estimation accuracy previously proposed by the authors and called CRLBfBm. This bound is now modified to take into account radar image speckle noise properties: spatial correlation and signal-dependency. The newly derived theoretical bound is fed with noise and texture parameters estimated for the co-registered pair of optical Landsat 8 and radar SIR-C images. It is found that difficulty of optical to radar image registration stems more from speckle noise influence than from dissimilarity of the considered kinds of images. At finer scales (and higher speckle noise level), probability of finding control fragments (CF) suitable for registration is low (1% or less) but overall number of such fragments is high thanks to image size. Conversely, at the coarse scale, where speckle noise level is reduced, probability of finding CFs suitable for registration can be as high as 40%, but overall number of such CFs is lower. Thus, the study confirms and supports area-based multiresolution approach for optical to radar registration where coarse scales are used for fast registration "lock" and finer scales for reaching higher registration accuracy. The CRLBfBm is found inaccurate for the main scale due to intensive speckle noise influence. For other scales, the validity of the CRLBfBm bound is confirmed by calculating statistical efficiency of area-based registration method based on normalized correlation coefficient (NCC) measure that takes high values of about 25%.
Interactional Effects of Instructional Quality and Teacher Judgement Accuracy on Achievement.
ERIC Educational Resources Information Center
Helmke, Andreas; Schrader, Friedrich-Wilhelm
1987-01-01
Analysis of predictions of 32 teachers regarding 690 fifth-graders' scores on a mathematics achievement test found that the combination of high judgement accuracy with varied instructional techniques was particularly favorable to students in contrast to a combination of high diagnostic sensitivity with a low frequency of cues or individual…
Resolution limits of ultrafast ultrasound localization microscopy
NASA Astrophysics Data System (ADS)
Desailly, Yann; Pierre, Juliette; Couture, Olivier; Tanter, Mickael
2015-11-01
As in other imaging methods based on waves, the resolution of ultrasound imaging is limited by the wavelength. However, the diffraction-limit can be overcome by super-localizing single events from isolated sources. In recent years, we developed plane-wave ultrasound allowing frame rates up to 20 000 fps. Ultrafast processes such as rapid movement or disruption of ultrasound contrast agents (UCA) can thus be monitored, providing us with distinct punctual sources that could be localized beyond the diffraction limit. We previously showed experimentally that resolutions beyond λ/10 can be reached in ultrafast ultrasound localization microscopy (uULM) using a 128 transducer matrix in reception. Higher resolutions are theoretically achievable and the aim of this study is to predict the maximum resolution in uULM with respect to acquisition parameters (frequency, transducer geometry, sampling electronics). The accuracy of uULM is the error on the localization of a bubble, considered a point-source in a homogeneous medium. The proposed model consists in two steps: determining the timing accuracy of the microbubble echo in radiofrequency data, then transferring this time accuracy into spatial accuracy. The simplified model predicts a maximum resolution of 40 μm for a 1.75 MHz transducer matrix composed of two rows of 64 elements. Experimental confirmation of the model was performed by flowing microbubbles within a 60 μm microfluidic channel and localizing their blinking under ultrafast imaging (500 Hz frame rate). The experimental resolution, determined as the standard deviation in the positioning of the microbubbles, was predicted within 6 μm (13%) of the theoretical values and followed the analytical relationship with respect to the number of elements and depth. Understanding the underlying physical principles determining the resolution of superlocalization will allow the optimization of the imaging setup for each organ. Ultimately, accuracies better than the size of capillaries are achievable at several centimeter depths.
Blood vessel segmentation in color fundus images based on regional and Hessian features.
Shah, Syed Ayaz Ali; Tang, Tong Boon; Faye, Ibrahima; Laude, Augustinus
2017-08-01
To propose a new algorithm of blood vessel segmentation based on regional and Hessian features for image analysis in retinal abnormality diagnosis. Firstly, color fundus images from the publicly available database DRIVE were converted from RGB to grayscale. To enhance the contrast of the dark objects (blood vessels) against the background, the dot product of the grayscale image with itself was generated. To rectify the variation in contrast, we used a 5 × 5 window filter on each pixel. Based on 5 regional features, 1 intensity feature and 2 Hessian features per scale using 9 scales, we extracted a total of 24 features. A linear minimum squared error (LMSE) classifier was trained to classify each pixel into a vessel or non-vessel pixel. The DRIVE dataset provided 20 training and 20 test color fundus images. The proposed algorithm achieves a sensitivity of 72.05% with 94.79% accuracy. Our proposed algorithm achieved higher accuracy (0.9206) at the peripapillary region, where the ocular manifestations in the microvasculature due to glaucoma, central retinal vein occlusion, etc. are most obvious. This supports the proposed algorithm as a strong candidate for automated vessel segmentation.
Rear-end vision-based collision detection system for motorcyclists
NASA Astrophysics Data System (ADS)
Muzammel, Muhammad; Yusoff, Mohd Zuki; Meriaudeau, Fabrice
2017-05-01
In many countries, the motorcyclist fatality rate is much higher than that of other vehicle drivers. Among many other factors, motorcycle rear-end collisions are also contributing to these biker fatalities. To increase the safety of motorcyclists and minimize their road fatalities, this paper introduces a vision-based rear-end collision detection system. The binary road detection scheme contributes significantly to reduce the negative false detections and helps to achieve reliable results even though shadows and different lane markers are present on the road. The methodology is based on Harris corner detection and Hough transform. To validate this methodology, two types of dataset are used: (1) self-recorded datasets (obtained by placing a camera at the rear end of a motorcycle) and (2) online datasets (recorded by placing a camera at the front of a car). This method achieved 95.1% accuracy for the self-recorded dataset and gives reliable results for the rear-end vehicle detections under different road scenarios. This technique also performs better for the online car datasets. The proposed technique's high detection accuracy using a monocular vision camera coupled with its low computational complexity makes it a suitable candidate for a motorbike rear-end collision detection system.
Geomagnetic referencing in the arctic environment
Podjono, Benny; Beck, Nathan; Buchanan, Andrew; Brink, Jason; Longo, Joseph; Finn, Carol A.; Worthington, E. William
2011-01-01
Geomagnetic referencing is becoming an increasingly attractive alternative to north-seeking gyroscopic surveys to achieve the precise wellbore positioning essential for success in today's complex drilling programs. However, the greater magnitude of variations in the geomagnetic environment at higher latitudes makes the application of geomagnetic referencing in those areas more challenging. Precise, real-time data on those variations from relatively nearby magnetic observatories can be crucial to achieving the required accuracy, but constructing and operating an observatory in these often harsh environments poses a number of significant challenges. Operational since March 2010, the Deadhorse Magnetic Observatory (DED), located in Deadhorse, Alaska, was created through collaboration between the United States Geological Survey (USGS) and a leading oilfield services supply company. DED was designed to produce real-time geomagnetic data at the required level of accuracy, and to do so reliably under the extreme temperatures and harsh weather conditions often experienced in the area. The observatory will serve a number of key scientific communities as well as the oilfield drilling industry, and has already played a vital role in the success of several commercial ventures in the area, providing essential, accurate data while offering significant cost and time savings, compared with traditional surveying techniques.
Geomagnetic referencing in the arctic environment
Poedjono, B.; Beck, N.; Buchanan, A. C.; Brink, J.; Longo, J.; Finn, C.A.; Worthington, E.W.
2011-01-01
Geomagnetic referencing is becoming an increasingly attractive alternative to north-seeking gyroscopic surveys to achieve the precise wellbore positioning essential for success in today's complex drilling programs. However, the greater magnitude of variations in the geomagnetic environment at higher latitudes makes the application of geomagnetic referencing in those areas more challenging. Precise, real-time data on those variations from relatively nearby magnetic observatories can be crucial to achieving the required accuracy, but constructing and operating an observatory in these often harsh environments poses a number of significant challenges. Operational since March 2010, the Deadhorse Magnetic Observatory (DED), located in Deadhorse, Alaska, was created through collaboration between the United States Geological Survey (USGS) and a leading oilfield services supply company. DED was designed to produce real-time geomagnetic data at the required level of accuracy, and to do so reliably under the extreme temperatures and harsh weather conditions often experienced in the area. The observatory will serve a number of key scientific communities as well as the oilfield drilling industry, and has already played a vital role in the success of several commercial ventures in the area, providing essential, accurate data while offering significant cost and time savings, compared with traditional surveying techniques. Copyright 2011, Society of Petroleum Engineers.
NASA Astrophysics Data System (ADS)
Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko
2018-04-01
Brain Computer Interface (BCI) can be a challenge for developing of robotic, prosthesis and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) base algorithm to detect event related desynchronization patterns. Utilizing famous previous work in this area, features are extracted by filter bank with common spatial pattern (FBCSP) method, and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, application of the radial basis function (RBF) as a mapping kernel of linear discriminant analysis (KLDA) method on the weighted features, allows the transfer of data into a higher dimension for more discriminated data scattering by RBF kernel. Afterwards, support vector machine (SVM) with generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. Averagely, 89.60% accuracy and 74.19% robustness are achieved. BCI Competition III, Iva data set is used to evaluate the algorithm for detecting right hand and foot imagery movement patterns. Results show that combination of KLDA with SVM-GRBF classifier makes 8.9% and 14.19% improvements in accuracy and robustness, respectively. For all the subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilization GRBF as a kernel of SVM, improve the accuracy and reliability of the proposed method.
Huynh, Roy; Ip, Matthew; Chang, Jeff; Haifer, Craig; Leong, Rupert W
2018-01-01
Confocal laser endomicroscopy (CLE) allows mucosal barrier defects along the intestinal epithelium to be visualized in vivo during endoscopy. Training in CLE interpretation can be achieved didactically or through self-directed learning. This study aimed to compare the effectiveness of expert-led didactic with self-directed audiovisual teaching for training inexperienced analysts on how to recognize mucosal barrier defects on endoscope-based CLE (eCLE). This randomized controlled study involved trainee analysts who were taught how to recognize mucosal barrier defects on eCLE either didactically or through an audiovisual clip. After being trained, they evaluated 6 sets of 30 images. Image evaluation required the trainees to determine whether specific features of barrier dysfunction were present or not. Trainees in the didactic group engaged in peer discussion and received feedback after each set while this did not happen in the self-directed group. Accuracy, sensitivity, and specificity of both groups were compared. Trainees in the didactic group achieved a higher overall accuracy (87.5 % vs 85.0 %, P = 0.002) and sensitivity (84.5 % vs 80.4 %, P = 0.002) compared to trainees in the self-directed group. Interobserver agreement was higher in the didactic group (k = 0.686, 95 % CI 0.680 - 0.691, P < 0.001) than in the self-directed group (k = 0.566, 95 % CI 0.559 - 0.573, P < 0.001). Confidence (OR 6.48, 95 % CI 5.35 - 7.84, P < 0.001) and good image quality (OR 2.58, 95 % CI 2.17 - 2.82, P < 0.001) were positive predictors of accuracy. Expert-led didactic training is more effective than self-directed audiovisual training for teaching inexperienced analysts how to recognize mucosal barrier defects on eCLE.
Number of Biopsies in Diagnosing Pulmonary Nodules
Wehrschuetz, M.; Wehrschuetz, E.; Portugaller, H. R.
2010-01-01
Purpose: To determine the number of specimens to be obtained from pulmonary lesions to get the highest possible accuracy in histological work-up. Materials and methods: A retrospective evaluation (January 1999 to April 2004) covered 260 patients with thoracic lesions who underwent computer tomography (CT)-guided core-cut biopsy in coaxial technique. All biopsies were performed utilizing a 19 gauge introducer needle and a 20 gauge core-cut biopsy needle. In all, 669 usable biopsies were taken (from 1–5 biopsies in each setting). The specimens were marked sequentially and each biopsy was worked up histologicaly. The biopsy results were correlated to histology after surgery, clinical follow-up or autopsy. The number of biopsies was determined that is necessary to achieve the highest possible accuracy in diagnosing pulmonary lesions. Results: In 591 of 669 biopsies (88.3%), there were correct positive results. The overall accuracy was 87.4%. In 193 of 260 (74.2%) patients, a suspected malignancy was confirmed. In 50 of 260 (19.2%) patients, a benign lesion was correctly diagnosed. Seventeen (6.5%) patients were lost to follow-up. The first, second and third biopsies had cumulative accuracies of 63.6%, 89.2% and 91.5%, respectively (P < 0.02). More biopsies did not show any higher impact on accuracy. Conclusion: For the highest possible accuracy in diagnosing pulmonary lesions by CT-guided core-cut biopsy, at least three usable specimens are recommended to be taken. PMID:21157523
Beran, Gregory J O; Hartman, Joshua D; Heit, Yonaton N
2016-11-15
Molecular crystals occur widely in pharmaceuticals, foods, explosives, organic semiconductors, and many other applications. Thanks to substantial progress in electronic structure modeling of molecular crystals, attention is now shifting from basic crystal structure prediction and lattice energy modeling toward the accurate prediction of experimentally observable properties at finite temperatures and pressures. This Account discusses how fragment-based electronic structure methods can be used to model a variety of experimentally relevant molecular crystal properties. First, it describes the coupling of fragment electronic structure models with quasi-harmonic techniques for modeling the thermal expansion of molecular crystals, and what effects this expansion has on thermochemical and mechanical properties. Excellent agreement with experiment is demonstrated for the molar volume, sublimation enthalpy, entropy, and free energy, and the bulk modulus of phase I carbon dioxide when large basis second-order Møller-Plesset perturbation theory (MP2) or coupled cluster theories (CCSD(T)) are used. In addition, physical insight is offered into how neglect of thermal expansion affects these properties. Zero-point vibrational motion leads to an appreciable expansion in the molar volume; in carbon dioxide, it accounts for around 30% of the overall volume expansion between the electronic structure energy minimum and the molar volume at the sublimation point. In addition, because thermal expansion typically weakens the intermolecular interactions, neglecting thermal expansion artificially stabilizes the solid and causes the sublimation enthalpy to be too large at higher temperatures. Thermal expansion also frequently weakens the lower-frequency lattice phonon modes; neglecting thermal expansion causes the entropy of sublimation to be overestimated. Interestingly, the sublimation free energy is less significantly affected by neglecting thermal expansion because the systematic errors in the enthalpy and entropy cancel somewhat. Second, because solid state nuclear magnetic resonance (NMR) plays an increasingly important role in molecular crystal studies, this Account discusses how fragment methods can be used to achieve higher-accuracy chemical shifts in molecular crystals. Whereas widely used plane wave density functional theory models are largely restricted to generalized gradient approximation (GGA) functionals like PBE in practice, fragment methods allow the routine use of hybrid density functionals with only modest increases in computational cost. In extensive molecular crystal benchmarks, hybrid functionals like PBE0 predict chemical shifts with 20-30% higher accuracy than GGAs, particularly for 1 H, 13 C, and 15 N nuclei. Due to their higher sensitivity to polarization effects, 17 O chemical shifts prove slightly harder to predict with fragment methods. Nevertheless, the fragment model results are still competitive with those from GIPAW. The improved accuracy achievable with fragment approaches and hybrid density functionals increases discrimination between different potential assignments of individual shifts or crystal structures, which is critical in NMR crystallography applications. This higher accuracy and greater discrimination are highlighted in application to the solid state NMR of different acetaminophen and testosterone crystal forms.
[A new peak detection algorithm of Raman spectra].
Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing
2014-01-01
The authors proposed a new Raman peak recognition method named bi-scale correlation algorithm. The algorithm uses the combination of the correlation coefficient and the local signal-to-noise ratio under two scales to achieve Raman peak identification. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method through MATLAB, and then tested the algorithm with real Raman spectra. The results show that the average time for identifying a Raman spectrum is 0.51 s with the algorithm, while it is 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of Raman peak is greater than or equal to 6 (modern Raman spectrometers feature an excellent signal-to-noise ratio), the recognition accuracy with the algorithm is higher than 99%, while it is less than 84% with the continuous wavelet transform method. The mean and the standard deviations of the peak position identification error of the algorithm are both less than that of the continuous wavelet transform method. Simulation analysis and experimental verification prove that the new algorithm possesses the following advantages: no needs of human intervention, no needs of de-noising and background removal operation, higher recognition speed and higher recognition accuracy. The proposed algorithm is operable in Raman peak identification.
NASA Astrophysics Data System (ADS)
Pineda, M.; Stamatakis, M.
2017-07-01
Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
Automatic sub-pixel coastline extraction based on spectral mixture analysis using EO-1 Hyperion data
NASA Astrophysics Data System (ADS)
Hong, Zhonghua; Li, Xuesu; Han, Yanling; Zhang, Yun; Wang, Jing; Zhou, Ruyan; Hu, Kening
2018-06-01
Many megacities (such as Shanghai) are located in coastal areas, therefore, coastline monitoring is critical for urban security and urban development sustainability. A shoreline is defined as the intersection between coastal land and a water surface and features seawater edge movements as tides rise and fall. Remote sensing techniques have increasingly been used for coastline extraction; however, traditional hard classification methods are performed only at the pixel-level and extracting subpixel accuracy using soft classification methods is both challenging and time consuming due to the complex features in coastal regions. This paper presents an automatic sub-pixel coastline extraction method (ASPCE) from high-spectral satellite imaging that performs coastline extraction based on spectral mixture analysis and, thus, achieves higher accuracy. The ASPCE method consists of three main components: 1) A Water- Vegetation-Impervious-Soil (W-V-I-S) model is first presented to detect mixed W-V-I-S pixels and determine the endmember spectra in coastal regions; 2) The linear spectral mixture unmixing technique based on Fully Constrained Least Squares (FCLS) is applied to the mixed W-V-I-S pixels to estimate seawater abundance; and 3) The spatial attraction model is used to extract the coastline. We tested this new method using EO-1 images from three coastal regions in China: the South China Sea, the East China Sea, and the Bohai Sea. The results showed that the method is accurate and robust. Root mean square error (RMSE) was utilized to evaluate the accuracy by calculating the distance differences between the extracted coastline and the digitized coastline. The classifier's performance was compared with that of the Multiple Endmember Spectral Mixture Analysis (MESMA), Mixture Tuned Matched Filtering (MTMF), Sequential Maximum Angle Convex Cone (SMACC), Constrained Energy Minimization (CEM), and one classical Normalized Difference Water Index (NDWI). The results from the three test sites indicated that the proposed ASPCE method extracted coastlines more efficiently than did the compared methods, and its coastline extraction accuracy corresponded closely to the digitized coastline, with 0.39 pixels, 0.40 pixels, and 0.35 pixels in the three test regions, showing that the ASPCE method achieves an accuracy below 12.0 m (0.40 pixels). Moreover, in the quantitative accuracy assessment for the three test sites, the ASPCE method shows the best performance in coastline extraction, achieving a 0.35 pixel-level at the Bohai Sea, China test site. Therefore, the proposed ASPCE method can extract coastline more accurately than can the hard classification methods or other spectral unmixing methods.
Discrimination of natural and cultivated vegetation using Thematic Mapper spectral data
NASA Technical Reports Server (NTRS)
Degloria, Stephen D.; Bernstein, Ralph; Dizenzo, Silvano
1986-01-01
The availability of high quality spectral data from the current suite of earth observation satellite systems offers significant improvements in the ability to survey and monitor food and fiber production on both a local and global basis. Current research results indicate that Landsat TM data when used in either digital or analog formats achieve higher land-cover classification accuracies than MSS data using either comparable or improved spectral bands and spatial resolution. A review of these quantitative results is presented for both natural and cultivated vegetation.
Unsupervised chunking based on graph propagation from bilingual corpus.
Zhu, Ling; Wong, Derek F; Chao, Lidia S
2014-01-01
This paper presents a novel approach for unsupervised shallow parsing model trained on the unannotated Chinese text of parallel Chinese-English corpus. In this approach, no information of the Chinese side is applied. The exploitation of graph-based label propagation for bilingual knowledge transfer, along with an application of using the projected labels as features in unsupervised model, contributes to a better performance. The experimental comparisons with the state-of-the-art algorithms show that the proposed approach is able to achieve impressive higher accuracy in terms of F-score.
NASA Technical Reports Server (NTRS)
1976-01-01
This report covers the development of a three channel Hall effect position sensing system for the commutation of a three phase dc torquer motor. The effort consisted of the evaluation, modification and re-packaging of a commercial position sensor and the design of a target configuration unique to this application. The resulting design meets the contract requirements and, furthermore, the test results indicate not only the practicality and versatility of the design, but also that there may be higher limits of resolution and accuracy achievable.
NASA Technical Reports Server (NTRS)
Ito, K.
1983-01-01
Approximation schemes based on Legendre-tau approximation are developed for application to parameter identification problem for delay and partial differential equations. The tau method is based on representing the approximate solution as a truncated series of orthonormal functions. The characteristic feature of the Legendre-tau approach is that when the solution to a problem is infinitely differentiable, the rate of convergence is faster than any finite power of 1/N; higher accuracy is thus achieved, making the approach suitable for small N.
Hybrid classical/quantum simulation for infrared spectroscopy of water
NASA Astrophysics Data System (ADS)
Maekawa, Yuki; Sasaoka, Kenji; Ube, Takuji; Ishiguro, Takashi; Yamamoto, Takahiro
2018-05-01
We have developed a hybrid classical/quantum simulation method to calculate the infrared (IR) spectrum of water. The proposed method achieves much higher accuracy than conventional classical molecular dynamics (MD) simulations at a much lower computational cost than ab initio MD simulations. The IR spectrum of water is obtained as an ensemble average of the eigenvalues of the dynamical matrix constructed by ab initio calculations, using the positions of oxygen atoms that constitute water molecules obtained from the classical MD simulation. The calculated IR spectrum is in excellent agreement with the experimental IR spectrum.
"Battleship Numberline": A Digital Game for Improving Estimation Accuracy on Fraction Number Lines
ERIC Educational Resources Information Center
Lomas, Derek; Ching, Dixie; Stampfer, Eliane; Sandoval, Melanie; Koedinger, Ken
2011-01-01
Given the strong relationship between number line estimation accuracy and math achievement, might a computer-based number line game help improve math achievement? In one study by Rittle-Johnson, Siegler and Alibali (2001), a simple digital game called "Catch the Monster" provided practice in estimating the location of decimals on a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs-Gedrim, Robin B.; Agarwal, Sapan; Knisely, Kathrine E.
Resistive memory (ReRAM) shows promise for use as an analog synapse element in energy-efficient neural network algorithm accelerators. A particularly important application is the training of neural networks, as this is the most computationally-intensive procedure in using a neural algorithm. However, training a network with analog ReRAM synapses can significantly reduce the accuracy at the algorithm level. In order to assess this degradation, analog properties of ReRAM devices were measured and hand-written digit recognition accuracy was modeled for the training using backpropagation. Bipolar filamentary devices utilizing three material systems were measured and compared: one oxygen vacancy system, Ta-TaO x, andmore » two conducting metallization systems, Cu-SiO 2, and Ag/chalcogenide. Analog properties and conductance ranges of the devices are optimized by measuring the response to varying voltage pulse characteristics. Key analog device properties which degrade the accuracy are update linearity and write noise. Write noise may improve as a function of device manufacturing maturity, but write nonlinearity appears relatively consistent among the different device material systems and is found to be the most significant factor affecting accuracy. As a result, this suggests that new materials and/or fundamentally different resistive switching mechanisms may be required to improve device linearity and achieve higher algorithm training accuracy.« less
Jacobs-Gedrim, Robin B.; Agarwal, Sapan; Knisely, Kathrine E.; ...
2017-12-01
Resistive memory (ReRAM) shows promise for use as an analog synapse element in energy-efficient neural network algorithm accelerators. A particularly important application is the training of neural networks, as this is the most computationally-intensive procedure in using a neural algorithm. However, training a network with analog ReRAM synapses can significantly reduce the accuracy at the algorithm level. In order to assess this degradation, analog properties of ReRAM devices were measured and hand-written digit recognition accuracy was modeled for the training using backpropagation. Bipolar filamentary devices utilizing three material systems were measured and compared: one oxygen vacancy system, Ta-TaO x, andmore » two conducting metallization systems, Cu-SiO 2, and Ag/chalcogenide. Analog properties and conductance ranges of the devices are optimized by measuring the response to varying voltage pulse characteristics. Key analog device properties which degrade the accuracy are update linearity and write noise. Write noise may improve as a function of device manufacturing maturity, but write nonlinearity appears relatively consistent among the different device material systems and is found to be the most significant factor affecting accuracy. As a result, this suggests that new materials and/or fundamentally different resistive switching mechanisms may be required to improve device linearity and achieve higher algorithm training accuracy.« less
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
Comparison between multi-constellation ambiguity-fixed PPP and RTK for maritime precise navigation
NASA Astrophysics Data System (ADS)
Tegedor, Javier; Liu, Xianglin; Ørpen, Ole; Treffers, Niels; Goode, Matthew; Øvstedal, Ola
2015-06-01
In order to achieve high-accuracy positioning, either Real-Time Kinematic (RTK) or Precise Point Positioning (PPP) techniques can be used. While RTK normally delivers higher accuracy with shorter convergence times, PPP has been an attractive technology for maritime applications, as it delivers uniform positioning performance without the direct need of a nearby reference station. Traditional PPP has been based on ambiguity-float solutions using GPS and Glonass constellations. However, the addition of new satellite systems, such as Galileo and BeiDou, and the possibility of fixing integer carrier-phase ambiguities (PPP-AR) allow to increase PPP accuracy. In this article, a performance assessment has been done between RTK, PPP and PPP-AR, using GNSS data collected from two antennas installed on a ferry navigating in Oslo (Norway). RTK solutions have been generated using short, medium and long baselines (up to 290 km). For the generation of PPP-AR solutions, Uncalibrated Hardware Delays (UHDs) for GPS, Galileo and BeiDou have been estimated using reference stations in Oslo and Onsala. The performance of RTK and multi-constellation PPP and PPP-AR are presented.
Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang
2016-01-01
The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
Nutrigenomics, beta-cell function and type 2 diabetes.
Nino-Fong, R; Collins, Tm; Chan, Cb
2007-03-01
The present investigation was designed to investigate the accuracy and precision of lactate measurement obtained with contemporary biosensors (Chiron Diagnostics, Nova Biomedical) and standard enzymatic photometric procedures (Sigma Diagnostics, Abbott Laboratories, Analyticon). Measurements were performed in vitro before and after the stepwise addition of 1 molar sodium lactate solution to samples of fresh frozen plasma to systematically achieve lactate concentrations of up to 20 mmol/l. Precision of the methods investigated varied between 1% and 7%, accuracy ranged between 2% and -33% with the variability being lowest in the Sigma photometric procedure (6%) and more than 13% in both biosensor methods. Biosensors for lactate measurement provide adequate accuracy in mean with the limitation of highly variable results. A true lactate value of 6 mmol/l was found to be presented between 4.4 and 7.6 mmol/l or even with higher difference. Biosensors and standard enzymatic photometric procedures are only limited comparable because the differences between paired determinations presented to be several mmol. The advantage of biosensors is the complete lack of preanalytical sample preparation which appeared to be the major limitation of standard photometry methods.
Identifying Autism from Resting-State fMRI Using Long Short-Term Memory Networks.
Dvornek, Nicha C; Ventola, Pamela; Pelphrey, Kevin A; Duncan, James S
2017-09-01
Functional magnetic resonance imaging (fMRI) has helped characterize the pathophysiology of autism spectrum disorders (ASD) and carries promise for producing objective biomarkers for ASD. Recent work has focused on deriving ASD biomarkers from resting-state functional connectivity measures. However, current efforts that have identified ASD with high accuracy were limited to homogeneous, small datasets, while classification results for heterogeneous, multi-site data have shown much lower accuracy. In this paper, we propose the use of recurrent neural networks with long short-term memory (LSTMs) for classification of individuals with ASD and typical controls directly from the resting-state fMRI time-series. We used the entire large, multi-site Autism Brain Imaging Data Exchange (ABIDE) I dataset for training and testing the LSTM models. Under a cross-validation framework, we achieved classification accuracy of 68.5%, which is 9% higher than previously reported methods that used fMRI data from the whole ABIDE cohort. Finally, we presented interpretation of the trained LSTM weights, which highlight potential functional networks and regions that are known to be implicated in ASD.
Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.
Li, Linyi; Xu, Tingbao; Chen, Yun
2017-01-01
In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. And a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the others according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.
Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features
Xu, Tingbao; Chen, Yun
2017-01-01
In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. And a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the others according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images. PMID:28761440
Wang, Cong; Du, Hua-qiang; Zhou, Guo-mo; Xu, Xiao-jun; Sun, Shao-bo; Gao, Guo-long
2015-05-01
This research focused on the application of remotely sensed imagery from unmanned aerial vehicle (UAV) with high spatial resolution for the estimation of crown closure of moso bamboo forest based on the geometric-optical model, and analyzed the influence of unconstrained and fully constrained linear spectral mixture analysis (SMA) on the accuracy of the estimated results. The results demonstrated that the combination of UAV remotely sensed imagery and geometric-optical model could, to some degrees, achieve the estimation of crown closure. However, the different SMA methods led to significant differentiation in the estimation accuracy. Compared with unconstrained SMA, the fully constrained linear SMA method resulted in higher accuracy of the estimated values, with the coefficient of determination (R2) of 0.63 at 0.01 level, against the measured values acquired during the field survey. Root mean square error (RMSE) of approximate 0.04 was low, indicating that the usage of fully constrained linear SMA could bring about better results in crown closure estimation, which was closer to the actual condition in moso bamboo forest.
Identifying Autism from Resting-State fMRI Using Long Short-Term Memory Networks
Dvornek, Nicha C.; Ventola, Pamela; Pelphrey, Kevin A.; Duncan, James S.
2017-01-01
Functional magnetic resonance imaging (fMRI) has helped characterize the pathophysiology of autism spectrum disorders (ASD) and carries promise for producing objective biomarkers for ASD. Recent work has focused on deriving ASD biomarkers from resting-state functional connectivity measures. However, current efforts that have identified ASD with high accuracy were limited to homogeneous, small datasets, while classification results for heterogeneous, multi-site data have shown much lower accuracy. In this paper, we propose the use of recurrent neural networks with long short-term memory (LSTMs) for classification of individuals with ASD and typical controls directly from the resting-state fMRI time-series. We used the entire large, multi-site Autism Brain Imaging Data Exchange (ABIDE) I dataset for training and testing the LSTM models. Under a cross-validation framework, we achieved classification accuracy of 68.5%, which is 9% higher than previously reported methods that used fMRI data from the whole ABIDE cohort. Finally, we presented interpretation of the trained LSTM weights, which highlight potential functional networks and regions that are known to be implicated in ASD. PMID:29104967
Kuang, Cuifang; Ali, M Yakut; Hao, Xiang; Wang, Tingting; Liu, Xu
2010-10-01
In order to achieve a higher axial resolution for displacement measurement, a novel method is proposed based on total internal reflection filter and confocal microscope principle. A theoretical analysis of the basic measurement principles is presented. The analysis reveals that the proposed confocal detection scheme is effective in enhancing the resolution of nonlinearity of the reflectance curve greatly. In addition, a simple prototype system has been developed based on the theoretical analysis and a series of experiments have been performed under laboratory conditions to verify the system feasibility, accuracy, and stability. The experimental results demonstrate that the axial resolution in displacement measurements is better than 1 nm in a range of 200 nm which is threefold better than that can be achieved using the plane reflector.
Space telescope scientific instruments
NASA Technical Reports Server (NTRS)
Leckrone, D. S.
1979-01-01
The paper describes the Space Telescope (ST) observatory, the design concepts of the five scientific instruments which will conduct the initial observatory observations, and summarizes their astronomical capabilities. The instruments are the wide-field and planetary camera (WFPC) which will receive the highest quality images, the faint-object camera (FOC) which will penetrate to the faintest limiting magnitudes and achieve the finest angular resolution possible, and the faint-object spectrograph (FOS), which will perform photon noise-limited spectroscopy and spectropolarimetry on objects substantially fainter than those accessible to ground-based spectrographs. In addition, the high resolution spectrograph (HRS) will provide higher spectral resolution with greater photometric accuracy than previously possible in ultraviolet astronomical spectroscopy, and the high-speed photometer will achieve precise time-resolved photometric observations of rapidly varying astronomical sources on short time scales.
Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models
NASA Astrophysics Data System (ADS)
Zang, Tianwu
Predicting the 3-dimentional structure of protein has been a major interest in the modern computational biology. While lots of successful methods can generate models with 3˜5A root-mean-square deviation (RMSD) from the solution, the progress of refining these models is quite slow. It is therefore urgently needed to develop effective methods to bring low-quality models to higher-accuracy ranges (e.g., less than 2 A RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate the molecular dynamics (MD) simulation. Second, two energy biasing methods, Structure-Based Model (SBM) and Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to make significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in the MD simulation of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, in the refinement test of two CASP10 targets using the PCST-EBM method, it is indicated that EBM may bring the initial model to even higher-quality levels. Furthermore, a multi-round refinement protocol of PCST-SBM improves the model quality of a protein to the level that is sufficient high for the molecular replacement in X-ray crystallography. Our results justify the crucial position of enhanced sampling in the protein structure prediction and demonstrate that a considerable improvement of low-accuracy structures is still achievable with current force fields.
Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan
2013-01-01
Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and player’s achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high-intensities (90%) set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine if this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVA’s revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender interactions were found. Fatigue effects were also equivalent regardless of player’s achievement goal indicators. Future research is required to explore the effects of fatigue on performance in tennis using ecologically valid designs that mimic more closely the demands of match play. Key Points Groundstroke accuracy under moderate-intensity fatigue is equivalent to performance at rest. Groundstroke accuracy declines significantly in both expert (40.3% decline) and non-expert (49.6%) tennis players following high-intensity fatigue. Expert players are more consistent, hit more accurate shots and fewer out shots across all fatigue intensities. The effects of fatigue on groundstroke accuracy are the same regardless of gender and player’s achievement goal indicators. PMID:24149809
2011-01-01
Background When a specimen belongs to a species not yet represented in DNA barcode reference libraries there is disagreement over the effectiveness of using sequence comparisons to assign the query accurately to a higher taxon. Library completeness and the assignment criteria used have been proposed as critical factors affecting the accuracy of such assignments but have not been thoroughly investigated. We explored the accuracy of assignments to genus, tribe and subfamily in the Sphingidae, using the almost complete global DNA barcode reference library (1095 species) available for this family. Costa Rican sphingids (118 species), a well-documented, diverse subset of the family, with each of the tribes and subfamilies represented were used as queries. We simulated libraries with different levels of completeness (10-100% of the available species), and recorded assignments (positive or ambiguous) and their accuracy (true or false) under six criteria. Results A liberal tree-based criterion assigned 83% of queries accurately to genus, 74% to tribe and 90% to subfamily, compared to a strict tree-based criterion, which assigned 75% of queries accurately to genus, 66% to tribe and 84% to subfamily, with a library containing 100% of available species (but excluding the species of the query). The greater number of true positives delivered by more relaxed criteria was negatively balanced by the occurrence of more false positives. This effect was most sharply observed with libraries of the lowest completeness where, for example at the genus level, 32% of assignments were false positives with the liberal criterion versus < 1% when using the strict. We observed little difference (< 8% using the liberal criterion) however, in the overall accuracy of the assignments between the lowest and highest levels of library completeness at the tribe and subfamily level. Conclusions Our results suggest that when using a strict tree-based criterion for higher taxon assignment with DNA barcodes, the likelihood of assigning a query a genus name incorrectly is very low, if a genus name is provided it has a high likelihood of being accurate, and if no genus match is available the query can nevertheless be assigned to a subfamily with high accuracy regardless of library completeness. DNA barcoding often correctly assigned sphingid moths to higher taxa when species matches were unavailable, suggesting that barcode reference libraries can be useful for higher taxon assignments long before they achieve complete species coverage. PMID:21806794
NASA Astrophysics Data System (ADS)
Titterington, Lynda C.
2007-12-01
This study presents a framework for examining the effects of higher order thinking on the achievement of allied health students enrolled in a pathophysiology course. A series of clinical case studies was developed and published in an enriched online environment that guided students through the process of developing a solution and supporting it through data analysis and interpretation. The series of case study modules scaffolded argumentation through question prompts. The modules began with a simple, direct problem and they became progressively more complex throughout the quarter. A control group was assigned a pencil-and-paper case study based upon recall. The case studies were scored for content accuracy and evidence of higher order thinking skills. Higher order thinking was measured using a rubric based upon the Toulmin argumentation pattern. The results indicated implementing a case study of either online or traditional format was associated with significant gains in achievement. The Web-enhanced case studies were associated with modest gains in knowledge acquisition. The argumentation scores across the series followed two trends: directed case studies were associated with higher levels of argumentation than ill-structured case studies, and there appeared to be an inverse relationship between the students' argumentation and content scores. The protocols developed for this study can serve as a template for a larger, extended investigation into student learning in the online environment.
van der Merwe, Debbie; Van Dyk, Jacob; Healy, Brendan; Zubizarreta, Eduardo; Izewska, Joanna; Mijnheer, Ben; Meghzifene, Ahmed
2017-01-01
Radiotherapy technology continues to advance and the expectation of improved outcomes requires greater accuracy in various radiotherapy steps. Different factors affect the overall accuracy of dose delivery. Institutional comprehensive quality assurance (QA) programs should ensure that uncertainties are maintained at acceptable levels. The International Atomic Energy Agency has recently developed a report summarizing the accuracy achievable and the suggested action levels, for each step in the radiotherapy process. Overview of the report: The report seeks to promote awareness and encourage quantification of uncertainties in order to promote safer and more effective patient treatments. The radiotherapy process and the radiobiological and clinical frameworks that define the need for accuracy are depicted. Factors that influence uncertainty are described for a range of techniques, technologies and systems. Methodologies for determining and combining uncertainties are presented, and strategies for reducing uncertainties through QA programs are suggested. The role of quality audits in providing international benchmarking of achievable accuracy and realistic action levels is also discussed. The report concludes with nine general recommendations: (1) Radiotherapy should be applied as accurately as reasonably achievable, technical and biological factors being taken into account. (2) For consistency in prescribing, reporting and recording, recommendations of the International Commission on Radiation Units and Measurements should be implemented. (3) Each institution should determine uncertainties for their treatment procedures. Sample data are tabulated for typical clinical scenarios with estimates of the levels of accuracy that are practically achievable and suggested action levels. (4) Independent dosimetry audits should be performed regularly. (5) Comprehensive quality assurance programs should be in place. (6) Professional staff should be appropriately educated and adequate staffing levels should be maintained. (7) For reporting purposes, uncertainties should be presented. (8) Manufacturers should provide training on all equipment. (9) Research should aid in improving the accuracy of radiotherapy. Some example research projects are suggested.
Accuracy of torque-limiting devices: A comparative evaluation.
Albayrak, Haydar; Gumus, Hasan Onder; Tursun, Funda; Kocaagaoglu, Hasan Huseyin; Kilinc, Halil Ibrahim
2017-01-01
To prevent the loosening of implant screws, clinicians should be aware of the output torque values needed to achieve the desired preload. Accurate torque-control devices are crucial in this regard; however, little information is currently available comparing the accuracy of mechanical with that of electronic torque-control devices. The purpose of this in vitro study was to identify and compare the accuracy of different types of torque-control devices. Devices from 5 different dental implant manufacturers were evaluated, including 2 spring-type (Straumann, Implance) mechanical devices (MTLD), 2 friction-type (Biohorizons, Dyna) MTLDs, and 1 (Megagen) electronic torque-control device (ETLD). For each manufacturer, 5 devices were tested 5 times with a digital torque tester, and the average for each device was calculated and recorded. The percentage of absolute deviations from the target torque values (PERDEV) were calculated and compared by using 1-way ANOVA. A 1-sample t test was used to evaluate the ability of each device to achieve its target torque value within a 95% confidence interval for the true population mean of measured values (α=.05 for all statistical analyses). One-way ANOVAs revealed statistically significant differences among torque-control devices (P<.001). ETLD showed higher PERDEVs (28.33 ±9.53) than MTLDs (P<.05), whereas PERDEVS of friction-type (7.56 ±3.64) and spring-type (10.85 ±4.11) MTLDs did not differ significantly. In addition, devices produced by Megagen had a significantly higher (P<.05) PERDEV (28.33 ±9.53) other devices, whereas no differences were found in devices manufactured by Biohorizons (7.31 ±5.34), Dyna (7.82 ±1.08), Implance (8.43 ±4.77), and Straumann (13.26 ±0.79). However, 1-sample t tests showed none of the torque-control devices evaluated in this study were capable of achieving their target torque values (P<.05). Within the limitations of this in vitro study, MTLDs were shown to be significantly more accurate than ETLDs. However, none of the torque-control devices evaluated were able to meet their target torque values successfully. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Bol, Linda; Hacker, Douglas J.; Walck, Camilla C.; Nunnery, John A.
2012-01-01
A 2 x 2 factorial design was employed in a quasi-experiment to investigate the effects of guidelines in group or individual settings on the calibration accuracy and achievement of 82 high school biology students. Significant main effects indicated that calibration practice with guidelines and practice in group settings increased prediction and…
Stallkamp, J; Schraft, R D
2005-01-01
In minimally invasive surgery, a higher degree of accuracy is required by surgeons both for current and for future applications. This could be achieved using either a manipulator or a robot which would undertake selected tasks during surgery. However, a manually-controlled manipulator cannot fully exploit the maximum accuracy and feasibility of three-dimensional motion sequences. Therefore, apart from being used to perform simple positioning tasks, manipulators will probably be replaced by robot systems more and more in the future. However, in order to use a robot, accurate, up-to-date and extensive data is required which cannot yet be acquired by typical sensors such as CT, MRI, US or common x-ray machines. This paper deals with a new sensor and a concept for its application in robot-assisted minimally invasive surgery on soft tissue which could be a solution for data acquisition in future. Copyright 2005 Robotic Publications Ltd.
Use of noncrystallographic symmetry for automated model building at medium to low resolution.
Wiegels, Tim; Lamzin, Victor S
2012-04-01
A novel method is presented for the automatic detection of noncrystallographic symmetry (NCS) in macromolecular crystal structure determination which does not require the derivation of molecular masks or the segmentation of density. It was found that throughout structure determination the NCS-related parts may be differently pronounced in the electron density. This often results in the modelling of molecular fragments of variable length and accuracy, especially during automated model-building procedures. These fragments were used to identify NCS relations in order to aid automated model building and refinement. In a number of test cases higher completeness and greater accuracy of the obtained structures were achieved, specifically at a crystallographic resolution of 2.3 Å or poorer. In the best case, the method allowed the building of up to 15% more residues automatically and a tripling of the average length of the built fragments.
Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study
Hosseinyalamdary, Siavash
2018-01-01
Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy. PMID:29695119
Li, Hang; He, Junting; Liu, Qin; Huo, Zhaohui; Liang, Si; Liang, Yong
2011-03-01
A tandem solid-phase extraction method (SPE) of connecting two different cartridges (C(18) and MCX) in series was developed as the extraction procedure in this article, which provided better extraction yields (>86%) for all analytes and more appropriate sample purification from endogenous interference materials compared with a single cartridge. Analyte separation was achieved on a C(18) reversed-phase column at the wavelength of 265 nm by high-performance liquid chromatography (HPLC). The method was validated in terms of extraction yield, precision and accuracy. These assays gave mean accuracy values higher than 89% with RSD values that were always less than 3.8%. The method has been successfully applied to plasma samples from rats after oral administration of target compounds. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
New high-precision drift-tube detectors for the ATLAS muon spectrometer
NASA Astrophysics Data System (ADS)
Kroha, H.; Fakhrutdinov, R.; Kozhin, A.
2017-06-01
Small-diameter muon drift tube (sMDT) detectors have been developed for upgrades of the ATLAS muon spectrometer. With a tube diameter of 15 mm, they provide an about an order of magnitude higher rate capability than the present ATLAS muon tracking detectors, the MDT chambers with 30 mm tube diameter. The drift-tube design and the construction methods have been optimised for mass production and allow for complex shapes required for maximising the acceptance. A record sense wire positioning accuracy of 5 μm has been achieved with the new design. In the serial production, the wire positioning accuracy is routinely better than 10 μm. 14 new sMDT chambers are already operational in ATLAS, further 16 are under construction for installation in the 2019-2020 LHC shutdown. For the upgrade of the barrel muon spectrometer for High-Luminosity LHC, 96 sMDT chambers will be contructed between 2020 and 2024.
GPU-based real-time trinocular stereo vision
NASA Astrophysics Data System (ADS)
Yao, Yuanbin; Linton, R. J.; Padir, Taskin
2013-01-01
Most stereovision applications are binocular which uses information from a 2-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a 3-camera array has been proved to provide higher accuracy in stereo matching which could benefit applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (General-purpose graphics processing unit) using a trinocular stereovision camera array. Algorithm employs a winner-take-all method applied to perform fusion of disparities in different directions following various image processing techniques to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU involving the use of Open Source Computer Vision Library (OpenCV) in C++ and NVidia CUDA GPGPU Solution. The results are compared in accuracy and speed to verify the improvement.
Multi-spectral pyrometer for gas turbine blade temperature measurement
NASA Astrophysics Data System (ADS)
Gao, Shan; Wang, Lixin; Feng, Chi
2014-09-01
To achieve the highest possible turbine inlet temperature requires to accurately measuring the turbine blade temperature. If the temperature of blade frequent beyond the design limits, it will seriously reduce the service life. The problem for the accuracy of the temperature measurement includes the value of the target surface emissivity is unknown and the emissivity model is variability and the thermal radiation of the high temperature environment. In this paper, the multi-spectral pyrometer is designed provided mainly for range 500-1000°, and present a model corrected in terms of the error due to the reflected radiation only base on the turbine geometry and the physical properties of the material. Under different working conditions, the method can reduce the measurement error from the reflect radiation of vanes, make measurement closer to the actual temperature of the blade and calculating the corresponding model through genetic algorithm. The experiment shows that this method has higher accuracy measurements.
Design considerations for a real-time ocular counterroll instrument
NASA Technical Reports Server (NTRS)
Hatamian, M.; Anderson, D. J.
1983-01-01
A real-time algorithm for measuring three-dimensional movement of the human eye, especially torsional movement, is presented. As its input, the system uses images of the eyeball taken at video rate. The amount of horizontal and vertical movement is extracted using a pupil tracking technique. The torsional movement is then measured by computing the discrete cross-correlation function between the circular samples of successive images of the iris patterns and searching for the position of the peak of the function. A local least square interpolation around the peak of the cross-correlation function is used to produce nearly unbiased estimates of torsion angle with accuracy of about 3-4 arcmin. Accuracies of better than 0.03 deg are achievable in torsional measurement with SNR higher than 36 dB. Horizontal and vertical rotations of up to + or - 13 deg can occur simultaneously with torsion without introducing any appreciable error in the counterrolling measurement process.
Accurate band-to-band registration of AOTF imaging spectrometer using motion detection technology
NASA Astrophysics Data System (ADS)
Zhou, Pengwei; Zhao, Huijie; Jin, Shangzhong; Li, Ningchuan
2016-05-01
This paper concerns the problem of platform vibration induced band-to-band misregistration with acousto-optic imaging spectrometer in spaceborne application. Registrating images of different bands formed at different time or different position is difficult, especially for hyperspectral images form acousto-optic tunable filter (AOTF) imaging spectrometer. In this study, a motion detection method is presented using the polychromatic undiffracted beam of AOTF. The factors affecting motion detect accuracy are analyzed theoretically, and calculations show that optical distortion is an easily overlooked factor to achieve accurate band-to-band registration. Hence, a reflective dual-path optical system has been proposed for the first time, with reduction of distortion and chromatic aberration, indicating the potential of higher registration accuracy. Consequently, a spectra restoration experiment using additional motion detect channel is presented for the first time, which shows the accurate spectral image registration capability of this technique.
NASA Astrophysics Data System (ADS)
Zhai, Peng-Wang; Hu, Yongxiang; Josset, Damien B.; Trepte, Charles R.; Lucker, Patricia L.; Lin, Bing
2012-06-01
We have developed a Vector Radiative Transfer (VRT) code for coupled atmosphere and ocean systems based on the successive order of scattering (SOS) method. In order to achieve efficiency and maintain accuracy, the scattering matrix is expanded in terms of the Wigner d functions and the delta fit or delta-M technique is used to truncate the commonly-present large forward scattering peak. To further improve the accuracy of the SOS code, we have implemented the analytical first order scattering treatment using the exact scattering matrix of the medium in the SOS code. The expansion and truncation techniques are kept for higher order scattering. The exact first order scattering correction was originally published by Nakajima and Takana.1 A new contribution of this work is to account for the exact secondary light scattering caused by the light reflected by and transmitted through the rough air-sea interface.
Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study.
Hosseinyalamdary, Siavash
2018-04-24
Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy.
A Novel Energy-Efficient Approach for Human Activity Recognition.
Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Peng, Ao; Tang, Biyu; Lu, Hai; Shi, Haibin; Zheng, Huiru
2017-09-08
In this paper, we propose a novel energy-efficient approach for mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve a high accuracy of activity recognition when the sampling rate is less than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with the data collected from 20 volunteers (14 males and six females) and the average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1Hz can save 17.3% and 59.6% of energy compared with the sampling rates of 5 Hz and 50 Hz. The proposed low sampling rate approach can greatly reduce the power consumption while maintaining high activity recognition accuracy. The composition of power consumption in online ARS is also investigated in this paper.
Sexual differences in behavioral thermoregulation of the lizard Scelarcis perspicillata.
Ortega, Zaida; Mencía, Abraham; Pérez-Mellado, Valentín
2016-10-01
Temperature determines all aspects of the biology of ectotherms. Although sexual differences in thermal ecology are not the rule in lizards, some species exhibit such differences. We studied the effect of sex and reproductive condition on the thermoregulation of an introduced population of Scelarcis perspicillata during the summer in Menorca (Balearic Islands, Spain). These lizards live in the wall surfaces of a limestone quarry, where the sun is scarce because of the narrowness of the quarry walls. The population is sexually dimorphic, with larger males than females. We measured body temperature (T b ) of adult males and females in the field, and air (T a ) and substrate temperature (T s ) at the capture sites, and recorded exposure to sunlight, height of the perch, and type of substrate. We also recorded operative temperatures (T e ) as a null hypothesis of thermoregulation. Finally, we studied the thermal preferences of adult males and females in a laboratory thermal gradient. Thermal preferences were similar for pregnant and non-pregnant females, and sex did not affect the thermal preferences of lizards, even after controlling for the effect of body size. However, in the field, females achieved higher T b than males, and occupied microhabitats with higher T a and T s and lower perch heights than males. Furthermore, females selected perches in full sun at a higher frequency than males. As a consequence, females achieved a higher accuracy and effectiveness of thermoregulation (0.89) than males (0.84). Thus, all else being equal, females would achieve a higher performance than males. The observed results are attributable to sexual differences in behaviour, probably in relation with the reproductive season. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kozak, Igor; Oster, Stephen F; Cortes, Marco A; Dowell, Dennis; Hartmann, Kathrin; Kim, Jae Suk; Freeman, William R
2011-06-01
To evaluate the clinical use and accuracy of a new retinal navigating laser technology that integrates a scanning slit fundus camera system with fluorescein angiography (FA), color, red-free, and infrared imaging capabilities with a computer steerable therapeutic 532-nm laser. Interventional case series. Eighty-six eyes of 61 patients with diabetic retinopathy and macular edema treated by NAVILAS. The imaging included digital color fundus photographs and FA. The planning included graphically marking future treatment sites (microaneurysms for single-spot focal treatment and areas of diffuse leakage for grid pattern photocoagulation) on the acquired images. The preplanned treatment was visible and overlaid on the live fundus image during the actual photocoagulation. The NAVILAS automatically advances the aiming beam location from one planned treatment site to the next after each photocoagulation spot until all sites are treated. Aiming beam stabilization compensated for patient's eye movements. The pretreatment FA with the treatment plan was overlaid on top of the posttreatment color fundus images with the actual laser burns. This allowed treatment accuracy to be calculated. Independent observers evaluated the images to determine if the retinal opacification after treatment overlapped the targeted microaneurysm. Safety and accuracy of laser photocoagulation. The images were of very good quality compared with standard fundus cameras, allowing careful delineation of target areas on FA. Toggling from infrared, to monochromatic, to color view allowed evaluation and adjustment of burn intensity during treatment. There were no complications during or after photocoagulation treatment. An analysis of accuracy of 400 random focal targeted spots found that the NAVILAS achieved a microaneurysm hit rate of 92% when the placement of the treatment circle was centered by the operating surgeon on the microaneurysm. The accuracy for the control group analyzing 100 focal spots was significantly lower at 72% (P<0.01). Laser photocoagulation using the NAVILAS system is safe and achieves a higher rate of accuracy in photocoagulation treatments of diabetic retinopathy lesions than standard manual-technique laser treatment. Precise manual preplanning and positioning of the treatment sites by the surgeon is possible, allowing accurate and predictable photocoagulation of these lesions. Proprietary or commercial disclosure may be found after the references. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Parsons, Helen M; Ludwig, Christian; Günther, Ulrich L; Viant, Mark R
2007-01-01
Background Classifying nuclear magnetic resonance (NMR) spectra is a crucial step in many metabolomics experiments. Since several multivariate classification techniques depend upon the variance of the data, it is important to first minimise any contribution from unwanted technical variance arising from sample preparation and analytical measurements, and thereby maximise any contribution from wanted biological variance between different classes. The generalised logarithm (glog) transform was developed to stabilise the variance in DNA microarray datasets, but has rarely been applied to metabolomics data. In particular, it has not been rigorously evaluated against other scaling techniques used in metabolomics, nor tested on all forms of NMR spectra including 1-dimensional (1D) 1H, projections of 2D 1H, 1H J-resolved (pJRES), and intact 2D J-resolved (JRES). Results Here, the effects of the glog transform are compared against two commonly used variance stabilising techniques, autoscaling and Pareto scaling, as well as unscaled data. The four methods are evaluated in terms of the effects on the variance of NMR metabolomics data and on the classification accuracy following multivariate analysis, the latter achieved using principal component analysis followed by linear discriminant analysis. For two of three datasets analysed, classification accuracies were highest following glog transformation: 100% accuracy for discriminating 1D NMR spectra of hypoxic and normoxic invertebrate muscle, and 100% accuracy for discriminating 2D JRES spectra of fish livers sampled from two rivers. For the third dataset, pJRES spectra of urine from two breeds of dog, the glog transform and autoscaling achieved equal highest accuracies. Additionally we extended the glog algorithm to effectively suppress noise, which proved critical for the analysis of 2D JRES spectra. Conclusion We have demonstrated that the glog and extended glog transforms stabilise the technical variance in NMR metabolomics datasets. This significantly improves the discrimination between sample classes and has resulted in higher classification accuracies compared to unscaled, autoscaled or Pareto scaled data. Additionally we have confirmed the broad applicability of the glog approach using three disparate datasets from different biological samples using 1D NMR spectra, 1D projections of 2D JRES spectra, and intact 2D JRES spectra. PMID:17605789
Effectiveness of link prediction for face-to-face behavioral networks.
Tsugawa, Sho; Ohsaki, Hiroyuki
2013-01-01
Research on link prediction for social networks has been actively pursued. In link prediction for a given social network obtained from time-windowed observation, new link formation in the network is predicted from the topology of the obtained network. In contrast, recent advances in sensing technology have made it possible to obtain face-to-face behavioral networks, which are social networks representing face-to-face interactions among people. However, the effectiveness of link prediction techniques for face-to-face behavioral networks has not yet been explored in depth. To clarify this point, here we investigate the accuracy of conventional link prediction techniques for networks obtained from the history of face-to-face interactions among participants at an academic conference. Our findings were (1) that conventional link prediction techniques predict new link formation with a precision of 0.30-0.45 and a recall of 0.10-0.20, (2) that prolonged observation of social networks often degrades the prediction accuracy, (3) that the proposed decaying weight method leads to higher prediction accuracy than can be achieved by observing all records of communication and simply using them unmodified, and (4) that the prediction accuracy for face-to-face behavioral networks is relatively high compared to that for non-social networks, but not as high as for other types of social networks.
Nanthagopal, A Padma; Rajamony, R Sukanesh
2012-07-01
The proposed system provides new textural information for segmenting tumours, efficiently and accurately and with less computational time, from benign and malignant tumour images, especially in smaller dimensions of tumour regions of computed tomography (CT) images. Region-based segmentation of tumour from brain CT image data is an important but time-consuming task performed manually by medical experts. The objective of this work is to segment brain tumour from CT images using combined grey and texture features with new edge features and nonlinear support vector machine (SVM) classifier. The selected optimal features are used to model and train the nonlinear SVM classifier to segment the tumour from computed tomography images and the segmentation accuracies are evaluated for each slice of the tumour image. The method is applied on real data of 80 benign, malignant tumour images. The results are compared with the radiologist labelled ground truth. Quantitative analysis between ground truth and the segmented tumour is presented in terms of segmentation accuracy and the overlap similarity measure dice metric. From the analysis and performance measures such as segmentation accuracy and dice metric, it is inferred that better segmentation accuracy and higher dice metric are achieved with the normalized cut segmentation method than with the fuzzy c-means clustering method.
Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi
2013-07-01
Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers. As a result, the calibration period needs to be shortened. Traditional calibration methods require the power of transmission line be cut off, which results in complicated operation and power off loss. This paper proposes an online calibration system which can calibrate electronic current transformers without power off. In this work, the high accuracy standard current transformer and online operation method are the key techniques. Based on the clamp-shape iron-core coil and clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can achieve verification of the accuracy. So the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth potential working method and using two insulating rods to connect the combined clamp-shape coil to the high voltage bus, the operation becomes simple and safe. Tests in China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to 0.05 class.
Parametric diagnosis of the adaptive gas path in the automatic control system of the aircraft engine
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2017-01-01
The paper dwells on the adaptive multimode mathematical model of the gas-turbine aircraft engine (GTE) embedded in the automatic control system (ACS). The mathematical model is based on the throttle performances, and is characterized by high accuracy of engine parameters identification in stationary and dynamic modes. The proposed on-board engine model is the state space linearized low-level simulation. The engine health is identified by the influence of the coefficient matrix. The influence coefficient is determined by the GTE high-level mathematical model based on measurements of gas-dynamic parameters. In the automatic control algorithm, the sum of squares of the deviation between the parameters of the mathematical model and real GTE is minimized. The proposed mathematical model is effectively used for gas path defects detecting in on-line GTE health monitoring. The accuracy of the on-board mathematical model embedded in ACS determines the quality of adaptive control and reliability of the engine. To improve the accuracy of identification solutions and sustainability provision, the numerical method of Monte Carlo was used. The parametric diagnostic algorithm based on the LPτ - sequence was developed and tested. Analysis of the results suggests that the application of the developed algorithms allows achieving higher identification accuracy and reliability than similar models used in practice.
da Silva, Richardson Augusto Rosendo; Costa, Mayara Mirna do Nascimento; de Souza, Vinicius Lino; da Silva, Bárbara Coeli Oliveira; Costa, Cristiane da Silva; de Andrade, Itaísa Fernandes Cardoso
2017-01-01
ABSTRACT Objective: to evaluate the accuracy of the defining characteristics of the NANDA International nursing diagnosis, noncompliance, in people with HIV. Method: study of diagnostic accuracy, performed in two stages. In the first stage, 113 people with HIV from a hospital of infectious diseases in the Northeast of Brazil were assessed for identification of clinical indicators of noncompliance. In the second, the defining characteristics were evaluated by six specialist nurses, analyzing the presence or absence of the diagnosis. For accuracy of the clinical indicators, the specificity, sensitivity, predictive values and likelihood ratios were measured. Results: the presence of the noncompliance diagnosis was shown in 69% (n=78) of people with HIV. The most sensitive indicator was, missing of appointments (OR: 28.93, 95% CI: 1.112-2.126, p = 0.002). On the other hand, nonadherence behavior (OR: 15.00, 95% CI: 1.829-3.981, p = 0.001) and failure to meet outcomes (OR: 13.41; 95% CI: 1.272-2.508; P = 0.003) achieved higher specificity. Conclusion: the most accurate defining characteristics were nonadherence behavior, missing of appointments, and failure to meet outcomes. Thus, in the presence of these, the nurse can identify, with greater security, the diagnosis studied. PMID:29091125
Limits on the Accuracy of Linking. Research Report. ETS RR-10-22
ERIC Educational Resources Information Center
Haberman, Shelby J.
2010-01-01
Sampling errors limit the accuracy with which forms can be linked. Limitations on accuracy are especially important in testing programs in which a very large number of forms are employed. Standard inequalities in mathematical statistics may be used to establish lower bounds on the achievable inking accuracy. To illustrate results, a variety of…
The Credibility of Children's Testimony: Can Children Control the Accuracy of Their Memory Reports?
ERIC Educational Resources Information Center
Koriat, Asher; Goldsmith, Morris; Schneider, Wolfgang; Nakash-Dura, Michal
2001-01-01
Three experiments examined children's strategic regulation of memory accuracy. Found that younger (7 to 9 years) and older (10 to 12 years) children could enhance the accuracy of their testimony by screening out wrong answers under free-report conditions. Findings suggest a developmental trend in level of memory accuracy actually achieved.…
Accuracy of Definitions for Linkage to Care in Persons Living with HIV
KELLER, Sara C.; YEHIA, Baligh R.; EBERHART, Michael G.; BRADY, Kathleen A.
2013-01-01
Objective To compare the accuracy of linkage to care metrics for patients diagnosed with HIV using retention in care and virologic suppression as the gold standards of effective linkage. Design A retrospective cohort study of patients aged 18 and over with newly-diagnosed HIV infection in the City of Philadelphia, 2007 to 2008. Methods Times from diagnosis to clinic visits or laboratory testing were used as linkage measures. Outcome variables included being retained in care and achieving virologic suppression, 366-730 days after diagnosis. Positive predictive value (PPV), negative predictive value (NPV), and area under the curve (AUC) for each linkage measure and retention and virologic suppression outcomes are described. Results Of the 1781 patients in the study, 503 (28.2%) were retained in care in the Ryan White system and 418 (23.5%) achieved virologic suppression 366-730 days after diagnosis. The linkage measure with the highest PPV for retention was having two clinic visits within 365 days of diagnosis, separated by 90 days (74.2%). Having a clinic visit between 21 and 365 days after diagnosis had both the highest NPV for retention (94.5%) and the highest adjusted AUC for retention (0.872). Having two tests within 365 days of diagnosis, separated by 90 days, had the highest adjusted AUC for virologic suppression (0.780). Conclusions Linkage measures associated with clinic visits had higher PPV and NPV for retention, while linkage measures associated with laboratory testing had higher PPV and NPV for retention. Linkage measures should be chosen based on the outcome of interest. PMID:23614992
Vallejo, Roger L; Leeds, Timothy D; Gao, Guangtu; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Fragomeni, Breno O; Wiens, Gregory D; Palti, Yniv
2017-02-01
Previously, we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative that enables exploitation of within-family genetic variation. We compared three GS models [single-step genomic best linear unbiased prediction (ssGBLUP), weighted ssGBLUP (wssGBLUP), and BayesB] to predict genomic-enabled breeding values (GEBV) for BCWD resistance in a commercial rainbow trout population, and compared the accuracy of GEBV to traditional estimates of breeding values (EBV) from a pedigree-based BLUP (P-BLUP) model. We also assessed the impact of sampling design on the accuracy of GEBV predictions. For these comparisons, we used BCWD survival phenotypes recorded on 7893 fish from 102 families, of which 1473 fish from 50 families had genotypes [57 K single nucleotide polymorphism (SNP) array]. Naïve siblings of the training fish (n = 930 testing fish) were genotyped to predict their GEBV and mated to produce 138 progeny testing families. In the following generation, 9968 progeny were phenotyped to empirically assess the accuracy of GEBV predictions made on their non-phenotyped parents. The accuracy of GEBV from all tested GS models were substantially higher than the P-BLUP model EBV. The highest increase in accuracy relative to the P-BLUP model was achieved with BayesB (97.2 to 108.8%), followed by wssGBLUP at iteration 2 (94.4 to 97.1%) and 3 (88.9 to 91.2%) and ssGBLUP (83.3 to 85.3%). Reducing the training sample size to n = ~1000 had no negative impact on the accuracy (0.67 to 0.72), but with n = ~500 the accuracy dropped to 0.53 to 0.61 if the training and testing fish were full-sibs, and even substantially lower, to 0.22 to 0.25, when they were not full-sibs. Using progeny performance data, we showed that the accuracy of genomic predictions is substantially higher than estimates obtained from the traditional pedigree-based BLUP model for BCWD resistance. Overall, we found that using a much smaller training sample size compared to similar studies in livestock, GS can substantially improve the selection accuracy and genetic gains for this trait in a commercial rainbow trout breeding population.
NASA Astrophysics Data System (ADS)
Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Adler, Dorit D.; Blane, Caroline E.; Joynt, Lynn K.; Paramagul, Chintana; Roubidoux, Marilyn A.; Wilson, Todd E.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.
1999-05-01
A receiver operating characteristic (ROC) experiment was conducted to evaluate the effects of pixel size on the characterization of mammographic microcalcifications. Digital mammograms were obtained by digitizing screen-film mammograms with a laser film scanner. One hundred twelve two-view mammograms with biopsy-proven microcalcifications were digitized at a pixel size of 35 micrometer X 35 micrometer. A region of interest (ROI) containing the microcalcifications was extracted from each image. ROI images with pixel sizes of 70 micrometers, 105 micrometers, and 140 micrometers were derived from the ROI of 35 micrometer pixel size by averaging 2 X 2, 3 X 3, and 4 X 4 neighboring pixels, respectively. The ROI images were printed on film with a laser imager. Seven MQSA-approved radiologists participated as observers. The likelihood of malignancy of the microcalcifications was rated on a 10-point confidence rating scale and analyzed with ROC methodology. The classification accuracy was quantified by the area, Az, under the ROC curve. The statistical significance of the differences in the Az values for different pixel sizes was estimated with the Dorfman-Berbaum-Metz (DBM) method for multi-reader, multi-case ROC data. It was found that five of the seven radiologists demonstrated a higher classification accuracy with the 70 micrometer or 105 micrometer images. The average Az also showed a higher classification accuracy in the range of 70 to 105 micrometer pixel size. However, the differences in A(subscript z/ between different pixel sizes did not achieve statistical significance. The low specificity of image features of microcalcifications an the large interobserver and intraobserver variabilities may have contributed to the relatively weak dependence of classification accuracy on pixel size.
Error-proneness as a handicap signal.
De Jaegher, Kris
2003-09-21
This paper describes two discrete signalling models in which the error-proneness of signals can serve as a handicap signal. In the first model, the direct handicap of sending a high-quality signal is not large enough to assure that a low-quality signaller will not send it. However, if the receiver sometimes mistakes a high-quality signal for a low-quality one, then there is an indirect handicap to sending a high-quality signal. The total handicap of sending such a signal may then still be such that a low-quality signaller would not want to send it. In the second model, there is no direct handicap of sending signals, so that nothing would seem to stop a signaller from always sending a high-quality signal. However, the receiver sometimes fails to detect signals, and this causes an indirect handicap of sending a high-quality signal that still stops the low-quality signaller of sending such a signal. The conditions for honesty are that the probability of an error of detection is higher for a high-quality than for a low-quality signal, and that the signaller who does not detect a signal adopts a response that is bad to the signaller. In both our models, we thus obtain the result that signal accuracy should not lie above a certain level in order for honest signalling to be possible. Moreover, we show that the maximal accuracy that can be achieved is higher the lower the degree of conflict between signaller and receiver. As well, we show that it is the conditions for honest signalling that may be constraining signal accuracy, rather than the signaller trying to make honest signals as effective as possible given receiver psychology, or the signaller adapting the accuracy of honest signals depending on his interests.
Orbit Determination for the Lunar Reconnaissance Orbiter Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Slojkowski, Steven; Lowe, Jonathan; Woodburn, James
2015-01-01
Orbit determination (OD) analysis results are presented for the Lunar Reconnaissance Orbiter (LRO) using a commercially available Extended Kalman Filter, Analytical Graphics' Orbit Determination Tool Kit (ODTK). Process noise models for lunar gravity and solar radiation pressure (SRP) are described and OD results employing the models are presented. Definitive accuracy using ODTK meets mission requirements and is better than that achieved using the operational LRO OD tool, the Goddard Trajectory Determination System (GTDS). Results demonstrate that a Vasicek stochastic model produces better estimates of the coefficient of solar radiation pressure than a Gauss-Markov model, and prediction accuracy using a Vasicek model meets mission requirements over the analysis span. Modeling the effect of antenna motion on range-rate tracking considerably improves residuals and filter-smoother consistency. Inclusion of off-axis SRP process noise and generalized process noise improves filter performance for both definitive and predicted accuracy. Definitive accuracy from the smoother is better than achieved using GTDS and is close to that achieved by precision OD methods used to generate definitive science orbits. Use of a multi-plate dynamic spacecraft area model with ODTK's force model plugin capability provides additional improvements in predicted accuracy.
Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms.
Phillips, P Jonathon; Yates, Amy N; Hu, Ying; Hahn, Carina A; Noyes, Eilidh; Jackson, Kelsey; Cavazos, Jacqueline G; Jeckeln, Géraldine; Ranjan, Rajeev; Sankaranarayanan, Swami; Chen, Jun-Cheng; Castillo, Carlos D; Chellappa, Rama; White, David; O'Toole, Alice J
2018-06-12
Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible. Copyright © 2018 the Author(s). Published by PNAS.
Galldiks, Norbert; Stoffels, Gabriele; Filss, Christian; Rapp, Marion; Blau, Tobias; Tscherpel, Caroline; Ceccon, Garry; Dunkl, Veronika; Weinzierl, Martin; Stoffel, Michael; Sabel, Michael; Fink, Gereon R; Shah, Nadim J; Langen, Karl-Josef
2015-09-01
We evaluated the diagnostic value of static and dynamic O-(2-[(18)F]fluoroethyl)-L-tyrosine ((18)F-FET) PET parameters in patients with progressive or recurrent glioma. We retrospectively analyzed 132 dynamic (18)F-FET PET and conventional MRI scans of 124 glioma patients (primary World Health Organization grade II, n = 55; grade III, n = 19; grade IV, n = 50; mean age, 52 ± 14 y). Patients had been referred for PET assessment with clinical signs and/or MRI findings suggestive of tumor progression or recurrence based on Response Assessment in Neuro-Oncology criteria. Maximum and mean tumor/brain ratios of (18)F-FET uptake were determined (20-40 min post-injection) as well as tracer uptake kinetics (ie, time to peak and patterns of the time-activity curves). Diagnoses were confirmed histologically (95%) or by clinical follow-up (5%). Diagnostic accuracies of PET and MR parameters for the detection of tumor progression or recurrence were evaluated by receiver operating characteristic analyses/chi-square test. Tumor progression or recurrence could be diagnosed in 121 of 132 cases (92%). MRI and (18)F-FET PET findings were concordant in 84% and discordant in 16%. Compared with the diagnostic accuracy of conventional MRI to diagnose tumor progression or recurrence (85%), a higher accuracy (93%) was achieved by (18)F-FET PET when a mean tumor/brain ratio ≥2.0 or time to peak <45 min was present (sensitivity, 93%; specificity, 100%; accuracy, 93%; positive predictive value, 100%; P < .001). Static and dynamic (18)F-FET PET parameters differentiate progressive or recurrent glioma from treatment-related nonneoplastic changes with higher accuracy than conventional MRI. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Neuro-Oncology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Ensemble-based prediction of RNA secondary structures.
Aghaeepour, Nima; Hoos, Holger H
2013-04-24
Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that as been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hao; Tan, Shan; Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan
2014-01-01
Purpose: To construct predictive models using comprehensive tumor features for the evaluation of tumor response to neoadjuvant chemoradiation therapy (CRT) in patients with esophageal cancer. Methods and Materials: This study included 20 patients who underwent trimodality therapy (CRT + surgery) and underwent {sup 18}F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) both before and after CRT. Four groups of tumor features were examined: (1) conventional PET/CT response measures (eg, standardized uptake value [SUV]{sub max}, tumor diameter); (2) clinical parameters (eg, TNM stage, histology) and demographics; (3) spatial-temporal PET features, which characterize tumor SUV intensity distribution, spatial patterns, geometry, and associated changesmore » resulting from CRT; and (4) all features combined. An optimal feature set was identified with recursive feature selection and cross-validations. Support vector machine (SVM) and logistic regression (LR) models were constructed for prediction of pathologic tumor response to CRT, cross-validations being used to avoid model overfitting. Prediction accuracy was assessed by area under the receiver operating characteristic curve (AUC), and precision was evaluated by confidence intervals (CIs) of AUC. Results: When applied to the 4 groups of tumor features, the LR model achieved AUCs (95% CI) of 0.57 (0.10), 0.73 (0.07), 0.90 (0.06), and 0.90 (0.06). The SVM model achieved AUCs (95% CI) of 0.56 (0.07), 0.60 (0.06), 0.94 (0.02), and 1.00 (no misclassifications). With the use of spatial-temporal PET features combined with conventional PET/CT measures and clinical parameters, the SVM model achieved very high accuracy (AUC 1.00) and precision (no misclassifications)—results that were significantly better than when conventional PET/CT measures or clinical parameters and demographics alone were used. For groups with many tumor features (groups 3 and 4), the SVM model achieved significantly higher accuracy than did the LR model. Conclusions: The SVM model that used all features including spatial-temporal PET features accurately and precisely predicted pathologic tumor response to CRT in esophageal cancer.« less
2013-12-13
8 U.S. Army Field Artillery Operations ............................................................................ 8 Geodesy ...Experts in this field of study have a full working knowledge of geodesy and the theory that allows mensuration to surpass the level of accuracy achieved...desired. (2) Fire that is intended to achieve the desired result on target.”6 Geodesy : “that branch of applied mathematics which determines by observation
Ranging performance of satellite laser altimeters
NASA Technical Reports Server (NTRS)
Gardner, Chester S.
1992-01-01
Topographic mapping of the earth, moon and planets can be accomplished with high resolution and accuracy using satellite laser altimeters. These systems employ nanosecond laser pulses and microradian beam divergences to achieve submeter vertical range resolution from orbital altitudes of several hundred kilometers. Here, we develop detailed expressions for the range and pulse width measurement accuracies and use the results to evaluate the ranging performances of several satellite laser altimeters currently under development by NASA for launch during the next decade. Our analysis includes the effects of the target surface characteristics, spacecraft pointing jitter and waveform digitizer characteristics. The results show that ranging accuracy is critically dependent on the pointing accuracy and stability of the altimeter especially over high relief terrain where surface slopes are large. At typical orbital altitudes of several hundred kilometers, single-shot accuracies of a few centimeters can be achieved only when the pointing jitter is on the order of 10 mu rad or less.
Ethics: An Indispensable Dimension in the University Rankings.
Khaki Sedigh, Ali
2017-02-01
University ranking systems attempt to provide an ordinal gauge to make an expert evaluation of the university's performance for a general audience. University rankings have always had their pros and cons in the higher education community. Some seriously question the usefulness, accuracy, and lack of consensus in ranking systems and therefore multidimensional ranking systems have been proposed to overcome some shortcomings of the earlier systems. Although the present ranking results may rather be rough, they are the only available sources that illustrate the complex university performance in a tangible format. Their relative accuracy has turned the ranking systems into an essential feature of the academic lifecycle within the foreseeable future. The main concern however, is that the present ranking systems totally neglect the ethical issues involved in university performances. Ethics should be a new dimension added into the university ranking systems, as it is an undisputable right of the public and all the parties involved in higher education to have an ethical evaluation of the university's achievements. In this paper, to initiate ethical assessment and rankings, the main factors involved in the university performances are reviewed from an ethical perspective. Finally, a basic benchmarking model for university ethical performance is presented.
Accuracy of direct genomic values in Holstein bulls and cows using subsets of SNP markers
2010-01-01
Background At the current price, the use of high-density single nucleotide polymorphisms (SNP) genotyping assays in genomic selection of dairy cattle is limited to applications involving elite sires and dams. The objective of this study was to evaluate the use of low-density assays to predict direct genomic value (DGV) on five milk production traits, an overall conformation trait, a survival index, and two profit index traits (APR, ASI). Methods Dense SNP genotypes were available for 42,576 SNP for 2,114 Holstein bulls and 510 cows. A subset of 1,847 bulls born between 1955 and 2004 was used as a training set to fit models with various sets of pre-selected SNP. A group of 297 bulls born between 2001 and 2004 and all cows born between 1992 and 2004 were used to evaluate the accuracy of DGV prediction. Ridge regression (RR) and partial least squares regression (PLSR) were used to derive prediction equations and to rank SNP based on the absolute value of the regression coefficients. Four alternative strategies were applied to select subset of SNP, namely: subsets of the highest ranked SNP for each individual trait, or a single subset of evenly spaced SNP, where SNP were selected based on their rank for ASI, APR or minor allele frequency within intervals of approximately equal length. Results RR and PLSR performed very similarly to predict DGV, with PLSR performing better for low-density assays and RR for higher-density SNP sets. When using all SNP, DGV predictions for production traits, which have a higher heritability, were more accurate (0.52-0.64) than for survival (0.19-0.20), which has a low heritability. The gain in accuracy using subsets that included the highest ranked SNP for each trait was marginal (5-6%) over a common set of evenly spaced SNP when at least 3,000 SNP were used. Subsets containing 3,000 SNP provided more than 90% of the accuracy that could be achieved with a high-density assay for cows, and 80% of the high-density assay for young bulls. Conclusions Accurate genomic evaluation of the broader bull and cow population can be achieved with a single genotyping assays containing ~ 3,000 to 5,000 evenly spaced SNP. PMID:20950478
NASA Astrophysics Data System (ADS)
Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.
2016-06-01
Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand optical methods gather high density data of the whole object in a short time but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is low density of data, while for non-contact methods it is low accuracy. In this paper a method for fusion of data from two measurements of fundamentally different nature: high density low accuracy (HDLA) and low density high accuracy (LDHA) is presented to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from contact measurements is treated as a reference for the corresponding point from non-contact measurement. Transformation enabling displacement of characteristic points from optical measurement to their match from contact measurements is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: plane, turbine blade and engine cover. For the planar surface the achieved improvement was of around 200 μm. Similar results were obtained for the turbine blade but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of mesh of triangles.
Measurement of the PPN parameter γ by testing the geometry of near-Earth space
NASA Astrophysics Data System (ADS)
Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang
2016-06-01
The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10^{-9} in measuring the Eddington parameter γ , which is perhaps the most fundamental Parameterized Post-Newtonian parameter. However, this ideal accuracy was just estimated as a ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least squares method. Influences of the measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on the white noise model. Though the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of spacecraft, the relatively low accuracy of initial inter-spacecraft distances poses a great challenge, which reduces the estimation accuracy in about two orders of magnitude. Thus the noise requirements may need to be more stringent in the design in order to achieve the target accuracy, which is demonstrated in the work. Considering that, we have given the limits on the power spectral density of both noise sources for the accuracy of 10^{-9}.
An integrated use of topography with RSI in gully mapping, Shandong Peninsula, China.
He, Fuhong; Wang, Tao; Gu, Lijuan; Li, Tao; Jiang, Weiguo; Shao, Hongbo
2014-01-01
Taking the Quickbird optical satellite imagery of the small watershed of Beiyanzigou valley of Qixia city, Shandong province, as the study data, we proposed a new method by using a fused image of topography with remote sensing imagery (RSI) to achieve a high precision interpretation of gully edge lines. The technique first transformed remote sensing imagery into HSV color space from RGB color space. Then the slope threshold values of gully edge line and gully thalweg were gained through field survey and the slope data were segmented using thresholding, respectively. Based on the fused image in combination with gully thalweg thresholding vectors, the gully thalweg thresholding vectors were amended. Lastly, the gully edge line might be interpreted based on the amended gully thalweg vectors, fused image, gully edge line thresholding vectors, and slope data. A testing region was selected in the study area to assess the accuracy. Then accuracy assessment of the gully information interpreted by both interpreting remote sensing imagery only and the fused image was performed using the deviation, kappa coefficient, and overall accuracy of error matrix. Compared with interpreting remote sensing imagery only, the overall accuracy and kappa coefficient are increased by 24.080% and 264.364%, respectively. The average deviations of gully head and gully edge line are reduced by 60.448% and 67.406%, respectively. The test results show the thematic and the positional accuracy of gully interpreted by new method are significantly higher. Finally, the error sources for interpretation accuracy by the two methods were analyzed.
An Integrated Use of Topography with RSI in Gully Mapping, Shandong Peninsula, China
He, Fuhong; Wang, Tao; Gu, Lijuan; Li, Tao; Jiang, Weiguo; Shao, Hongbo
2014-01-01
Taking the Quickbird optical satellite imagery of the small watershed of Beiyanzigou valley of Qixia city, Shandong province, as the study data, we proposed a new method by using a fused image of topography with remote sensing imagery (RSI) to achieve a high precision interpretation of gully edge lines. The technique first transformed remote sensing imagery into HSV color space from RGB color space. Then the slope threshold values of gully edge line and gully thalweg were gained through field survey and the slope data were segmented using thresholding, respectively. Based on the fused image in combination with gully thalweg thresholding vectors, the gully thalweg thresholding vectors were amended. Lastly, the gully edge line might be interpreted based on the amended gully thalweg vectors, fused image, gully edge line thresholding vectors, and slope data. A testing region was selected in the study area to assess the accuracy. Then accuracy assessment of the gully information interpreted by both interpreting remote sensing imagery only and the fused image was performed using the deviation, kappa coefficient, and overall accuracy of error matrix. Compared with interpreting remote sensing imagery only, the overall accuracy and kappa coefficient are increased by 24.080% and 264.364%, respectively. The average deviations of gully head and gully edge line are reduced by 60.448% and 67.406%, respectively. The test results show the thematic and the positional accuracy of gully interpreted by new method are significantly higher. Finally, the error sources for interpretation accuracy by the two methods were analyzed. PMID:25302333
Fluxgate magnetometer offset vector determination by the 3D mirror mode method
NASA Astrophysics Data System (ADS)
Plaschke, F.; Goetz, C.; Volwerk, M.; Richter, I.; Frühauff, D.; Narita, Y.; Glassmeier, K.-H.; Dougherty, M. K.
2017-07-01
Fluxgate magnetometers on-board spacecraft need to be regularly calibrated in flight. In low fields, the most important calibration parameters are the three offset vector components, which represent the magnetometer measurements in vanishing ambient magnetic fields. In case of three-axis stabilized spacecraft, a few methods exist to determine offsets: (I) by analysis of Alfvénic fluctuations present in the pristine interplanetary magnetic field, (II) by rolling the spacecraft around at least two axes, (III) by cross-calibration against measurements from electron drift instruments or absolute magnetometers, and (IV) by taking measurements in regions of well-known magnetic fields, e.g. cometary diamagnetic cavities. In this paper, we introduce a fifth option, the 3-dimensional (3D) mirror mode method, by which 3D offset vectors can be determined using magnetic field measurements of highly compressional waves, e.g. mirror modes in the Earth's magnetosheath. We test the method by applying it to magnetic field data measured by the following: the Time History of Events and Macroscale Interactions during Substorms-C spacecraft in the terrestrial magnetosheath, the Cassini spacecraft in the Jovian magnetosheath and the Rosetta spacecraft in the vicinity of comet 67P/Churyumov-Gerasimenko. The tests reveal that the achievable offset accuracies depend on the ambient magnetic field strength (lower strength meaning higher accuracy), on the length of the underlying data interval (more data meaning higher accuracy) and on the stability of the offset that is to be determined.
Natsios, Athanasios; Vezakis, Antonios; Kaparos, Georgios; Fragulidis, Georgios; Karakostas, Nikolaos; Kouskouni, Evangelia; Logothetis, Emmanouil; Polydorou, Andreas
2015-01-01
Serum and bile tumor markers are under intense scrutiny for the diagnosis of malignant disease. The purpose of our study was to report the usefulness of serum and bile tumor markers for the discrimination between benign and malignant pancreatobiliary diseases. Between March 2010 and May 2013, 95 patients with obstructive jaundice or history of biliary obstruction, were included in the study. During ERCP, bile samples were obtained for measurement of tumor markers CEA, CA19- 9, CA125, CA72-4 and CA242. Serum samples were taken before ERCP for the same measurements. The patients were divided into two groups: patients with malignant disease and patients with benign disease. Serum tumor marker levels were significantly higher in patients with malignant disease. Serum CA242 and CA19-9 exhibited the highest diagnostic accuracy (76.8% and 73.7%, respectively). CA125 and CA72-4 levels in bile samples were significantly higher in patients with malignant disease. Bile CA125, CEA and CA72-4 achieved the best diagnostic accuracy (69, 65 and 65), respectively). The combined detection of CA19-9, CA242 in serum and CA125, CA72-4 in bile along with total bilirubin levels, showed the best diagnostic accuracy (81%). Serum and bile tumor markers, when studied alone, lack the diagnostic yield to discriminate benign from malignant pancreatobiliary diseases. In cases of diagnostic dilemmas the combination of serum and bile markers might be helpful.
Evaluation of Techniques Used to Estimate Cortical Feature Maps
Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.
2011-01-01
Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
NASA Astrophysics Data System (ADS)
Riera, Marc; Mardirossian, Narbe; Bajaj, Pushp; Götz, Andreas W.; Paesani, Francesco
2017-10-01
This study presents the extension of the MB-nrg (Many-Body energy) theoretical/computational framework of transferable potential energy functions (PEFs) for molecular simulations of alkali metal ion-water systems. The MB-nrg PEFs are built upon the many-body expansion of the total energy and include the explicit treatment of one-body, two-body, and three-body interactions, with all higher-order contributions described by classical induction. This study focuses on the MB-nrg two-body terms describing the full-dimensional potential energy surfaces of the M+(H2O) dimers, where M+ = Li+, Na+, K+, Rb+, and Cs+. The MB-nrg PEFs are derived entirely from "first principles" calculations carried out at the explicitly correlated coupled-cluster level including single, double, and perturbative triple excitations [CCSD(T)-F12b] for Li+ and Na+ and at the CCSD(T) level for K+, Rb+, and Cs+. The accuracy of the MB-nrg PEFs is systematically assessed through an extensive analysis of interaction energies, structures, and harmonic frequencies for all five M+(H2O) dimers. In all cases, the MB-nrg PEFs are shown to be superior to both polarizable force fields and ab initio models based on density functional theory. As previously demonstrated for halide-water dimers, the MB-nrg PEFs achieve higher accuracy by correctly describing short-range quantum-mechanical effects associated with electron density overlap as well as long-range electrostatic many-body interactions.
Monitoring and regulation of learning in medical education: the need for predictive cues.
de Bruin, Anique B H; Dunlosky, John; Cavalcanti, Rodrigo B
2017-06-01
Being able to accurately monitor learning activities is a key element in self-regulated learning in all settings, including medical schools. Yet students' ability to monitor their progress is often limited, leading to inefficient use of study time. Interventions that improve the accuracy of students' monitoring can optimise self-regulated learning, leading to higher achievement. This paper reviews findings from cognitive psychology and explores potential applications in medical education, as well as areas for future research. Effective monitoring depends on students' ability to generate information ('cues') that accurately reflects their knowledge and skills. The ability of these 'cues' to predict achievement is referred to as 'cue diagnosticity'. Interventions that improve the ability of students to elicit predictive cues typically fall into two categories: (i) self-generation of cues and (ii) generation of cues that is delayed after self-study. Providing feedback and support is useful when cues are predictive but may be too complex to be readily used. Limited evidence exists about interventions to improve the accuracy of self-monitoring among medical students or trainees. Developing interventions that foster use of predictive cues can enhance the accuracy of self-monitoring, thereby improving self-study and clinical reasoning. First, insight should be gained into the characteristics of predictive cues used by medical students and trainees. Next, predictive cue prompts should be designed and tested to improve monitoring and regulation of learning. Finally, the use of predictive cues should be explored in relation to teaching and learning clinical reasoning. Improving self-regulated learning is important to help medical students and trainees efficiently acquire knowledge and skills necessary for clinical practice. Interventions that help students generate and use predictive cues hold the promise of improved self-regulated learning and achievement. This framework is applicable to learning in several areas, including the development of clinical reasoning. © 2017 The Authors Medical Education published by Association for the Study of Medical Education and John Wiley & Sons Ltd.
A high-voltage supply used on miniaturized RLG
NASA Astrophysics Data System (ADS)
Miao, Zhifei; Fan, Mingming; Wang, Yuepeng; Yin, Yan; Wang, Dongmei
2016-01-01
A high voltage power supply used in laser gyro is proposed in this paper. The power supply which uses a single DC 15v input and fly-back topology is adopted in the main circuit. The output of the power supply achieve high to 3.3kv voltage in order to light the RLG. The PFM control method is adopted to realize the rapid switching between the high voltage state and the maintain state. The resonant chip L6565 is used to achieve the zero voltage switching(ZVS), so the consumption is reduced and the power efficiency is improved more than 80%. A special circuit is presented in the control portion to ensure symmetry of the two RLG's arms current. The measured current accuracy is higher than 5‰ and the current symmetry of the two RLG's arms up to 99.2%.
NASA Astrophysics Data System (ADS)
Jiménez, A.; Morante, E.; Viera, T.; Núñez, M.; Reyes, M.
2010-07-01
European Extremely Large Telescope (E-ELT) based in 984 primary mirror segments achieving required optical performance; they must position relatively to adjacent segments with relative nanometer accuracy. CESA designed M1 Position Actuators (PACT) to comply with demanding performance requirements of EELT. Three PACT are located under each segment controlling three out of the plane degrees of freedom (tip, tilt, piston). To achieve a high linear accuracy in long operational displacements, PACT uses two stages in series. First stage based on Voice Coil Actuator (VCA) to achieve high accuracies in very short travel ranges, while second stage based on Brushless DC Motor (BLDC) provides large stroke ranges and allows positioning the first stage closer to the demanded position. A BLDC motor is used achieving a continuous smoothly movement compared to sudden jumps of a stepper. A gear box attached to the motor allows a high reduction of power consumption and provides a great challenge for sizing. PACT space envelope was reduced by means of two flat springs fixed to VCA. Its main characteristic is a low linear axial stiffness. To achieve best performance for PACT, sensors have been included in both stages. A rotary encoder is included in BLDC stage to close position/velocity control loop. An incremental optical encoder measures PACT travel range with relative nanometer accuracy and used to close the position loop of the whole actuator movement. For this purpose, four different optical sensors with different gratings will be evaluated. Control strategy show different internal closed loops that work together to achieve required performance.
NASA Astrophysics Data System (ADS)
Millard, R. C.; Seaver, G.
1990-12-01
A 27-term index of refraction algorithm for pure and sea waters has been developed using four experimental data sets of differing accuracies. They cover the range 500-700 nm in wavelength, 0-30°C in temperature, 0-40 psu in salinity, and 0-11,000 db in pressure. The index of refraction algorithm has an accuracy that varies from 0.4 ppm for pure water at atmospheric pressure to 80 ppm at high pressures, but preserves the accuracy of each original data set. This algorithm is a significant improvement over existing descriptions as it is in analytical form with a better and more carefully defined accuracy. A salinometer algorithm with the same uncertainty has been created by numerically inverting the index algorithm using the Newton-Raphson method. The 27-term index algorithm was used to generate a pseudo-data set at the sodium D wavelength (589.26 nm) from which a 6-term densitometer algorithm was constructed. The densitometer algorithm also produces salinity as an intermediate step in the salinity inversion. The densitometer residuals have a standard deviation of 0.049 kg m -3 which is not accurate enough for most oceanographic applications. However, the densitometer algorithm was used to explore the sensitivity of density from this technique to temperature and pressure uncertainties. To achieve a deep ocean densitometer of 0.001 kg m -3 accuracy would require the index of refraction to have an accuracy of 0.3 ppm, the temperature an accuracy of 0.01°C and the pressure 1 db. Our assessment of the currently available index of refraction measurements finds that only the data for fresh water at atmospheric pressure produce an algorithm satisfactory for oceanographic use (density to 0.4 ppm). The data base for the algorithm at higher pressures and various salinities requires an order of magnitude or better improvement in index measurement accuracy before the resultant density accuracy will be comparable to the currently available oceanographic algorithm.
Genotyping by sequencing for genomic prediction in a soybean breeding population.
Jarquín, Diego; Kocak, Kyle; Posadas, Luis; Hyma, Katie; Jedlicka, Joseph; Graef, George; Lorenz, Aaron
2014-08-29
Advances in genotyping technology, such as genotyping by sequencing (GBS), are making genomic prediction more attractive to reduce breeding cycle times and costs associated with phenotyping. Genomic prediction and selection has been studied in several crop species, but no reports exist in soybean. The objectives of this study were (i) evaluate prospects for genomic selection using GBS in a typical soybean breeding program and (ii) evaluate the effect of GBS marker selection and imputation on genomic prediction accuracy. To achieve these objectives, a set of soybean lines sampled from the University of Nebraska Soybean Breeding Program were genotyped using GBS and evaluated for yield and other agronomic traits at multiple Nebraska locations. Genotyping by sequencing scored 16,502 single nucleotide polymorphisms (SNPs) with minor-allele frequency (MAF) > 0.05 and percentage of missing values ≤ 5% on 301 elite soybean breeding lines. When SNPs with up to 80% missing values were included, 52,349 SNPs were scored. Prediction accuracy for grain yield, assessed using cross validation, was estimated to be 0.64, indicating good potential for using genomic selection for grain yield in soybean. Filtering SNPs based on missing data percentage had little to no effect on prediction accuracy, especially when random forest imputation was used to impute missing values. The highest accuracies were observed when random forest imputation was used on all SNPs, but differences were not significant. A standard additive G-BLUP model was robust; modeling additive-by-additive epistasis did not provide any improvement in prediction accuracy. The effect of training population size on accuracy began to plateau around 100, but accuracy steadily climbed until the largest possible size was used in this analysis. Including only SNPs with MAF > 0.30 provided higher accuracies when training populations were smaller. Using GBS for genomic prediction in soybean holds good potential to expedite genetic gain. Our results suggest that standard additive G-BLUP models can be used on unfiltered, imputed GBS data without loss in accuracy.
Pham, Quang Duc; Kusumi, Yuichi; Hasegawa, Satoshi; Hayasaki, Yoshio
2012-10-01
We propose a new method for three-dimensional (3D) position measurement of nanoparticles using an in-line digital holographic microscope. The method improves the signal-to-noise ratio of the amplitude of the interference fringes to achieve higher accuracy in the position measurement by increasing weak scattered light from a nanoparticle relative to the reference light by using a low spatial frequency attenuation filter. We demonstrated the improvements of signal-to-noise ratio of the optical system and contrast of the interference fringes, allowing the 3D positions of nanoparticles to be determined more precisely.
Alleyne, Colin J; Kirk, Andrew G; Chien, Wei-Yin; Charette, Paul G
2008-11-24
An eigenvector analysis based algorithm is presented for estimating refractive index changes from 2-D reflectance/dispersion images obtained with spectro-angular surface plasmon resonance systems. High resolution over a large dynamic range can be achieved simultaneously. The method performs well in simulations with noisy data maintaining an error of less than 10(-8) refractive index units with up to six bits of noise on 16 bit quantized image data. Experimental measurements show that the method results in a much higher signal to noise ratio than the standard 1-D weighted centroid dip finding algorithm.
Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel; Wang, Z. J.
2004-01-01
A new, high-order, conservative, and efficient method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. A discussion on the Discontinuous Spectral Difference (SD) Method, locations of the unknowns and flux points and numerical results are also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cester, D.; Lunardon, M.; Stevanato, L.
2015-07-01
MODES SNM project aimed to carry out technical research in order to develop a prototype for a mobile, modular detection system for radioactive sources and Special Nuclear Materials (SNM). Its main goal was to deliver a tested prototype of a modular mobile system capable of passively detecting weak or shielded radioactive sources with accuracy higher than that of currently available systems. By the end of the project all the objectives have been successfully achieved. Results from the laboratory commissioning and the field tests will be presented. (authors)
Multispectral Palmprint Recognition Using a Quaternion Matrix
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049
Multispectral palmprint recognition using a quaternion matrix.
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%.
A Brain-Machine Interface Based on ERD/ERS for an Upper-Limb Exoskeleton Control.
Tang, Zhichuan; Sun, Shouqian; Zhang, Sanyuan; Chen, Yumiao; Li, Chao; Chen, Shi
2016-12-02
To recognize the user's motion intention, brain-machine interfaces (BMI) usually decode movements from cortical activity to control exoskeletons and neuroprostheses for daily activities. The aim of this paper is to investigate whether self-induced variations of the electroencephalogram (EEG) can be useful as control signals for an upper-limb exoskeleton developed by us. A BMI based on event-related desynchronization/synchronization (ERD/ERS) is proposed. In the decoder-training phase, we investigate the offline classification performance of left versus right hand and left hand versus both feet by using motor execution (ME) or motor imagery (MI). The results indicate that the accuracies of ME sessions are higher than those of MI sessions, and left hand versus both feet paradigm achieves a better classification performance, which would be used in the online-control phase. In the online-control phase, the trained decoder is tested in two scenarios (wearing or without wearing the exoskeleton). The MI and ME sessions wearing the exoskeleton achieve mean classification accuracy of 84.29% ± 2.11% and 87.37% ± 3.06%, respectively. The present study demonstrates that the proposed BMI is effective to control the upper-limb exoskeleton, and provides a practical method by non-invasive EEG signal associated with human natural behavior for clinical applications.
Modeling of Turbulent Natural Convection in Enclosed Tall Cavities
NASA Astrophysics Data System (ADS)
Goloviznin, V. M.; Korotkin, I. A.; Finogenov, S. A.
2017-12-01
It was shown in our previous work (J. Appl. Mech. Tech. Phys 57 (7), 1159-1171 (2016)) that the eddy-resolving parameter-free CABARET scheme as applied to two-and three-dimensional de Vahl Davis benchmark tests (thermal convection in a square cavity) yields numerical results on coarse (20 × 20 and 20 × 20 × 20) grids that agree surprisingly well with experimental data and highly accurate computations for Rayleigh numbers of up to 1014. In the present paper, the sensitivity of this phenomenon to the cavity shape (varying from cubical to highly elongated) is analyzed. Box-shaped computational domains with aspect ratios of 1: 4, 1: 10, and 1: 28.6 are considered. The results produced by the CABARET scheme are compared with experimental data (aspect ratio of 1: 28.6), DNS results (aspect ratio of 1: 4), and an empirical formula (aspect ratio of 1: 10). In all the cases, the CABARET-based integral parameters of the cavity flow agree well with the other authors' results. Notably coarse grids with mesh refinement toward the walls are used in the CABARET calculations. It is shown that acceptable numerical accuracy on extremely coarse grids is achieved for an aspect ratio of up to 1: 10. For higher aspect ratios, the number of grid cells required for achieving prescribed accuracy grows significantly.
Hao, Pengyu; Wang, Li; Niu, Zheng
2015-01-01
A range of single classifiers have been proposed to classify crop types using time series vegetation indices, and hybrid classifiers are used to improve discriminatory power. Traditional fusion rules use the product of multi-single classifiers, but that strategy cannot integrate the classification output of machine learning classifiers. In this research, the performance of two hybrid strategies, multiple voting (M-voting) and probabilistic fusion (P-fusion), for crop classification using NDVI time series were tested with different training sample sizes at both pixel and object levels, and two representative counties in north Xinjiang were selected as study area. The single classifiers employed in this research included Random Forest (RF), Support Vector Machine (SVM), and See 5 (C 5.0). The results indicated that classification performance improved (increased the mean overall accuracy by 5%~10%, and reduced standard deviation of overall accuracy by around 1%) substantially with the training sample number, and when the training sample size was small (50 or 100 training samples), hybrid classifiers substantially outperformed single classifiers with higher mean overall accuracy (1%~2%). However, when abundant training samples (4,000) were employed, single classifiers could achieve good classification accuracy, and all classifiers obtained similar performances. Additionally, although object-based classification did not improve accuracy, it resulted in greater visual appeal, especially in study areas with a heterogeneous cropping pattern. PMID:26360597
SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.
Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver
2012-07-15
In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA) where already millions of sequences are publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high throughput performance demands. SINA was evaluated in comparison with the commonly used high throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3, 97.6 and 96.1 accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9 and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, user manual and tutorial. SINA is made available under a personal use license.
Cabeda, Estêvan Vieira; Falcão, Andréa Maria Gomes; Soares, José; Rochitte, Carlos Eduardo; Nomura, César Higa; Ávila, Luiz Francisco Rodrigues; Parga, José Rodrigues
2015-12-01
Functional tests have limited accuracy for identifying myocardial ischemia in patients with left bundle branch block (LBBB). To assess the diagnostic accuracy of dipyridamole-stress myocardial computed tomography perfusion (CTP) by 320-detector CT in patients with LBBB using invasive quantitative coronary angiography (QCA) (stenosis ≥ 70%) as reference; to investigate the advantage of adding CTP to coronary computed tomography angiography (CTA) and compare the results with those of single photon emission computed tomography (SPECT) myocardial perfusion scintigraphy. Thirty patients with LBBB who had undergone SPECT for the investigation of coronary artery disease were referred for stress tomography. Independent examiners performed per-patient and per-coronary territory assessments. All patients gave written informed consent to participate in the study that was approved by the institution's ethics committee. The patients' mean age was 62 ± 10 years. The mean dose of radiation for the tomography protocol was 9.3 ± 4.6 mSv. With regard to CTP, the per-patient values for sensitivity, specificity, positive and negative predictive values, and accuracy were 86%, 81%, 80%, 87%, and 83%, respectively (p = 0.001). The per-territory values were 63%, 86%, 65%, 84%, and 79%, respectively (p < 0.001). In both analyses, the addition of CTP to CTA achieved higher diagnostic accuracy for detecting myocardial ischemia than SPECT (p < 0.001). The use of the stress tomography protocol is feasible and has good diagnostic accuracy for assessing myocardial ischemia in patients with LBBB.
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl
2014-01-01
Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated real-time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative differences [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increased euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach indicated to have higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
Assessing Complex Learning Objectives through Analytics
NASA Astrophysics Data System (ADS)
Horodyskyj, L.; Mead, C.; Buxner, S.; Semken, S. C.; Anbar, A. D.
2016-12-01
A significant obstacle to improving the quality of education is the lack of easy-to-use assessments of higher-order thinking. Most existing assessments focus on recall and understanding questions, which demonstrate lower-order thinking. Traditionally, higher-order thinking is assessed with practical tests and written responses, which are time-consuming to analyze and are not easily scalable. Computer-based learning environments offer the possibility of assessing such learning outcomes based on analysis of students' actions within an adaptive learning environment. Our fully online introductory science course, Habitable Worlds, uses an intelligent tutoring system that collects and responds to a range of behavioral data, including actions within the keystone project. This central project is a summative, game-like experience in which students synthesize and apply what they have learned throughout the course to identify and characterize a habitable planet from among hundreds of stars. Student performance is graded based on completion and accuracy, but two additional properties can be utilized to gauge higher-order thinking: (1) how efficient a student is with the virtual currency within the project and (2) how many of the optional milestones a student reached. In the project, students can use the currency to check their work and "unlock" convenience features. High-achieving students spend close to the minimum amount required to reach these goals, indicating a high-level of concept mastery and efficient methodology. Average students spend more, indicating effort, but lower mastery. Low-achieving students were more likely to spend very little, which indicates low effort. Differences on these metrics were statistically significant between all three of these populations. We interpret this as evidence that high-achieving students develop and apply efficient problem-solving skills as compared to lower-achieving student who use more brute-force approaches.
Relative Navigation of Formation-Flying Satellites
NASA Technical Reports Server (NTRS)
Long, Anne; Kelbel, David; Lee, Taesul; Leung, Dominic; Carpenter, J. Russell; Grambling, Cheryl
2002-01-01
This paper compares autonomous relative navigation performance for formations in eccentric, medium and high-altitude Earth orbits using Global Positioning System (GPS) Standard Positioning Service (SPS), crosslink, and celestial object measurements. For close formations, the relative navigation accuracy is highly dependent on the magnitude of the uncorrelated measurement errors. A relative navigation position accuracy of better than 10 centimeters root-mean-square (RMS) can be achieved for medium-altitude formations that can continuously track at least one GPS signal. A relative navigation position accuracy of better than 15 meters RMS can be achieved for high-altitude formations that have sparse tracking of the GPS signals. The addition of crosslink measurements can significantly improve relative navigation accuracy for formations that use sparse GPS tracking or celestial object measurements for absolute navigation.
Double sided grating fabrication for high energy X-ray phase contrast imaging
Hollowell, Andrew E.; Arrington, Christian L.; Finnegan, Patrick; ...
2018-04-19
State of the art grating fabrication currently limits the maximum source energy that can be used in lab based x-ray phase contrast imaging (XPCI) systems. In order to move to higher source energies, and image high density materials or image through encapsulating barriers, new grating fabrication methods are needed. In this work we have analyzed a new modality for grating fabrication that involves precision alignment of etched gratings on both sides of a substrate, effectively doubling the thickness of the grating. Furthermore, we have achieved a front-to-backside feature alignment accuracy of 0.5 µm demonstrating a methodology that can be appliedmore » to any grating fabrication approach extending the attainable aspect ratios allowing higher energy lab based XPCI systems.« less
New and updated tests of print exposure and reading abilities in college students
Acheson, Daniel J.; Wells, Justine B.; MacDonald, Maryellen C.
2010-01-01
The relationship between print exposure and measures of reading skill was examined in college students (N = 99, 58 female; mean age = 20.3 years). Print exposure was measured with several new self-reports of reading and writing habits, as well as updated versions of the Author Recognition Test and the Magazine Recognition Test (Stanovich & West, 1989). Participants completed a sentence comprehension task with syntactically complex sentences, and reading times and comprehension accuracy were measured. An additional measure of reading skill was provided by participants’ scores on the verbal portions of the ACT, a standardized achievement test. Higher levels of print exposure were associated with higher sentence processing abilities and superior verbal ACT performance. The relative merits of different print exposure assessments are discussed. PMID:18411551
Double sided grating fabrication for high energy X-ray phase contrast imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollowell, Andrew E.; Arrington, Christian L.; Finnegan, Patrick
State of the art grating fabrication currently limits the maximum source energy that can be used in lab based x-ray phase contrast imaging (XPCI) systems. In order to move to higher source energies, and image high density materials or image through encapsulating barriers, new grating fabrication methods are needed. In this work we have analyzed a new modality for grating fabrication that involves precision alignment of etched gratings on both sides of a substrate, effectively doubling the thickness of the grating. Furthermore, we have achieved a front-to-backside feature alignment accuracy of 0.5 µm demonstrating a methodology that can be appliedmore » to any grating fabrication approach extending the attainable aspect ratios allowing higher energy lab based XPCI systems.« less
Spacecraft attitude determination accuracy from mission experience
NASA Technical Reports Server (NTRS)
Brasoveanu, D.; Hashmall, J.
1994-01-01
This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of missions report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.
A Novel Energy-Efficient Approach for Human Activity Recognition
Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Tang, Biyu; Lu, Hai; Shi, Haibin
2017-01-01
In this paper, we propose a novel energy-efficient approach for mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve a high accuracy of activity recognition when the sampling rate is less than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with the data collected from 20 volunteers (14 males and six females) and the average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1Hz can save 17.3% and 59.6% of energy compared with the sampling rates of 5 Hz and 50 Hz. The proposed low sampling rate approach can greatly reduce the power consumption while maintaining high activity recognition accuracy. The composition of power consumption in online ARS is also investigated in this paper. PMID:28885560
Di-codon Usage for Gene Classification
NASA Astrophysics Data System (ADS)
Nguyen, Minh N.; Ma, Jianmin; Fogel, Gary B.; Rajapakse, Jagath C.
Classification of genes into biologically related groups facilitates inference of their functions. Codon usage bias has been described previously as a potential feature for gene classification. In this paper, we demonstrate that di-codon usage can further improve classification of genes. By using both codon and di-codon features, we achieve near perfect accuracies for the classification of HLA molecules into major classes and sub-classes. The method is illustrated on 1,841 HLA sequences which are classified into two major classes, HLA-I and HLA-II. Major classes are further classified into sub-groups. A binary SVM using di-codon usage patterns achieved 99.95% accuracy in the classification of HLA genes into major HLA classes; and multi-class SVM achieved accuracy rates of 99.82% and 99.03% for sub-class classification of HLA-I and HLA-II genes, respectively. Furthermore, by combining codon and di-codon usages, the prediction accuracies reached 100%, 99.82%, and 99.84% for HLA major class classification, and for sub-class classification of HLA-I and HLA-II genes, respectively.
Application of high-precision two-way ranging to Galileo Earth-1 encounter navigation
NASA Technical Reports Server (NTRS)
Pollmeier, V. M.; Thurman, S. W.
1992-01-01
The application of precision two-way ranging to orbit determination with relatively short data arcs is investigated for the Galileo spacecraft's approach to its first Earth encounter (December 8, 1990). Analysis of previous S-band (2.3-GHz) ranging data acquired from Galileo indicated that under good signal conditions submeter precision and 10-m ranging accuracy were achieved. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. A range data filtering technique, in which explicit modeling of range measurement bias parameters for each station pass is utilized, is shown to largely remove the systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy of the Galileo orbit solutions obtained with S-band Doppler and precision ranging were found to be consistent with simple theoretical calculations, which predicted that angular accuracies of 0.26-0.34 microrad were achievable. In addition, the navigation accuracy achieved with precision ranging was marginally better than that obtained using delta-differenced one-way range (delta DOR), the principal data type that was previously used to obtain spacecraft angular position measurements operationally.
Vidić, Igor; Egnell, Liv; Jerome, Neil P; Teruel, Jose R; Sjøbakk, Torill E; Østlie, Agnes; Fjøsne, Hans E; Bathen, Tone F; Goa, Pål Erik
2018-05-01
Diffusion-weighted MRI (DWI) is currently one of the fastest developing MRI-based techniques in oncology. Histogram properties from model fitting of DWI are useful features for differentiation of lesions, and classification can potentially be improved by machine learning. To evaluate classification of malignant and benign tumors and breast cancer subtypes using support vector machine (SVM). Prospective. Fifty-one patients with benign (n = 23) and malignant (n = 28) breast tumors (26 ER+, whereof six were HER2+). Patients were imaged with DW-MRI (3T) using twice refocused spin-echo echo-planar imaging with echo time / repetition time (TR/TE) = 9000/86 msec, 90 × 90 matrix size, 2 × 2 mm in-plane resolution, 2.5 mm slice thickness, and 13 b-values. Apparent diffusion coefficient (ADC), relative enhanced diffusivity (RED), and the intravoxel incoherent motion (IVIM) parameters diffusivity (D), pseudo-diffusivity (D*), and perfusion fraction (f) were calculated. The histogram properties (median, mean, standard deviation, skewness, kurtosis) were used as features in SVM (10-fold cross-validation) for differentiation of lesions and subtyping. Accuracies of the SVM classifications were calculated to find the combination of features with highest prediction accuracy. Mann-Whitney tests were performed for univariate comparisons. For benign versus malignant tumors, univariate analysis found 11 histogram properties to be significant differentiators. Using SVM, the highest accuracy (0.96) was achieved from a single feature (mean of RED), or from three feature combinations of IVIM or ADC. Combining features from all models gave perfect classification. No single feature predicted HER2 status of ER + tumors (univariate or SVM), although high accuracy (0.90) was achieved with SVM combining several features. Importantly, these features had to include higher-order statistics (kurtosis and skewness), indicating the importance to account for heterogeneity. Our findings suggest that SVM, using features from a combination of diffusion models, improves prediction accuracy for differentiation of benign versus malignant breast tumors, and may further assist in subtyping of breast cancer. 3 Technical Efficacy: Stage 3 J. Magn. Reson. Imaging 2018;47:1205-1216. © 2017 International Society for Magnetic Resonance in Medicine.
Samad, Manar D; Ulloa, Alvaro; Wehner, Gregory J; Jing, Linyuan; Hartzel, Dustin; Good, Christopher W; Williams, Brent A; Haggerty, Christopher M; Fornwalt, Brandon K
2018-06-09
The goal of this study was to use machine learning to more accurately predict survival after echocardiography. Predicting patient outcomes (e.g., survival) following echocardiography is primarily based on ejection fraction (EF) and comorbidities. However, there may be significant predictive information within additional echocardiography-derived measurements combined with clinical electronic health record data. Mortality was studied in 171,510 unselected patients who underwent 331,317 echocardiograms in a large regional health system. We investigated the predictive performance of nonlinear machine learning models compared with that of linear logistic regression models using 3 different inputs: 1) clinical variables, including 90 cardiovascular-relevant International Classification of Diseases, Tenth Revision, codes, and age, sex, height, weight, heart rate, blood pressures, low-density lipoprotein, high-density lipoprotein, and smoking; 2) clinical variables plus physician-reported EF; and 3) clinical variables and EF, plus 57 additional echocardiographic measurements. Missing data were imputed with a multivariate imputation by using a chained equations algorithm (MICE). We compared models versus each other and baseline clinical scoring systems by using a mean area under the curve (AUC) over 10 cross-validation folds and across 10 survival durations (6 to 60 months). Machine learning models achieved significantly higher prediction accuracy (all AUC >0.82) over common clinical risk scores (AUC = 0.61 to 0.79), with the nonlinear random forest models outperforming logistic regression (p < 0.01). The random forest model including all echocardiographic measurements yielded the highest prediction accuracy (p < 0.01 across all models and survival durations). Only 10 variables were needed to achieve 96% of the maximum prediction accuracy, with 6 of these variables being derived from echocardiography. Tricuspid regurgitation velocity was more predictive of survival than LVEF. In a subset of studies with complete data for the top 10 variables, multivariate imputation by chained equations yielded slightly reduced predictive accuracies (difference in AUC of 0.003) compared with the original data. Machine learning can fully utilize large combinations of disparate input variables to predict survival after echocardiography with superior accuracy. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
A novel pendulum test for measuring roller chain efficiency
NASA Astrophysics Data System (ADS)
Wragge-Morley, R.; Yon, J.; Lock, R.; Alexander, B.; Burgess, S.
2018-07-01
This paper describes a novel pendulum decay test for determining the transmission efficiency of chain drives. The test involves releasing a pendulum with an initial potential energy and measuring its decaying oscillations: under controlled conditions the decay reveals the losses in the transmission to a high degree of accuracy. The main advantage over motorised rigs is that there are significantly fewer sources of friction and inertia and hence measurement error. The pendulum rigs have an accuracy around 0.6% for the measurement of the coefficient of friction, giving an accuracy of transmission efficiency measurement around 0.012%. A theoretical model of chain friction combined with the equations of motion enables the coefficient of friction to be determined from the decay rate of pendulum velocity. The pendulum rigs operate at relatively low speeds. However, they allow an accurate determination of the coefficient of friction to estimate transmission efficiency at higher speeds. The pendulum rig revealed a previously undetected rocking behaviour in the chain links at very small articulation angles. In this regime, the link interfaces were observed to roll against one another rather than slide. This observation indicates that a very high-efficiency transmission can be achieved if the articulation angle is very low.
NASA Astrophysics Data System (ADS)
Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.
2009-04-01
A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of MHD island coalescence instability (\\ci) in two dimensions. \\ci is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them, and to maintain accuracy at the same time. The output of the spectral-element static adaptive refinement simulations are compared with simulations using a finite difference method on the same refinement grids, and both methods are compared to pseudo-spectral simulations with uniform grids as baselines. It is shown that with the statically refined grids roughly scaling linearly with effective resolution, spectral element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear powermore » spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.« less
The mira-titan universe. Precision predictions for dark energy surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Bingham, Derek; Lawrence, Earl
2016-03-28
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear powermore » spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.« less
Mori, Shinichiro; Inaniwa, Taku; Kumagai, Motoki; Kuwae, Tsunekazu; Matsuzaki, Yuka; Furukawa, Takuji; Shirai, Toshiyuki; Noda, Koji
2012-06-01
To increase the accuracy of carbon ion beam scanning therapy, we have developed a graphical user interface-based digitally-reconstructed radiograph (DRR) software system for use in routine clinical practice at our center. The DRR software is used in particular scenarios in the new treatment facility to achieve the same level of geometrical accuracy at the treatment as at the imaging session. DRR calculation is implemented simply as the summation of CT image voxel values along the X-ray projection ray. Since we implemented graphics processing unit-based computation, the DRR images are calculated with a speed sufficient for the particular clinical practice requirements. Since high spatial resolution flat panel detector (FPD) images should be registered to the reference DRR images in patient setup process in any scenarios, the DRR images also needs higher spatial resolution close to that of FPD images. To overcome the limitation of the CT spatial resolution imposed by the CT voxel size, we applied image processing to improve the calculated DRR spatial resolution. The DRR software introduced here enabled patient positioning with sufficient accuracy for the implementation of carbon-ion beam scanning therapy at our center.
Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees
Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael
2014-01-01
Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges to database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical in processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, and this solution has running time that is unrelated to the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with improved time/accuracy tradeoff. Experimental results confirm our analysis. PMID:24693210
Camera calibration: active versus passive targets
NASA Astrophysics Data System (ADS)
Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli
2011-11-01
Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitious. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera, calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin
2013-11-13
A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results.
Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin
2013-01-01
A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results. PMID:24233027
The accuracy of transvaginal sonography to detect endometriosis cyst
NASA Astrophysics Data System (ADS)
Diantika, M.; Gunardi, E. R.
2017-08-01
Endometriosis is common in women of reproductive age. Late diagnosis is still the main concern. Currently, noninvasive diagnostic testing, such as transvaginal sonography, is recommended. The aim of the current study was to evaluate the accuracy of transvaginal sonography in diagnosing endometrial cysts in patients in Cipto Mangunkusumo Hospital, Jakarta, Indonesia. This diagnostic study was carried out at Cipto Mangunkusumo Hospital between January 2014 and June 2015. Outpatients suspected have an endometrial cyst based on the patient history and a clinical examination was recruited. The patients were then evaluated using transvaginal sonography by an experienced sonologist, according to the research protocol. The gold standard test was a histological finding in the removed surgical mass. Ninety-eight patients were analyzed. An endometrial cyst was confirmed by histology in 85 patients (87%). The accuracy, sensitivity, specificity, positive predictive value and negative predictive value of transvaginal sonography was established to be 85% (a range of 71-99%), 93%, 77%, 96%, and 63%, respectively. A significantly higher area under the curve was identified using transvaginal sonogpraphy compared to that achieved with a clinical examination alone (85% versus 79%). Transvaginal sonography was useful in diagnosing endometrial cysts in outpatients and is recommended in daily clinical practice.
Samuel, Oluwarotimi Williams; Geng, Yanjuan; Li, Xiangxin; Li, Guanglin
2017-10-28
To control multiple degrees of freedom (MDoF) upper limb prostheses, pattern recognition (PR) of electromyogram (EMG) signals has been successfully applied. This technique requires amputees to provide sufficient EMG signals to decode their limb movement intentions (LMIs). However, amputees with neuromuscular disorder/high level amputation often cannot provide sufficient EMG control signals, and thus the applicability of the EMG-PR technique is limited especially to this category of amputees. As an alternative approach, electroencephalograph (EEG) signals recorded non-invasively from the brain have been utilized to decode the LMIs of humans. However, most of the existing EEG based limb movement decoding methods primarily focus on identifying limited classes of upper limb movements. In addition, investigation on EEG feature extraction methods for the decoding of multiple classes of LMIs has rarely been considered. Therefore, 32 EEG feature extraction methods (including 12 spectral domain descriptors (SDDs) and 20 time domain descriptors (TDDs)) were used to decode multiple classes of motor imagery patterns associated with different upper limb movements based on 64-channel EEG recordings. From the obtained experimental results, the best individual TDD achieved an accuracy of 67.05 ± 3.12% as against 87.03 ± 2.26% for the best SDD. By applying a linear feature combination technique, an optimal set of combined TDDs recorded an average accuracy of 90.68% while that of the SDDs achieved an accuracy of 99.55% which were significantly higher than those of the individual TDD and SDD at p < 0.05. Our findings suggest that optimal feature set combination would yield a relatively high decoding accuracy that may improve the clinical robustness of MDoF neuroprosthesis. The study was approved by the ethics committee of Institutional Review Board of Shenzhen Institutes of Advanced Technology, and the reference number is SIAT-IRB-150515-H0077.
ERIC Educational Resources Information Center
Kaiser, Johanna; Südkamp, Anna; Möller, Jens
2017-01-01
Teachers' judgments of students' academic achievement are not only affected by the achievement themselves but also by several other characteristics such as ethnicity, gender, and minority status. In real-life classrooms, achievement and further characteristics are often confounded. We disentangled achievement, ethnicity and minority status and…
Navigation strategy and filter design for solar electric missions
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Hagar, H., Jr.
1972-01-01
Methods which have been proposed to improve the navigation accuracy for the low-thrust space vehicle include modifications to the standard Sequential- and Batch-type orbit determination procedures and the use of inertial measuring units (IMU) which measures directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications to the orbit determination procedures is compared with a combined IMU-Standard. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithms will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyroplatform alignment .01 deg and accelerometer signal-to-noise ratio .07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
Mind the gap: Increased inter-letter spacing as a means of improving reading performance.
Dotan, Shahar; Katzir, Tami
2018-06-05
Theeffects of text display, specificallywithin-word spacing, on children's reading at different developmental levels has barely been investigated.This study explored the influence of manipulating inter-letter spacing on reading performance (accuracy and rate) of beginner Hebrew readers compared with older readers and of low-achieving readers compared with age-matched high-achieving readers.A computer-based isolated word reading task was performed by 132 first and third graders. Words were displayed under two spacing conditions: standard spacing (100%) and increased spacing (150%). Words were balanced for length and frequency across conditions. Results indicated that increased spacing contributed to reading accuracy without affecting reading rate. Interestingly, all first graders benefitted fromthe spaced condition. Thiseffect was found only in long words but not in short words. Among third graders, only low-achieving readers gained in accuracy fromthespaced condition. Thetheoretical and clinical effects ofthefindings are discussed. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei
2018-03-01
Classical SEM metrology, CD-SEM, uses low data rate and extensive frame-averaging technique to achieve high-quality SEM imaging for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper will introduce a novel e-beam metrology system based on a high data rate, large probe current, and ultra-low noise electron optics design. At the same level of metrology precision, this high speed e-beam metrology system could significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hr. Moreover, a novel large field of view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area coverage by LFOV is >100x larger than classical SEM. Superior metrology precision throughout the whole image has been achieved, and high quality metrology data could be extracted from full field. This new capability on metrology will further improve metrology data collection speed to support the need for large volume of metrology data from OPC model calibration of next generation technology. The shrinking EPE (Edge Placement Error) budget places more stringent requirement on OPC model accuracy, which is increasingly limited by metrology errors. In the current practice of metrology data collection and data processing to model calibration flow, CD-SEM throughput becomes a bottleneck that limits the amount of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy especially for 2D pattern prediction. To address the trade-off in metrology sampling and model accuracy constrained by the cycle time requirement, this paper employs the high speed e-beam metrology system and a new computational software solution to take full advantage of the large volume data and significantly reduce both systematic and random metrology errors. The new computational software enables users to generate large quantity of highly accurate EP (Edge Placement) gauges and significantly improve design pattern coverage with up to 5X gain in model prediction accuracy on complex 2D patterns. Overall, this work showed >2x improvement in OPC model accuracy at a faster model turn-around time.
Ji, Dong Xu; Foong, Kelvin Weng Chiong; Ong, Sim Heng
2013-09-01
Extraction of the mandible from 3D volumetric images is frequently required for surgical planning and evaluation. Image segmentation from MRI is more complex than CT due to lower bony signal-to-noise. An automated method to extract the human mandible body shape from magnetic resonance (MR) images of the head was developed and tested. Anonymous MR images data sets of the head from 12 subjects were subjected to a two-stage rule-constrained region growing approach to derive the shape of the body of the human mandible. An initial thresholding technique was applied followed by a 3D seedless region growing algorithm to detect a large portion of the trabecular bone (TB) regions of the mandible. This stage is followed with a rule-constrained 2D segmentation of each MR axial slice to merge the remaining portions of the TB regions with lower intensity levels. The two-stage approach was replicated to detect the cortical bone (CB) regions of the mandibular body. The TB and CB regions detected from the preceding steps were merged and subjected to a series of morphological processes for completion of the mandibular body region definition. Comparisons of the accuracy of segmentation between the two-stage approach, conventional region growing method, 3D level set method, and manual segmentation were made with Jaccard index, Dice index, and mean surface distance (MSD). The mean accuracy of the proposed method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of CRG is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The mean accuracy of the 3D level set method is [Formula: see text] for Jaccard index, [Formula: see text] for Dice index, and [Formula: see text] mm for MSD. The proposed method shows improvement in accuracy over CRG and 3D level set. Accurate segmentation of the body of the human mandible from MR images is achieved with the proposed two-stage rule-constrained seedless region growing approach. The accuracy achieved with the two-stage approach is higher than CRG and 3D level set.
NMRDSP: an accurate prediction of protein shape strings from NMR chemical shifts and sequence data.
Mao, Wusong; Cong, Peisheng; Wang, Zhiheng; Lu, Longjian; Zhu, Zhongliang; Li, Tonghua
2013-01-01
Shape string is structural sequence and is an extremely important structure representation of protein backbone conformations. Nuclear magnetic resonance chemical shifts give a strong correlation with the local protein structure, and are exploited to predict protein structures in conjunction with computational approaches. Here we demonstrate a novel approach, NMRDSP, which can accurately predict the protein shape string based on nuclear magnetic resonance chemical shifts and structural profiles obtained from sequence data. The NMRDSP uses six chemical shifts (HA, H, N, CA, CB and C) and eight elements of structure profiles as features, a non-redundant set (1,003 entries) as the training set, and a conditional random field as a classification algorithm. For an independent testing set (203 entries), we achieved an accuracy of 75.8% for S8 (the eight states accuracy) and 87.8% for S3 (the three states accuracy). This is higher than only using chemical shifts or sequence data, and confirms that the chemical shift and the structure profile are significant features for shape string prediction and their combination prominently improves the accuracy of the predictor. We have constructed the NMRDSP web server and believe it could be employed to provide a solid platform to predict other protein structures and functions. The NMRDSP web server is freely available at http://cal.tongji.edu.cn/NMRDSP/index.jsp.
An Improved Strong Tracking Cubature Kalman Filter for GPS/INS Integrated Navigation Systems.
Feng, Kaiqiang; Li, Jie; Zhang, Xi; Zhang, Xiaoming; Shen, Chong; Cao, Huiliang; Yang, Yanyu; Liu, Jun
2018-06-12
The cubature Kalman filter (CKF) is widely used in the application of GPS/INS integrated navigation systems. However, its performance may decline in accuracy and even diverge in the presence of process uncertainties. To solve the problem, a new algorithm named improved strong tracking seventh-degree spherical simplex-radial cubature Kalman filter (IST-7thSSRCKF) is proposed in this paper. In the proposed algorithm, the effect of process uncertainty is mitigated by using the improved strong tracking Kalman filter technique, in which the hypothesis testing method is adopted to identify the process uncertainty and the prior state estimate covariance in the CKF is further modified online according to the change in vehicle dynamics. In addition, a new seventh-degree spherical simplex-radial rule is employed to further improve the estimation accuracy of the strong tracking cubature Kalman filter. In this way, the proposed comprehensive algorithm integrates the advantage of 7thSSRCKF’s high accuracy and strong tracking filter’s strong robustness against process uncertainties. The GPS/INS integrated navigation problem with significant dynamic model errors is utilized to validate the performance of proposed IST-7thSSRCKF. Results demonstrate that the improved strong tracking cubature Kalman filter can achieve higher accuracy than the existing CKF and ST-CKF, and is more robust for the GPS/INS integrated navigation system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi
Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers. As a result, the calibration period needs to be shortened. Traditional calibration methods require the power of transmission line be cut off, which results in complicated operation and power off loss. This paper proposes an online calibration system which can calibrate electronic current transformers without power off. In this work, the high accuracy standard current transformer and online operation method are the key techniques. Basedmore » on the clamp-shape iron-core coil and clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can achieve verification of the accuracy. So the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth potential working method and using two insulating rods to connect the combined clamp-shape coil to the high voltage bus, the operation becomes simple and safe. Tests in China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to 0.05 class.« less
Effectiveness of Link Prediction for Face-to-Face Behavioral Networks
Tsugawa, Sho; Ohsaki, Hiroyuki
2013-01-01
Research on link prediction for social networks has been actively pursued. In link prediction for a given social network obtained from time-windowed observation, new link formation in the network is predicted from the topology of the obtained network. In contrast, recent advances in sensing technology have made it possible to obtain face-to-face behavioral networks, which are social networks representing face-to-face interactions among people. However, the effectiveness of link prediction techniques for face-to-face behavioral networks has not yet been explored in depth. To clarify this point, here we investigate the accuracy of conventional link prediction techniques for networks obtained from the history of face-to-face interactions among participants at an academic conference. Our findings were (1) that conventional link prediction techniques predict new link formation with a precision of 0.30–0.45 and a recall of 0.10–0.20, (2) that prolonged observation of social networks often degrades the prediction accuracy, (3) that the proposed decaying weight method leads to higher prediction accuracy than can be achieved by observing all records of communication and simply using them unmodified, and (4) that the prediction accuracy for face-to-face behavioral networks is relatively high compared to that for non-social networks, but not as high as for other types of social networks. PMID:24339956
Foliar and woody materials discriminated using terrestrial LiDAR in a mixed natural forest
NASA Astrophysics Data System (ADS)
Zhu, Xi; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Niemann, K. Olaf; Liu, Jing; Shi, Yifang; Wang, Tiejun
2018-02-01
Separation of foliar and woody materials using remotely sensed data is crucial for the accurate estimation of leaf area index (LAI) and woody biomass across forest stands. In this paper, we present a new method to accurately separate foliar and woody materials using terrestrial LiDAR point clouds obtained from ten test sites in a mixed forest in Bavarian Forest National Park, Germany. Firstly, we applied and compared an adaptive radius near-neighbor search algorithm with a fixed radius near-neighbor search method in order to obtain both radiometric and geometric features derived from terrestrial LiDAR point clouds. Secondly, we used a random forest machine learning algorithm to classify foliar and woody materials and examined the impact of understory and slope on the classification accuracy. An average overall accuracy of 84.4% (Kappa = 0.75) was achieved across all experimental plots. The adaptive radius near-neighbor search method outperformed the fixed radius near-neighbor search method. The classification accuracy was significantly higher when the combination of both radiometric and geometric features was utilized. The analysis showed that increasing slope and understory coverage had a significant negative effect on the overall classification accuracy. Our results suggest that the utilization of the adaptive radius near-neighbor search method coupling both radiometric and geometric features has the potential to accurately discriminate foliar and woody materials from terrestrial LiDAR data in a mixed natural forest.
Current Status of Astrometry Satellite missions in Japan: JASMINE project series
NASA Astrophysics Data System (ADS)
Yano, T.; Gouda, N.; Kobayashi, Y.; Tsujimoto, T.; Hatsutori, Y.; Murooka, J.; Niwa, Y.; Yamada, Y.
Astrometry satellites have common technological issues. (A) Astrometry satellites are required to measure the positions of stars with high accuracy from the huge amount of data during the observational period. (B) The high stabilization of the thermal environment in the telescope is required. (C) The attitude-pointing stability of these satellites with sub-pixel accuracy is also required. Measurement of the positions of stars from a huge amount of data is the essence of astrometry. It is needed to exclude the systematic errors adequately for each image of stars in order to obtain the accurate positions. We have carried out a centroiding experiment for determining the positions of stars from about 10 000 image data. The following two points are important issues for the mission system of JASMINE in order to achieve our aim. For the small-JASMINE, we require the thermal stabilization of the telescope in order to obtain high astrometric accuracy of about 10 micro-arcsec. In order to accomplish a measurement of positions of stars with high accuracy, we must make a model of the distortion of the image on the focal plane with the accuracy of less than 0.1 nm. We have investigated numerically that the above requirement is achieved if the thermal variation is within about 1 K / 0.75 h. We also require the accuracy of the attitude-pointing stability of about 200 mas / 7 s. The utilization of the Tip-tilt mirror will make it possible to achieve such a stable pointing.
Training and quality assurance with the Structured Clinical Interview for DSM-IV (SCID-I/P).
Ventura, J; Liberman, R P; Green, M F; Shaner, A; Mintz, J
1998-06-15
Accuracy in psychiatric diagnosis is critical for evaluating the suitability of the subjects for entry into research protocols and for establishing comparability of findings across study sites. However, training programs in the use of diagnostic instruments for research projects are not well systematized. Furthermore, little information has been published on the maintenance of interrater reliability of diagnostic assessments. At the UCLA Research Center for Major Mental Illnesses, a Training and Quality Assurance Program for SCID interviewers was used to evaluate interrater reliability and diagnostic accuracy. Although clinically experienced interviewers achieved better interrater reliability and overall diagnostic accuracy than neophyte interviewers, both groups were able to achieve and maintain high levels of interrater reliability, diagnostic accuracy, and interviewer skill. At the first quality assurance check after training, there were no significant differences between experienced and neophyte interviewers in interrater reliability or diagnostic accuracy. Standardization of training and quality assurance procedures within and across research projects may make research findings from study sites more comparable.
Huynh, Roy; Ip, Matthew; Chang, Jeff; Haifer, Craig; Leong, Rupert W.
2018-01-01
Background and study aims Confocal laser endomicroscopy (CLE) allows mucosal barrier defects along the intestinal epithelium to be visualized in vivo during endoscopy. Training in CLE interpretation can be achieved didactically or through self-directed learning. This study aimed to compare the effectiveness of expert-led didactic with self-directed audiovisual teaching for training inexperienced analysts on how to recognize mucosal barrier defects on endoscope-based CLE (eCLE). Materials and methods This randomized controlled study involved trainee analysts who were taught how to recognize mucosal barrier defects on eCLE either didactically or through an audiovisual clip. After being trained, they evaluated 6 sets of 30 images. Image evaluation required the trainees to determine whether specific features of barrier dysfunction were present or not. Trainees in the didactic group engaged in peer discussion and received feedback after each set while this did not happen in the self-directed group. Accuracy, sensitivity, and specificity of both groups were compared. Results Trainees in the didactic group achieved a higher overall accuracy (87.5 % vs 85.0 %, P = 0.002) and sensitivity (84.5 % vs 80.4 %, P = 0.002) compared to trainees in the self-directed group. Interobserver agreement was higher in the didactic group (k = 0.686, 95 % CI 0.680 – 0.691, P < 0.001) than in the self-directed group (k = 0.566, 95 % CI 0.559 – 0.573, P < 0.001). Confidence (OR 6.48, 95 % CI 5.35 – 7.84, P < 0.001) and good image quality (OR 2.58, 95 % CI 2.17 – 2.82, P < 0.001) were positive predictors of accuracy. Conclusion Expert-led didactic training is more effective than self-directed audiovisual training for teaching inexperienced analysts how to recognize mucosal barrier defects on eCLE. PMID:29344572
Juliana, Philomin; Singh, Ravi P; Singh, Pawan K; Crossa, Jose; Rutkoski, Jessica E; Poland, Jesse A; Bergstrom, Gary C; Sorrells, Mark E
2017-07-01
The leaf spotting diseases in wheat that include Septoria tritici blotch (STB) caused by , Stagonospora nodorum blotch (SNB) caused by , and tan spot (TS) caused by pose challenges to breeding programs in selecting for resistance. A promising approach that could enable selection prior to phenotyping is genomic selection that uses genome-wide markers to estimate breeding values (BVs) for quantitative traits. To evaluate this approach for seedling and/or adult plant resistance (APR) to STB, SNB, and TS, we compared the predictive ability of least-squares (LS) approach with genomic-enabled prediction models including genomic best linear unbiased predictor (GBLUP), Bayesian ridge regression (BRR), Bayes A (BA), Bayes B (BB), Bayes Cπ (BC), Bayesian least absolute shrinkage and selection operator (BL), and reproducing kernel Hilbert spaces markers (RKHS-M), a pedigree-based model (RKHS-P) and RKHS markers and pedigree (RKHS-MP). We observed that LS gave the lowest prediction accuracies and RKHS-MP, the highest. The genomic-enabled prediction models and RKHS-P gave similar accuracies. The increase in accuracy using genomic prediction models over LS was 48%. The mean genomic prediction accuracies were 0.45 for STB (APR), 0.55 for SNB (seedling), 0.66 for TS (seedling) and 0.48 for TS (APR). We also compared markers from two whole-genome profiling approaches: genotyping by sequencing (GBS) and diversity arrays technology sequencing (DArTseq) for prediction. While, GBS markers performed slightly better than DArTseq, combining markers from the two approaches did not improve accuracies. We conclude that implementing GS in breeding for these diseases would help to achieve higher accuracies and rapid gains from selection. Copyright © 2017 Crop Science Society of America.
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs
Chen, Haijian; Han, Dongmei; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) are very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, which currently are often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because the general methods of text mining algorithm do not have obvious effect on online course, we designed automatic extracting course knowledge points (AECKP) algorithm for online course. It includes document classification, Chinese word segmentation, and POS tagging for each document. Vector Space Model (VSM) is used to calculate similarity and design the weight to optimize the TF-IDF algorithm output values, and the higher scores will be selected as knowledge points. Course documents of “C programming language” are selected for the experiment in this study. The results show that the proposed approach can achieve satisfactory accuracy rate and recall rate. PMID:26448738
Long Chen; Zhongpeng Wang; Feng He; Jiajia Yang; Hongzhi Qi; Peng Zhou; Baikun Wan; Dong Ming
2015-08-01
The hybrid brain computer interface (hBCI) could provide higher information transfer rate than did the classical BCIs. It included more than one brain-computer or human-machine interact paradigms, such as the combination of the P300 and SSVEP paradigms. Research firstly constructed independent subsystems of three different paradigms and tested each of them with online experiments. Then we constructed a serial hybrid BCI system which combined these paradigms to achieve the functions of typing letters, moving and clicking cursor, and switching among them for the purpose of browsing webpages. Five subjects were involved in this study. They all successfully realized these functions in the online tests. The subjects could achieve an accuracy above 90% after training, which met the requirement in operating the system efficiently. The results demonstrated that it was an efficient system capable of robustness, which provided an approach for the clinic application.
A flood map based DOI decoding method for block detector: a GATE simulation study.
Shi, Han; Du, Dong; Su, Zhihong; Peng, Qiyu
2014-01-01
Positron Emission Tomography (PET) systems using detectors with Depth of Interaction (DOI) capabilities could achieve higher spatial resolution and better image quality than those without DOI. Up till now, most DOI methods developed are not cost-efficient for a whole body PET system. In this paper, we present a DOI decoding method based on flood map for low-cost conventional block detector with four-PMT readout. Using this method, the DOI information can be directly extracted from the DOI-related crystal spot deformation in the flood map. GATE simulations are then carried out to validate the method, confirming a DOI sorting accuracy of 85.27%. Therefore, we conclude that this method has the potential to be applied in conventional detectors to achieve a reasonable DOI measurement without dramatically increasing their complexity and cost of an entire PET system.
Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.
Junker, André; Brenner, Karl-Heinz
2018-03-01
The application of rigorous optical simulation algorithms, both in the modal as well as in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach, which, under certain conditions, allows solving also large-size problems approximation free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N 3 ) to O(N log N), enabling a simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.
Development of a 402.5 MHz 140 kW Inductive Output Tube
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Lawrence Ives; Michael Read, Robert Jackson
2012-05-09
This report contains the results of Phase I of an SBIR to develop a Pulsed Inductive Output Tube (IOT) with 140 kW at 400 MHz for powering H-proton beams. A number of sources, including single beam and multiple beam klystrons, can provide this power, but the IOT provides higher efficiency. Efficiencies exceeding 70% are routinely achieved. The gain is typically limited to approximately 24 dB; however, the availability of highly efficient, solid state drivers reduces the significance of this limitation, particularly at lower frequencies. This program initially focused on developing a 402 MHz IOT; however, the DOE requirement for thismore » device was terminated during the program. The SBIR effort was refocused on improving the IOT design codes to more accurately simulate the time dependent behavior of the input cavity, electron gun, output cavity, and collector. Significant improvement was achieved in modeling capability and simulation accuracy.« less
NASA Technical Reports Server (NTRS)
Simpson, Robert W.
1993-01-01
This presentation outlines a concept for an adaptive, interactive decision support system to assist controllers at a busy airport in achieving efficient use of multiple runways. The concept is being implemented as a computer code called FASA (Final Approach Spacing for Aircraft), and will be tested and demonstrated in ATCSIM, a high fidelity simulation of terminal area airspace and airport surface operations. Objectives are: (1) to provide automated cues to assist controllers in the sequencing and spacing of landing and takeoff aircraft; (2) to provide the controller with a limited ability to modify the sequence and spacings between aircraft, and to insert takeoffs and missed approach aircraft in the landing flows; (3) to increase spacing accuracy using more complex and precise separation criteria while reducing controller workload; and (4) achieve higher operational takeoff and landing rates on multiple runways in poor visibility.
NASA Technical Reports Server (NTRS)
Luthcke, Scott; Rowlands, David; Lemoine, Frank; Zelensky, Nikita; Beckley, Brian; Klosko, Steve; Chinn, Doug
2006-01-01
Although satellite altimetry has been around for thirty years, the last fifteen beginning with the launch of TOPEX/Poseidon (TP) have yielded an abundance of significant results including: monitoring of ENS0 events, detection of internal tides, determination of accurate global tides, unambiguous delineation of Rossby waves and their propagation characteristics, accurate determination of geostrophic currents, and a multi-decadal time series of mean sea level trend and dynamic ocean topography variability. While the high level of accuracy being achieved is a result of both instrument maturity and the quality of models and correction algorithms applied to the data, improving the quality of the Climate Data Records produced from altimetry is highly dependent on concurrent progress being made in fields such as orbit determination. The precision orbits form the reference frame from which the radar altimeter observations are made. Therefore, the accuracy of the altimetric mapping is limited to a great extent by the accuracy to which a satellite orbit can be computed. The TP mission represents the first time that the radial component of an altimeter orbit was routinely computed with an accuracy of 2-cm. Recently it has been demonstrated that it is possible to compute the radial component of Jason orbits with an accuracy of better than 1-cm. Additionally, still further improvements in TP orbits are being achieved with new techniques and algorithms largely developed from combined Jason and TP data analysis. While these recent POD achievements are impressive, the new accuracies are now revealing subtle systematic orbit error that manifest as both intra and inter annual ocean topography errors. Additionally the construction of inter-decadal time series of climate data records requires the removal of systematic differences across multiple missions. Current and future efforts must focus on the understanding and reduction of these errors in order to generate a complete and consistent time series of improved orbits across multiple missions and decades required for the most stringent climate-related research. This presentation discusses the POD progress and achievements made over nearly three decades, and presents the future challenges, goals and their impact on altimetric derived ocean sciences.
Rosa, Marta; Micciarelli, Marco; Laio, Alessandro; Baroni, Stefano
2016-09-13
We introduce a method to evaluate the relative populations of different conformers of molecular species in solution, aiming at quantum mechanical accuracy, while keeping the computational cost at a nearly molecular-mechanics level. This goal is achieved by combining long classical molecular-dynamics simulations to sample the free-energy landscape of the system, advanced clustering techniques to identify the most relevant conformers, and thermodynamic perturbation theory to correct the resulting populations, using quantum-mechanical energies from density functional theory. A quantitative criterion for assessing the accuracy thus achieved is proposed. The resulting methodology is demonstrated in the specific case of cyanin (cyanidin-3-glucoside) in water solution.
A calibration method of infrared LVF based spectroradiometer
NASA Astrophysics Data System (ADS)
Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin
2017-10-01
In this paper, a calibration method of LVF-based spectroradiometer is summarize, including spectral calibration and radiometric calibration. The spectral calibration process as follow: first, the relationship between stepping motor's step number and transmission wavelength is derivative by theoretical calculation, including a non-linearity correction of LVF;second, a line-to-line method was used to corrected the theoretical wavelength; Finally, the 3.39 μm and 10.69 μm laser is used for spectral calibration validation, show the sought 0.1% accuracy or better is achieved.A new sub-region multi-point calibration method is used for radiometric calibration to improving accuracy, results show the sought 1% accuracy or better is achieved.
Sullivan, Edith V.; Brumback, Ty; Tapert, Susan F.; Fama, Rosemary; Prouty, Devin; Brown, Sandra A.; Cummins, Kevin; Thompson, Wesley K.; Colrain, Ian M.; Baker, Fiona C.; De Bellis, Michael D.; Hooper, Stephen R.; Clark, Duncan B.; Chung, Tammy; Nagel, Bonnie J.; Nichols, B. Nolan; Rohlfing, Torsten; Chu, Weiwei; Pohl, Kilian M.; Pfefferbaum, Adolf
2015-01-01
Objective To investigate development of cognitive and motor functions in healthy adolescents and to explore whether hazardous drinking affects the normal developmental course of those functions. Method Participants were 831 adolescents recruited across five United States sites of the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA): 692 met criteria for no/low alcohol exposure, and 139 exceeded drinking thresholds. Cross-sectional, baseline data were collected with computerized and traditional neuropsychological tests assessing eight functional domains expressed as composite scores. General additive modeling evaluated factors potentially modulating performance (age, sex, ethnicity, socioeconomic status, and pubertal developmental stage). Results Older no/low-drinking participants achieved better scores than younger ones on five Accuracy composites (General Ability, Abstraction, Attention, Emotion, and Balance). Speeded responses for Attention, Motor Speed, and General Ability were sensitive to age and pubertal development. The exceeds-threshold group (accounting for age, sex, and other demographic factors) performed significantly below the no/low-drinking group on Balance accuracy and on General Ability, Attention, Episodic Memory, Emotion, and Motor speed scores and showed evidence for faster speed at the expense of accuracy. Delay Discounting performance was consistent with poor impulse control in the younger no/low drinkers and in exceeds-threshold drinkers regardless of age. Conclusions Higher achievement with older age and pubertal stage in General Ability, Abstraction, Attention, Emotion, and Balance suggests continued functional development through adolescence, possibly supported by concurrently maturing frontal, limbic, and cerebellar brain systems. Whether low scores by the exceeds-threshold group resulted from drinking or from other pre-existing factors requires longitudinal study. PMID:26752122
Sullivan, Edith V; Brumback, Ty; Tapert, Susan F; Fama, Rosemary; Prouty, Devin; Brown, Sandra A; Cummins, Kevin; Thompson, Wesley K; Colrain, Ian M; Baker, Fiona C; De Bellis, Michael D; Hooper, Stephen R; Clark, Duncan B; Chung, Tammy; Nagel, Bonnie J; Nichols, B Nolan; Rohlfing, Torsten; Chu, Weiwei; Pohl, Kilian M; Pfefferbaum, Adolf
2016-05-01
To investigate development of cognitive and motor functions in healthy adolescents and to explore whether hazardous drinking affects the normal developmental course of those functions. Participants were 831 adolescents recruited across 5 United States sites of the National Consortium on Alcohol and NeuroDevelopment in Adolescence 692 met criteria for no/low alcohol exposure, and 139 exceeded drinking thresholds. Cross-sectional, baseline data were collected with computerized and traditional neuropsychological tests assessing 8 functional domains expressed as composite scores. General additive modeling evaluated factors potentially modulating performance (age, sex, ethnicity, socioeconomic status, and pubertal developmental stage). Older no/low-drinking participants achieved better scores than younger ones on 5 accuracy composites (general ability, abstraction, attention, emotion, and balance). Speeded responses for attention, motor speed, and general ability were sensitive to age and pubertal development. The exceeds-threshold group (accounting for age, sex, and other demographic factors) performed significantly below the no/low-drinking group on balance accuracy and on general ability, attention, episodic memory, emotion, and motor speed scores and showed evidence for faster speed at the expense of accuracy. Delay Discounting performance was consistent with poor impulse control in the younger no/low drinkers and in exceeds-threshold drinkers regardless of age. Higher achievement with older age and pubertal stage in general ability, abstraction, attention, emotion, and balance suggests continued functional development through adolescence, possibly supported by concurrently maturing frontal, limbic, and cerebellar brain systems. Determination of whether low scores by the exceeds-threshold group resulted from drinking or from other preexisting factors requires longitudinal study. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Kaiju, Taro; Doi, Keiichi; Yokota, Masashi; Watanabe, Kei; Inoue, Masato; Ando, Hiroshi; Takahashi, Kazutaka; Yoshida, Fumiaki; Hirata, Masayuki; Suzuki, Takafumi
2017-01-01
Electrocorticogram (ECoG) has great potential as a source signal, especially for clinical BMI. Until recently, ECoG electrodes were commonly used for identifying epileptogenic foci in clinical situations, and such electrodes were low-density and large. Increasing the number and density of recording channels could enable the collection of richer motor/sensory information, and may enhance the precision of decoding and increase opportunities for controlling external devices. Several reports have aimed to increase the number and density of channels. However, few studies have discussed the actual validity of high-density ECoG arrays. In this study, we developed novel high-density flexible ECoG arrays and conducted decoding analyses with monkey somatosensory evoked potentials (SEPs). Using MEMS technology, we made 96-channel Parylene electrode arrays with an inter-electrode distance of 700 μm and recording site area of 350 μm 2 . The arrays were mainly placed onto the finger representation area in the somatosensory cortex of the macaque, and partially inserted into the central sulcus. With electrical finger stimulation, we successfully recorded and visualized finger SEPs with a high spatiotemporal resolution. We conducted offline analyses in which the stimulated fingers and intensity were predicted from recorded SEPs using a support vector machine. We obtained the following results: (1) Very high accuracy (~98%) was achieved with just a short segment of data (~15 ms from stimulus onset). (2) High accuracy (~96%) was achieved even when only a single channel was used. This result indicated placement optimality for decoding. (3) Higher channel counts generally improved prediction accuracy, but the efficacy was small for predictions with feature vectors that included time-series information. These results suggest that ECoG signals with high spatiotemporal resolution could enable greater decoding precision or external device control.
Kaiju, Taro; Doi, Keiichi; Yokota, Masashi; Watanabe, Kei; Inoue, Masato; Ando, Hiroshi; Takahashi, Kazutaka; Yoshida, Fumiaki; Hirata, Masayuki; Suzuki, Takafumi
2017-01-01
Electrocorticogram (ECoG) has great potential as a source signal, especially for clinical BMI. Until recently, ECoG electrodes were commonly used for identifying epileptogenic foci in clinical situations, and such electrodes were low-density and large. Increasing the number and density of recording channels could enable the collection of richer motor/sensory information, and may enhance the precision of decoding and increase opportunities for controlling external devices. Several reports have aimed to increase the number and density of channels. However, few studies have discussed the actual validity of high-density ECoG arrays. In this study, we developed novel high-density flexible ECoG arrays and conducted decoding analyses with monkey somatosensory evoked potentials (SEPs). Using MEMS technology, we made 96-channel Parylene electrode arrays with an inter-electrode distance of 700 μm and recording site area of 350 μm2. The arrays were mainly placed onto the finger representation area in the somatosensory cortex of the macaque, and partially inserted into the central sulcus. With electrical finger stimulation, we successfully recorded and visualized finger SEPs with a high spatiotemporal resolution. We conducted offline analyses in which the stimulated fingers and intensity were predicted from recorded SEPs using a support vector machine. We obtained the following results: (1) Very high accuracy (~98%) was achieved with just a short segment of data (~15 ms from stimulus onset). (2) High accuracy (~96%) was achieved even when only a single channel was used. This result indicated placement optimality for decoding. (3) Higher channel counts generally improved prediction accuracy, but the efficacy was small for predictions with feature vectors that included time-series information. These results suggest that ECoG signals with high spatiotemporal resolution could enable greater decoding precision or external device control. PMID:28442997
In vivo precision of conventional and digital methods of obtaining complete-arch dental impressions.
Ender, Andreas; Attin, Thomas; Mehl, Albert
2016-03-01
Digital impression systems have undergone significant development in recent years, but few studies have investigated the accuracy of the technique in vivo, particularly compared with conventional impression techniques. The purpose of this in vivo study was to investigate the precision of conventional and digital methods for complete-arch impressions. Complete-arch impressions were obtained using 5 conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; digitized scannable vinylsiloxanether, VSES-D; and irreversible hydrocolloid, ALG) and 7 digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; Lava COS, LAV; Lava True Definition Scanner, T-Def; 3Shape Trios, TRI; and 3Shape Trios Color, TRC) techniques. Impressions were made 3 times each in 5 participants (N=15). The impressions were then compared within and between the test groups. The cast surfaces were measured point-to-point using the signed nearest neighbor method. Precision was calculated from the (90%-10%)/2 percentile value. The precision ranged from 12.3 μm (VSE) to 167.2 μm (ALG), with the highest precision in the VSE and VSES groups. The deviation pattern varied distinctly according to the impression method. Conventional impressions showed the highest accuracy across the complete dental arch in all groups, except for the ALG group. Conventional and digital impression methods differ significantly in the complete-arch accuracy. Digital impression systems had higher local deviations within the complete arch cast; however, they achieve equal and higher precision than some conventional impression materials. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
A 30-day-ahead forecast model for grass pollen in north London, United Kingdom.
Smith, Matt; Emberlin, Jean
2006-03-01
A 30-day-ahead forecast method has been developed for grass pollen in north London. The total period of the grass pollen season is covered by eight multiple regression models, each covering a 10-day period running consecutively from 21 May to 8 August. This means that three models were used for each 30-day forecast. The forecast models were produced using grass pollen and environmental data from 1961 to 1999 and tested on data from 2000 and 2002. Model accuracy was judged in two ways: the number of times the forecast model was able to successfully predict the severity (relative to the 1961-1999 dataset as a whole) of grass pollen counts in each of the eight forecast periods on a scale of 1 to 4; the number of times the forecast model was able to predict whether grass pollen counts were higher or lower than the mean. The models achieved 62.5% accuracy in both assessment years when predicting the relative severity of grass pollen counts on a scale of 1 to 4, which equates to six of the eight 10-day periods being forecast correctly. The models attained 87.5% and 100% accuracy in 2000 and 2002, respectively, when predicting whether grass pollen counts would be higher or lower than the mean. Attempting to predict pollen counts during distinct 10-day periods throughout the grass pollen season is a novel approach. The models also employed original methodology in the use of winter averages of the North Atlantic Oscillation to forecast 10-day means of allergenic pollen counts.
NASA Astrophysics Data System (ADS)
Law, Yan Nei; Lieng, Monica Keiko; Li, Jingmei; Khoo, David Aik-Aun
2014-03-01
Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.
Speech recognition for embedded automatic positioner for laparoscope
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Yin, Qingyun; Wang, Yi; Yu, Daoyin
2014-07-01
In this paper a novel speech recognition methodology based on Hidden Markov Model (HMM) is proposed for embedded Automatic Positioner for Laparoscope (APL), which includes a fixed point ARM processor as the core. The APL system is designed to assist the doctor in laparoscopic surgery, by implementing the specific doctor's vocal control to the laparoscope. Real-time respond to the voice commands asks for more efficient speech recognition algorithm for the APL. In order to reduce computation cost without significant loss in recognition accuracy, both arithmetic and algorithmic optimizations are applied in the method presented. First, depending on arithmetic optimizations most, a fixed point frontend for speech feature analysis is built according to the ARM processor's character. Then the fast likelihood computation algorithm is used to reduce computational complexity of the HMM-based recognition algorithm. The experimental results show that, the method shortens the recognition time within 0.5s, while the accuracy higher than 99%, demonstrating its ability to achieve real-time vocal control to the APL.
Collimation testing using slit Fresnel diffraction
NASA Astrophysics Data System (ADS)
Luo, Xiaohe; Hui, Mei; Wang, Shanshan; Hou, Yinlong; Zhou, Siyu; Zhu, Qiudong
2018-03-01
A simple collimation testing method based on slit Fresnel diffraction is proposed. The method needs only a CMOS and a slit with no requirement in dimensional accuracy. The light beam to be tested diffracts across the slit and forms a Fresnel diffraction pattern received by CMOS. After analysis, the defocusing amount and the distance between the primary peak point and secondary peak point of diffraction pattern fulfill an expression relationship and then the defocusing amount can be deduced from the expression. The method is applied to both the coherent beam and partially coherent beam, and these two beams are emitted from a laser and light-emitting diode (LED) with a spectrum width of about 50 nm in this paper. Simulations show that the wide spectrum of LED has the effect of smooth filtering to provide higher accuracy. Experiments show that the LED with a spectrum width of about 50 nm has a lower limitation error than the laser and can achieve up to 58.1601 μm with focal length 200 mm and slit width 15 mm.
Nurses' maths: researching a practical approach.
Wilson, Ann
To compare a new practical maths test with a written maths test. The tests were undertaken by qualified nurses training for intravenous drug administration, a skill dependent on maths accuracy. The literature showed that the higher education institutes (HEIs) that provide nurse training use traditional maths tests, a practical way of testing maths had not been described. Fifty five nurses undertook two maths tests based on intravenous drug calculations. One was a traditional written test. The second was a new type of test using a simulated clinical environment. All participants were also interviewed one week later to ascertain their thoughts and feelings about the tests. There was a significant improvement in maths test scores for those nurses who took the practical maths test first. It is suggested that this is because it improved their conceptualisation skills and thus helped them to achieve accuracy in their calculations. Written maths tests are not the best way to help and support nurses in acquiring and improving their maths skills and should be replaced by a more practical approach.
Guillem, M Salud; Sahakian, Alan V; Swiryn, Steven
2008-01-01
The objective of this study was the evaluation of the accuracy of Dower inverse transform for the derivation of the P wave in orthogonal leads. We tested the accuracy of Dower transform on the P wave and compared it with a P-wave-optimized transform in a database of 123 simultaneous recordings of electrocardiograms and vectorcardiograms. This new transform achieved a lower error when we compared derived vs true measured P waves (mean +/- SD, 12.2 +/- 8.0 VRMS) than Dower transform (14.4 +/- 9.5 Root mean squared voltage) and higher correlation values (Rx, 0.93 +/- 0.12; Ry, 0.90 +/- 0.27; Rz, 0.91 +/- 0.18; vs Dower: Rx, 0.88 +/- 0.15; Ry, 0.91 +/- 0.26; Rz, 0.85 +/- 0.23). We conclude that derivation of orthogonal leads for the P wave can be improved by using an atrial-based transform matrix.
Nonlinear Network Description for Many-Body Quantum Systems in Continuous Space
NASA Astrophysics Data System (ADS)
Ruggeri, Michele; Moroni, Saverio; Holzmann, Markus
2018-05-01
We show that the recently introduced iterative backflow wave function can be interpreted as a general neural network in continuum space with nonlinear functions in the hidden units. Using this wave function in variational Monte Carlo simulations of liquid 4He in two and three dimensions, we typically find a tenfold increase in accuracy over currently used wave functions. Furthermore, subsequent stages of the iteration procedure define a set of increasingly good wave functions, each with its own variational energy and variance of the local energy: extrapolation to zero variance gives energies in close agreement with the exact values. For two dimensional 4He, we also show that the iterative backflow wave function can describe both the liquid and the solid phase with the same functional form—a feature shared with the shadow wave function, but now joined by much higher accuracy. We also achieve significant progress for liquid 3He in three dimensions, improving previous variational and fixed-node energies.
Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M
2014-01-01
The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.
NASA Astrophysics Data System (ADS)
Li, Ping; Jin, Tan; Guo, Zongfu; Lu, Ange; Qu, Meina
2016-10-01
High efficiency machining of large precision optical surfaces is a challenging task for researchers and engineers worldwide. The higher form accuracy and lower subsurface damage helps to significantly reduce the cycle time for the following polishing process, save the cost of production, and provide a strong enabling technology to support the large telescope and laser energy fusion projects. In this paper, employing an Infeed Grinding (IG) mode with a rotary table and a cup wheel, a multi stage grinding process chain, as well as precision compensation technology, a Φ300mm diameter plano mirror is ground by the Schneider Surfacing Center SCG 600 that delivers a new level of quality and accuracy when grinding such large flats. Results show a PV form error of Pt<2 μm, the surface roughness Ra<30 nm and Rz<180 nm, with subsurface damage <20 μm, and a material removal rates of up to 383.2 mm3/s.
NASA Astrophysics Data System (ADS)
Lafont, F.; Ribeiro-Palau, R.; Kazazis, D.; Michon, A.; Couturaud, O.; Consejo, C.; Chassagne, T.; Zielinski, M.; Portail, M.; Jouault, B.; Schopfer, F.; Poirier, W.
2015-04-01
Replacing GaAs by graphene to realize more practical quantum Hall resistance standards (QHRS), accurate to within 10-9 in relative value, but operating at lower magnetic fields than 10 T, is an ongoing goal in metrology. To date, the required accuracy has been reported, only few times, in graphene grown on SiC by Si sublimation, under higher magnetic fields. Here, we report on a graphene device grown by chemical vapour deposition on SiC, which demonstrates such accuracies of the Hall resistance from 10 T up to 19 T at 1.4 K. This is explained by a quantum Hall effect with low dissipation, resulting from strongly localized bulk states at the magnetic length scale, over a wide magnetic field range. Our results show that graphene-based QHRS can replace their GaAs counterparts by operating in as-convenient cryomagnetic conditions, but over an extended magnetic field range. They rely on a promising hybrid and scalable growth method and a fabrication process achieving low-electron-density devices.
Measurement of Flat Slab Deformations by the Multi-Image Photogrammetry Method
NASA Astrophysics Data System (ADS)
Marčiš, Marián; Fraštia, Marek; Augustín, Tomáš
2017-12-01
The use of photogrammetry during load tests of building components is a common practise all over the world. It is very effective thanks to its contactless approach, 3D measurement, fast data collection, and partial or full automation of image processing; it can deliver very accurate results. Multi-image convergent photogrammetry supported by artificial coded targets is the most accurate photogrammetric method when the targets are detected in an image with a higher degree of accuracy than a 0.1 pixel. It is possible to achieve an accuracy of 0.03 mm for all the points measured on the object observed if the camera is close enough to the object, and the positions of the camera and the number of shots are precisely planned. This contribution deals with the design of a special hanging frame for a DSLR camera used during the photogrammetric measurement of the deformation of flat concrete slab. The results of the photogrammetric measurements are compared to the results from traditional contact measurement techniques during load tests.
NASA Astrophysics Data System (ADS)
Okumura, Hiroshi; Suezaki, Masashi; Sueyasu, Hideki; Arai, Kohei
2003-03-01
An automated method that can select corresponding point candidates is developed. This method has the following three features: 1) employment of the RIN-net for corresponding point candidate selection; 2) employment of multi resolution analysis with Haar wavelet transformation for improvement of selection accuracy and noise tolerance; 3) employment of context information about corresponding point candidates for screening of selected candidates. Here, the 'RIN-net' means the back-propagation trained feed-forward 3-layer artificial neural network that feeds rotation invariants as input data. In our system, pseudo Zernike moments are employed as the rotation invariants. The RIN-net has N x N pixels field of view (FOV). Some experiments are conducted to evaluate corresponding point candidate selection capability of the proposed method by using various kinds of remotely sensed images. The experimental results show the proposed method achieves fewer training patterns, less training time, and higher selection accuracy than conventional method.
Video Feedback in Key Word Signing Training for Preservice Direct Support Staff.
Rombouts, Ellen; Meuris, Kristien; Maes, Bea; De Meyer, Anne-Marie; Zink, Inge
2016-04-01
Research has demonstrated that formal training is essential for professionals to learn key word signing. Yet, the particular didactic strategies have not been studied. Therefore, this study compared the effectiveness of verbal and video feedback in a key word signing training for future direct support staff. Forty-nine future direct support staff were randomly assigned to 1 of 3 key word signing training programs: modeling and verbal feedback (classical method [CM]), additional video feedback (+ViF), and additional video feedback and photo reminder (+ViF/R). Signing accuracy and training acceptability were measured 1 week after and 7 months after training. Participants from the +ViF/R program achieved significantly higher signing accuracy compared with the CM group. Acceptability ratings did not differ between any of the groups. Results suggest that at an equal time investment, the programs containing more training components were more effective. Research on the effect of rehearsal on signing maintenance is warranted.
Design of an autofocus capsule endoscope system and the corresponding 3D reconstruction algorithm.
Zhang, Wei; Jin, Yi-Tao; Guo, Xin; Su, Jin-Hui; You, Su-Ping
2016-10-01
A traditional capsule endoscope can only take 2D images, and most of the images are not clear enough to be used for diagnosing. A 3D capsule endoscope can help doctors make a quicker and more accurate diagnosis. However, blurred images negatively affect reconstruction accuracy. A compact, autofocus capsule endoscope system is designed in this study. Using a liquid lens, the system can be electronically controlled to autofocus, and without any moving elements. The depth of field of the system is in the 3-100 mm range and its field of view is about 110°. The images captured by this optical system are much clearer than those taken by a traditional capsule endoscope. A 3D reconstruction algorithm is presented to adapt to the zooming function of our proposed system. Simulations and experiments have shown that more feature points can be correctly matched and a higher reconstruction accuracy can be achieved by this strategy.
Iris Location Algorithm Based on the CANNY Operator and Gradient Hough Transform
NASA Astrophysics Data System (ADS)
Zhong, L. H.; Meng, K.; Wang, Y.; Dai, Z. Q.; Li, S.
2017-12-01
In the iris recognition system, the accuracy of the localization of the inner and outer edges of the iris directly affects the performance of the recognition system, so iris localization has important research meaning. Our iris data contain eyelid, eyelashes, light spot and other noise, even the gray transformation of the images is not obvious, so the general methods of iris location are unable to realize the iris location. The method of the iris location based on Canny operator and gradient Hough transform is proposed. Firstly, the images are pre-processed; then, calculating the gradient information of images, the inner and outer edges of iris are coarse positioned using Canny operator; finally, according to the gradient Hough transform to realize precise localization of the inner and outer edge of iris. The experimental results show that our algorithm can achieve the localization of the inner and outer edges of the iris well, and the algorithm has strong anti-interference ability, can greatly reduce the location time and has higher accuracy and stability.
Smith, A F; Baxter, S D; Hitchcock, D B; Finney, C J; Royer, J A; Guinn, C H
2016-09-01
To investigate the relationship of reporting accuracy in 24-h dietary recalls to child-respondent characteristics-cognitive ability, social desirability, body mass index (BMI) percentile and socioeconomic status (SES). Fourth-grade children (mean age 10.1 years) were observed eating two school meals and interviewed about dietary intake for 24 h that included those meals. (Eight multiple-pass interview protocols operationalized the conditions of an experiment that crossed two retention intervals-short and long-with four prompts (ways of eliciting reports in the first pass)). Academic achievement-test scores indexed cognitive ability; social desirability was assessed by questionnaire; height and weight were measured to calculate BMI; nutrition-assistance program eligibility information was obtained to index SES. Reported intake was compared to observed intake to calculate measures of reporting accuracy for school meals at the food-item (omission rate; intrusion rate) and energy (correspondence rate; inflation ratio) levels. Complete data were available for 425 of 480 validation-study participants. Controlling for manipulated variables and other measured respondent characteristics, for one or more of the outcome variables, reporting accuracy increased with cognitive ability (omission rate, intrusion rate, correspondence rate, P<0.001), decreased with social desirability (correspondence rate, P<0.0004), decreased with BMI percentile (correspondence rate, P=0.001) and was better by higher- than by lower-SES children (intrusion rate, P=0.001). Some of these effects were moderated by interactions with retention interval and sex. Children's dietary-reporting accuracy is systematically related to such respondent characteristics as cognitive ability, social desirability, BMI percentile and SES.
Marheineke, Nadine; Scherer, Uta; Rücker, Martin; von See, Constantin; Rahlf, Björn; Gellrich, Nils-Claudius; Stoetzer, Marcus
2018-06-01
Dental implant failure and insufficient osseointegration are proven results of mechanical and thermal damage during the surgery process. We herein performed a comparative study of a less invasive single-step drilling preparation protocol and a conventional multiple drilling sequence. Accuracy of drilling holes was precisely analyzed and the influence of different levels of expertise of the handlers and additional use of drill template guidance was evaluated. Six experimental groups, deployed in an osseous study model, were representing template-guided and freehanded drilling actions in a stepwise drilling procedure in comparison to a single-drill protocol. Each experimental condition was studied by the drilling actions of respectively three persons without surgical knowledge as well as three highly experienced oral surgeons. Drilling actions were performed and diameters were recorded with a precision measuring instrument. Less experienced operators were able to significantly increase the drilling accuracy using a guiding template, especially when multi-step preparations are performed. Improved accuracy without template guidance was observed when experienced operators were executing single-step versus multi-step technique. Single-step drilling protocols have shown to produce more accurate results than multi-step procedures. The outcome of any protocol can be further improved by use of guiding templates. Operator experience can be a contributing factor. Single-step preparations are less invasive and are promoting osseointegration. Even highly experienced surgeons are achieving higher levels of accuracy by combining this technique with template guidance. Hereby template guidance enables a reduction of hands-on time and side effects during surgery and lead to a more predictable clinical diameter.
Land use/land cover mapping using multi-scale texture processing of high resolution data
NASA Astrophysics Data System (ADS)
Wong, S. N.; Sarker, M. L. R.
2014-02-01
Land use/land cover (LULC) maps are useful for many purposes, and for a long time remote sensing techniques have been used for LULC mapping using different types of data and image processing techniques. In this research, high resolution satellite data from IKONOS was used to perform land use/land cover mapping in Johor Bahru city and adjacent areas (Malaysia). Spatial image processing was carried out using the six texture algorithms (mean, variance, contrast, homogeneity, entropy, and GLDV angular second moment) with five difference window sizes (from 3×3 to 11×11). Three different classifiers i.e. Maximum Likelihood Classifier (MLC), Artificial Neural Network (ANN) and Supported Vector Machine (SVM) were used to classify the texture parameters of different spectral bands individually and all bands together using the same training and validation samples. Results indicated that texture parameters of all bands together generally showed a better performance (overall accuracy = 90.10%) for land LULC mapping, however, single spectral band could only achieve an overall accuracy of 72.67%. This research also found an improvement of the overall accuracy (OA) using single-texture multi-scales approach (OA = 89.10%) and single-scale multi-textures approach (OA = 90.10%) compared with all original bands (OA = 84.02%) because of the complementary information from different bands and different texture algorithms. On the other hand, all of the three different classifiers have showed high accuracy when using different texture approaches, but SVM generally showed higher accuracy (90.10%) compared to MLC (89.10%) and ANN (89.67%) especially for the complex classes such as urban and road.
Li, Dongrui; Cheng, Zhigang; Chen, Gang; Liu, Fangyi; Wu, Wenbo; Yu, Jie; Gu, Ying; Liu, Fengyong; Ren, Chao; Liang, Ping
2018-04-03
To test the accuracy and efficacy of the multimodality imaging-compatible insertion robot with a respiratory motion calibration module designed for ablation of liver tumors in phantom and animal models. To evaluate and compare the influences of intervention experience on robot-assisted and ultrasound-controlled ablation procedures. Accuracy tests on rigid body/phantom model with a respiratory movement simulation device and microwave ablation tests on porcine liver tumor/rabbit liver cancer were performed with the robot we designed or with the traditional ultrasound-guidance by physicians with or without intervention experience. In the accuracy tests performed by the physicians without intervention experience, the insertion accuracy and efficiency of robot-assisted group was higher than those of ultrasound-guided group with statistically significant differences. In the microwave ablation tests performed by the physicians without intervention experience, better complete ablation rate was achieved when applying the robot. In the microwave ablation tests performed by the physicians with intervention experience, there was no statistically significant difference of the insertion number and total ablation time between the robot-assisted group and the ultrasound-controlled group. The evaluation by the NASA-TLX suggested that the robot-assisted insertion and microwave ablation process performed by physicians with or without experience were more comfortable. The multimodality imaging-compatible insertion robot with a respiratory motion calibration module designed for ablation of liver tumors could increase the insertion accuracy and ablation efficacy, and minimize the influence of the physicians' experience. The ablation procedure could be more comfortable with less stress with the application of the robot.
Role of endoscopic ultrasonography in the staging and follow-up of esophageal cancer.
Lightdale, Charles J; Kulkarni, Ketan G
2005-07-10
To evaluate the role of endoscopic ultrasonography (EUS) in the initial staging and follow-up of esophageal cancer on the basis of a review of the published literature. Articles published from 1985 to 2005 were searched and reviewed using the following keywords: "esophageal cancer staging," "endoscopic ultrasound," and "endoscopic ultrasonography." For initial anatomic staging, EUS results have consistently shown more than 80% accuracy compared with surgical pathology for depth of tumor invasion (T). Accuracy increased with higher stage, and was >90% for T3 cancer. EUS results have shown accuracy in the range of 75% for initial staging of regional lymph nodes (N). EUS has been invariably more accurate than computed tomography for T and N staging. EUS is limited for staging distant metastases (M), and therefore EUS is usually performed after a body imaging modality such as computed tomography or positron emission tomography. Pathologic staging can be achieved at EUS using fine-needle aspiration (FNA) to obtain cytology from suspect Ns. FNA has had greatest efficacy in confirming celiac axis lymph node metastases with more than 90% accuracy. EUS is inaccurate for staging after radiation and chemotherapy because of inability to distinguish inflammation and fibrosis from residual cancer, but a more than 50% decrease in tumor cross-sectional area or diameter has been found to correlate with treatment response. EUS has a central role in the initial anatomic staging of esophageal cancer because of its high accuracy in determining the extent of locoregional disease. EUS is inaccurate for staging after radiation therapy and chemotherapy, but can be useful in assessing treatment response.
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5. com/muscle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ranganathan, V; Kumar, P; Bzdusek, K
Purpose: We propose a novel data-driven method to predict the achievability of clinical objectives upfront before invoking the IMRT optimization. Methods: A new metric called “Geometric Complexity (GC)” is used to estimate the achievability of clinical objectives. Here, GC is the measure of the number of “unmodulated” beamlets or rays that intersect the Region-of-interest (ROI) and the target volume. We first compute the geometric complexity ratio (GCratio) between the GC of a ROI (say, parotid) in a reference plan and the GC of the same ROI in a given plan. The GCratio of a ROI indicates the relative geometric complexitymore » of the ROI as compared to the same ROI in the reference plan. Hence GCratio can be used to predict if a defined clinical objective associated with the ROI can be met by the optimizer for a given case. Basically a higher GCratio indicates a lesser likelihood for the optimizer to achieve the clinical objective defined for a given ROI. Similarly, a lower GCratio indicates a higher likelihood for the optimizer to achieve the clinical objective defined for the given ROI. We have evaluated the proposed method on four Head and Neck cases using Pinnacle3 (version 9.10.0) Treatment Planning System (TPS). Results: Out of the total of 28 clinical objectives from four head and neck cases included in the study, 25 were in agreement with the prediction, which implies an agreement of about 85% between predicted and obtained results. The Pearson correlation test shows a positive correlation between predicted and obtained results (Correlation = 0.82, r2 = 0.64, p < 0.005). Conclusion: The study demonstrates the feasibility of the proposed method in head and neck cases for predicting the achievability of clinical objectives with reasonable accuracy.« less
A new fabrication method for precision antenna reflectors for space flight and ground test
NASA Technical Reports Server (NTRS)
Sharp, G. Richard; Wanhainen, Joyce S.; Ketelsen, Dean A.
1991-01-01
Communications satellites are using increasingly higher frequencies that require increasingly precise antenna reflectors for use in space. Traditional industry fabrication methods for space antenna reflectors employ successive modeling techniques using high- and low-temperature molds for reflector face sheets and then a final fit-up of the completed honeycomb sandwich panel antenna reflector to a master pattern. However, as new missions are planned at much higher frequencies, greater accuracies will be necessary than are achievable using these present methods. A new approach for the fabrication of ground-test solid-surface antenna reflectors is to build a rigid support structure with an easy-to-machine surface. This surface is subsequently machined to the desired reflector contour and coated with a radio-frequency-reflective surface. This method was used to fabricate a 2.7-m-diameter ground-test antenna reflector to an accuracy of better than 0.013 mm (0.0005 in.) rms. A similar reflector for use on spacecraft would be constructed in a similar manner but with space-qualified materials. The design, analysis, and fabrication of the 2.7-m-diameter precision antenna reflector for antenna ground tests and the extension of this technology to precision, space-based antenna reflectors are described.
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Qing; Wang, Jiang; Yu, Haitao
Mathematical models provide a mathematical description of neuron activity, which can better understand and quantify neural computations and corresponding biophysical mechanisms evoked by stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system to achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, considering the neuronal spiking event as a Gamma stochastic process. The scale parameter and the shape parameter of Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-spacemore » method. Then, leaky integrate-and-fire (LIF) model is used to mimic the response system and the estimated spiking characteristics are transformed into two temporal input parameters of LIF model, through two conversion formulas. We test this reconstruction method by three different groups of simulation data. All three groups of estimates reconstruct input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus conditions, estimated input parameters have an obvious difference. The higher the frequency of the acupuncture stimulus is, the higher the accuracy of reconstruction is.« less
NASA Astrophysics Data System (ADS)
Andersen, Mie; Plaisance, Craig P.; Reuter, Karsten
2017-10-01
First-principles screening studies aimed at predicting the catalytic activity of transition metal (TM) catalysts have traditionally been based on mean-field (MF) microkinetic models, which neglect the effect of spatial correlations in the adsorbate layer. Here we critically assess the accuracy of such models for the specific case of CO methanation over stepped metals by comparing to spatially resolved kinetic Monte Carlo (kMC) simulations. We find that the typical low diffusion barriers offered by metal surfaces can be significantly increased at step sites, which results in persisting correlations in the adsorbate layer. As a consequence, MF models may overestimate the catalytic activity of TM catalysts by several orders of magnitude. The potential higher accuracy of kMC models comes at a higher computational cost, which can be especially challenging for surface reactions on metals due to a large disparity in the time scales of different processes. In order to overcome this issue, we implement and test a recently developed algorithm for achieving temporal acceleration of kMC simulations. While the algorithm overall performs quite well, we identify some challenging cases which may lead to a breakdown of acceleration algorithms and discuss possible directions for future algorithm development.
Fast Face-Recognition Optical Parallel Correlator Using High Accuracy Correlation Filter
NASA Astrophysics Data System (ADS)
Watanabe, Eriko; Kodate, Kashiko
2005-11-01
We designed and fabricated a fully automatic fast face recognition optical parallel correlator [E. Watanabe and K. Kodate: Appl. Opt. 44 (2005) 5666] based on the VanderLugt principle. The implementation of an as-yet unattained ultra high-speed system was aided by reconfiguring the system to make it suitable for easier parallel processing, as well as by composing a higher accuracy correlation filter and high-speed ferroelectric liquid crystal-spatial light modulator (FLC-SLM). In running trial experiments using this system (dubbed FARCO), we succeeded in acquiring remarkably low error rates of 1.3% for false match rate (FMR) and 2.6% for false non-match rate (FNMR). Given the results of our experiments, the aim of this paper is to examine methods of designing correlation filters and arranging database image arrays for even faster parallel correlation, underlining the issues of calculation technique, quantization bit rate, pixel size and shift from optical axis. The correlation filter has proved its excellent performance and higher precision than classical correlation and joint transform correlator (JTC). Moreover, arrangement of multi-object reference images leads to 10-channel correlation signals, as sharply marked as those of a single channel. This experiment result demonstrates great potential for achieving the process speed of 10000 face/s.
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which can better understand and quantify neural computations and corresponding biophysical mechanisms evoked by stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system to achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, considering the neuronal spiking event as a Gamma stochastic process. The scale parameter and the shape parameter of Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, leaky integrate-and-fire (LIF) model is used to mimic the response system and the estimated spiking characteristics are transformed into two temporal input parameters of LIF model, through two conversion formulas. We test this reconstruction method by three different groups of simulation data. All three groups of estimates reconstruct input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus conditions, estimated input parameters have an obvious difference. The higher the frequency of the acupuncture stimulus is, the higher the accuracy of reconstruction is.
2018-01-01
Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome the said problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publically available datasets, we showed that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with the small amount of training data as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which equates favorably with the state-of-the-art methods. PMID:29304512
Assessment of the effects of CT dose in averaged x-ray CT images of a dose-sensitive polymer gel
NASA Astrophysics Data System (ADS)
Kairn, T.; Kakakhel, M. B.; Johnston, H.; Jirasek, A.; Trapp, J. V.
2015-01-01
The signal-to-noise ratio achievable in x-ray computed tomography (CT) images of polymer gels can be increased by averaging over multiple scans of each sample. However, repeated scanning delivers a small additional dose to the gel which may compromise the accuracy of the dose measurement. In this study, a NIPAM-based polymer gel was irradiated and then CT scanned 25 times, with the resulting data used to derive an averaged image and a "zero-scan" image of the gel. Comparison between these two results and the first scan of the gel showed that the averaged and zero-scan images provided better contrast, higher contrast-to- noise and higher signal-to-noise than the initial scan. The pixel values (Hounsfield units, HU) in the averaged image were not noticeably elevated, compared to the zero-scan result and the gradients used in the linear extrapolation of the zero-scan images were small and symmetrically distributed around zero. These results indicate that the averaged image was not artificially lightened by the small, additional dose delivered during CT scanning. This work demonstrates the broader usefulness of the zero-scan method as a means to verify the dosimetric accuracy of gel images derived from averaged x-ray CT data.
Pitch sensation involves stochastic resonance
Martignoli, Stefan; Gomez, Florian; Stoop, Ruedi
2013-01-01
Pitch is a complex hearing phenomenon that results from elicited and self-generated cochlear vibrations. Read-off vibrational information is relayed higher up the auditory pathway, where it is then condensed into pitch sensation. How this can adequately be described in terms of physics has largely remained an open question. We have developed a peripheral hearing system (in hardware and software) that reproduces with great accuracy all salient pitch features known from biophysical and psychoacoustic experiments. At the level of the auditory nerve, the system exploits stochastic resonance to achieve this performance, which may explain the large amount of noise observed in the working auditory nerve. PMID:24045830
Development of silicon carbide mirrors: the example of the Sofia secondary mirror
NASA Astrophysics Data System (ADS)
Fruit, Michel; Antoine, Pascal
2017-11-01
The 352 mm tip-tilt SOFIA Secondary Mirror has been developed by the ASTRIUM / BOOSTEC joint venture SiCSPACE, taking full benefit of the intrinsic properties of the BOOSTEC S-SiC sintered material, associated to qualified processes specifically developed for space borne mirrors by ASTRIUM SAS. Achieved performances include a low mass of 1.7 kg, a very high stiffness with a first resonant frequency higher than 2000 Hz and an optical surface accuracy corresponding to a maximum WFE of 50 nm rms. This mirror is part of the joint NASA-DLR project for a 2.5 m airborne Stratospheric Observatory For Infrared Astronomy (SOFIA).
Boeing Low-Thrust Geosynchronous Transfer Mission Experience
NASA Technical Reports Server (NTRS)
Poole, Mark; Ho, Monte
2007-01-01
Since 2000, Boeing 702 satellites have used electric propulsion for transfer to geostationary orbits. The use of the 25cm Xenon Ion Propulsion System (25cm XIPS) results in more than a tenfold increase in specific impulse with the corresponding decrease in propellant mass needed to complete the mission when compared to chemical propulsion[1]. In addition to more favorable mass properties, with the use of XIPS, the 702 has been able to achieve orbit insertions with higher accuracy than it would have been possible with the use of chemical thrusters. This paper describes the experience attained by using the 702 XIPS ascent strategy to transfer satellite to geosynchronous orbits.
nextPARS: parallel probing of RNA structures in Illumina
Saus, Ester; Willis, Jesse R.; Pryszcz, Leszek P.; Hafez, Ahmed; Llorens, Carlos; Himmelbauer, Heinz
2018-01-01
RNA molecules play important roles in virtually every cellular process. These functions are often mediated through the adoption of specific structures that enable RNAs to interact with other molecules. Thus, determining the secondary structures of RNAs is central to understanding their function and evolution. In recent years several sequencing-based approaches have been developed that allow probing structural features of thousands of RNA molecules present in a sample. Here, we describe nextPARS, a novel Illumina-based implementation of in vitro parallel probing of RNA structures. Our approach achieves comparable accuracy to previous implementations, while enabling higher throughput and sample multiplexing. PMID:29358234
Study on Fuzzy Adaptive Fractional Order PIλDμ Control for Maglev Guiding System
NASA Astrophysics Data System (ADS)
Hu, Qing; Hu, Yuwei
The mathematical model of the linear elevator maglev guiding system is analyzed in this paper. For the linear elevator needs strong stability and robustness to run, the integer order PID was expanded to the fractional order, in order to improve the steady state precision, rapidity and robustness of the system, enhance the accuracy of the parameter in fractional order PIλDμ controller, the fuzzy control is combined with the fractional order PIλDμ control, using the fuzzy logic achieves the parameters online adjustment. The simulations reveal that the system has faster response speed, higher tracking precision, and has stronger robustness to the disturbance.
The along track scanning radiometer for ERS-1 - Scan geometry and data simulation
NASA Astrophysics Data System (ADS)
Prata, A. J. Fred; Cechet, Robert P.; Barton, Ian J.; Llewellyn-Jones, David T.
1990-01-01
The first European remote-sensing satellite (ERS-1), due to be launched in 1990, will carry the along track scanning radiometer (ATSR), which has been specifically designed to give accurate satellite measurements of sea surface temperature (SST). Details of the novel scanning technique used by the ATSR are given, and data from the NOAA-9 AVHRR instrument are used to simulate raw ATSR imagery. Because of the high precision of the onboard blackbodies, the active cooling of the detectors, 12-b digitization, and dual-angle capability, the ATSR promises to achieve higher-accuracy satellite-derived SSTs than are currently available.
Fulford, Janice M.; Clayton, Christopher S.
2015-10-09
The calibration device and proposed method were used to calibrate a sample of in-service USGS steel and electric groundwater tapes. The sample of in-service groundwater steel tapes were in relatively good condition. All steel tapes, except one, were accurate to ±0.01 ft per 100 ft over their entire length. One steel tape, which had obvious damage in the first hundred feet, was marginally outside the accuracy of ±0.01 ft per 100 ft by 0.001 ft. The sample of in-service groundwater-level electric tapes were in a range of conditions—from like new, with cosmetic damage, to nonfunctional. The in-service electric tapes did not meet the USGS accuracy recommendation of ±0.01 ft. In-service electric tapes, except for the nonfunctional tape, were accurate to about ±0.03 ft per 100 ft. A comparison of new with in-service electric tapes found that steel-core electric tapes maintained their length and accuracy better than electric tapes without a steel core. The in-service steel tapes could be used as is and achieve USGS accuracy recommendations for groundwater-level measurements. The in-service electric tapes require tape corrections to achieve USGS accuracy recommendations for groundwater-level measurement.
Optimizing Uas Image Acquisition and Geo-Registration for Precision Agriculture
NASA Astrophysics Data System (ADS)
Hearst, A. A.; Cherkauer, K. A.; Rainey, K. M.
2014-12-01
Unmanned Aircraft Systems (UASs) can acquire imagery of crop fields in various spectral bands, including the visible, near-infrared, and thermal portions of the spectrum. By combining techniques of computer vision, photogrammetry, and remote sensing, these images can be stitched into precise, geo-registered maps, which may have applications in precision agriculture and other industries. However, the utility of these maps will depend on their positional accuracy. Therefore, it is important to quantify positional accuracy and consider the tradeoffs between accuracy, field site setup, and the computational requirements for data processing and analysis. This will enable planning of data acquisition and processing to obtain the required accuracy for a given project. This study focuses on developing and evaluating methods for geo-registration of raw aerial frame photos acquired by a small fixed-wing UAS. This includes visual, multispectral, and thermal imagery at 3, 6, and 14 cm/pix resolutions, respectively. The study area is 10 hectares of soybean fields at the Agronomy Center for Research and Education (ACRE) at Purdue University. The dataset consists of imagery from 6 separate days of flights (surveys) and supporting ground measurements. The Direct Sensor Orientation (DiSO) and Integrated Sensor Orientation (InSO) methods for geo-registration are tested using 16 Ground Control Points (GCPs). Subsets of these GCPs are used to test for the effects of different numbers and spatial configurations of GCPs on positional accuracy. The horizontal and vertical Root Mean Squared Error (RMSE) is used as the primary metric of positional accuracy. Preliminary results from 1 of the 6 surveys show that the DiSO method (0 GCPs used) achieved an RMSE in the X, Y, and Z direction of 2.46 m, 1.04 m, and 1.91 m, respectively. InSO using 5 GCPs achieved an RMSE of 0.17 m, 0.13 m, and 0.44 m. InSO using 10 GCPs achieved an RMSE of 0.10 m, 0.09 m, and 0.12 m. Further analysis will identify the optimal spatial configuration and number of GCPs needed to achieve sub-meter RMSE, which is considered a benchmark for precision agriculture purposes. Additional benefits of superior positional accuracy will also be explored.
Translation position determination in ptychographic coherent diffraction imaging.
Zhang, Fucai; Peterson, Isaac; Vila-Comamala, Joan; Diaz, Ana; Berenguer, Felisa; Bean, Richard; Chen, Bo; Menzel, Andreas; Robinson, Ian K; Rodenburg, John M
2013-06-03
Accurate knowledge of translation positions is essential in ptychography to achieve a good image quality and the diffraction limited resolution. We propose a method to retrieve and correct position errors during the image reconstruction iterations. Sub-pixel position accuracy after refinement is shown to be achievable within several tens of iterations. Simulation and experimental results for both optical and X-ray wavelengths are given. The method improves both the quality of the retrieved object image and relaxes the position accuracy requirement while acquiring the diffraction patterns.
Photon caliper to achieve submillimeter positioning accuracy
NASA Astrophysics Data System (ADS)
Gallagher, Kyle J.; Wong, Jennifer; Zhang, Junan
2017-09-01
The purpose of this study was to demonstrate the feasibility of using a commercial two-dimensional (2D) detector array with an inherent detector spacing of 5 mm to achieve submillimeter accuracy in localizing the radiation isocenter. This was accomplished by delivering the Vernier ‘dose’ caliper to a 2D detector array where the nominal scale was the 2D detector array and the non-nominal Vernier scale was the radiation dose strips produced by the high-definition (HD) multileaf collimators (MLCs) of the linear accelerator. Because the HD MLC sequence was similar to the picket fence test, we called this procedure the Vernier picket fence (VPF) test. We confirmed the accuracy of the VPF test by offsetting the HD MLC bank by known increments and comparing the known offset with the VPF test result. The VPF test was able to determine the known offset within 0.02 mm. We also cross-validated the accuracy of the VPF test in an evaluation of couch hysteresis. This was done by using both the VPF test and the ExacTrac optical tracking system to evaluate the couch position. We showed that the VPF test was in agreement with the ExacTrac optical tracking system within a root-mean-square value of 0.07 mm for both the lateral and longitudinal directions. In conclusion, we demonstrated the VPF test can determine the offset between a 2D detector array and the radiation isocenter with submillimeter accuracy. Until now, no method to locate the radiation isocenter using a 2D detector array has been able to achieve such accuracy.
Spatial Lattice Modulation for MIMO Systems
NASA Astrophysics Data System (ADS)
Choi, Jiwook; Nam, Yunseo; Lee, Namyoon
2018-06-01
This paper proposes spatial lattice modulation (SLM), a spatial modulation method for multipleinput-multiple-output (MIMO) systems. The key idea of SLM is to jointly exploit spatial, in-phase, and quadrature dimensions to modulate information bits into a multi-dimensional signal set that consists oflattice points. One major finding is that SLM achieves a higher spectral efficiency than the existing spatial modulation and spatial multiplexing methods for the MIMO channel under the constraint ofM-ary pulseamplitude-modulation (PAM) input signaling per dimension. In particular, it is shown that when the SLM signal set is constructed by using dense lattices, a significant signal-to-noise-ratio (SNR) gain, i.e., a nominal coding gain, is attainable compared to the existing methods. In addition, closed-form expressions for both the average mutual information and average symbol-vector-error-probability (ASVEP) of generic SLM are derived under Rayleigh-fading environments. To reduce detection complexity, a low-complexity detection method for SLM, which is referred to as lattice sphere decoding, is developed by exploiting lattice theory. Simulation results verify the accuracy of the conducted analysis and demonstrate that the proposed SLM techniques achieve higher average mutual information and lower ASVEP than do existing methods.
Generating clustered scale-free networks using Poisson based localization of edges
NASA Astrophysics Data System (ADS)
Türker, İlker
2018-05-01
We introduce a variety of network models using a Poisson-based edge localization strategy, which result in clustered scale-free topologies. We first verify the success of our localization strategy by realizing a variant of the well-known Watts-Strogatz model with an inverse approach, implying a small-world regime of rewiring from a random network through a regular one. We then apply the rewiring strategy to a pure Barabasi-Albert model and successfully achieve a small-world regime, with a limited capacity of scale-free property. To imitate the high clustering property of scale-free networks with higher accuracy, we adapted the Poisson-based wiring strategy to a growing network with the ingredients of both preferential attachment and local connectivity. To achieve the collocation of these properties, we used a routine of flattening the edges array, sorting it, and applying a mixing procedure to assemble both global connections with preferential attachment and local clusters. As a result, we achieved clustered scale-free networks with a computational fashion, diverging from the recent studies by following a simple but efficient approach.
Reichert, Christoph; Dürschmid, Stefan; Heinze, Hans-Jochen; Hinrichs, Hermann
2017-01-01
In brain-computer interface (BCI) applications the detection of neural processing as revealed by event-related potentials (ERPs) is a frequently used approach to regain communication for people unable to interact through any peripheral muscle control. However, the commonly used electroencephalography (EEG) provides signals of low signal-to-noise ratio, making the systems slow and inaccurate. As an alternative noninvasive recording technique, the magnetoencephalography (MEG) could provide more advantageous electrophysiological signals due to a higher number of sensors and the magnetic fields not being influenced by volume conduction. We investigated whether MEG provides higher accuracy in detecting event-related fields (ERFs) compared to detecting ERPs in simultaneously recorded EEG, both evoked by a covert attention task, and whether a combination of the modalities is advantageous. In our approach, a detection algorithm based on spatial filtering is used to identify ERP/ERF components in a data-driven manner. We found that MEG achieves higher decoding accuracy (DA) compared to EEG and that the combination of both further improves the performance significantly. However, MEG data showed poor performance in cross-subject classification, indicating that the algorithm's ability for transfer learning across subjects is better in EEG. Here we show that BCI control by covert attention is feasible with EEG and MEG using a data-driven spatial filter approach with a clear advantage of the MEG regarding DA but with a better transfer learning in EEG. PMID:29085279
Du, Hua Qiang; Sun, Xiao Yan; Han, Ning; Mao, Fang Jie
2017-10-01
By synergistically using the object-based image analysis (OBIA) and the classification and regression tree (CART) methods, the distribution information, the indexes (including diameter at breast, tree height, and crown closure), and the aboveground carbon storage (AGC) of moso bamboo forest in Shanchuan Town, Anji County, Zhejiang Province were investigated. The results showed that the moso bamboo forest could be accurately delineated by integrating the multi-scale ima ge segmentation in OBIA technique and CART, which connected the image objects at various scales, with a pretty good producer's accuracy of 89.1%. The investigation of indexes estimated by regression tree model that was constructed based on the features extracted from the image objects reached normal or better accuracy, in which the crown closure model archived the best estimating accuracy of 67.9%. The estimating accuracy of diameter at breast and tree height was relatively low, which was consistent with conclusion that estimating diameter at breast and tree height using optical remote sensing could not achieve satisfactory results. Estimation of AGC reached relatively high accuracy, and accuracy of the region of high value achieved above 80%.
Verification of Software: The Textbook and Real Problems
NASA Technical Reports Server (NTRS)
Carlson, Jan-Renee
2006-01-01
The process of verification, or determining the order of accuracy of computational codes, can be problematic when working with large, legacy computational methods that have been used extensively in industry or government. Verification does not ensure that the computer program is producing a physically correct solution, it ensures merely that the observed order of accuracy of solutions are the same as the theoretical order of accuracy. The Method of Manufactured Solutions (MMS) is one of several ways for determining the order of accuracy. MMS is used to verify a series of computer codes progressing in sophistication from "textbook" to "real life" applications. The degree of numerical precision in the computations considerably influenced the range of mesh density to achieve the theoretical order of accuracy even for 1-D problems. The choice of manufactured solutions and mesh form shifted the observed order in specific areas but not in general. Solution residual (iterative) convergence was not always achieved for 2-D Euler manufactured solutions. L(sub 2,norm) convergence differed variable to variable therefore an observed order of accuracy could not be determined conclusively in all cases, the cause of which is currently under investigation.
Beck, William; Kabiche, Sofiane; Balde, Issa-Bella; Carret, Sandra; Fontan, Jean-Eudes; Cisternino, Salvatore; Schlatter, Joël
2016-12-01
To assess the stability of pharmaceutical suxamethonium (succinylcholine) solution for injection by validated stability-indicating chromatographic method in vials stored at room temperature. The chromatographic assay was achieved by using a detector wavelength set at 218 nm, a C18 column, and an isocratic mobile phase (100% of water) at a flow rate of 0.6 mL/min for 5 minutes. The method was validated according to the International Conference on Harmonization guidelines with respect to the stability-indicating capacity of the method including linearity, limits of detection and quantitation, precision, accuracy, system suitability, robustness, and forced degradations. Linearity was achieved in the concentration range of 5 to 40 mg/mL with a correlation coefficient higher than 0.999. The limits of detection and quantification were 0.8 and 0.9 mg/mL, respectively. The percentage relative standard deviation for intraday (1.3-1.7) and interday (0.1-2.0) precision was found to be less than 2.1%. Accuracy was assessed by the recovery test of suxamethonium from solution for injection (99.5%-101.2%). Storage of suxamethonium solution for injection vials at ambient temperature (22°C-26°C) for 17 days demonstrated that at least 95% of original suxamethonium concentration remained stable. Copyright © 2016 Elsevier Inc. All rights reserved.
A fuzzy neural network for intelligent data processing
NASA Astrophysics Data System (ADS)
Xie, Wei; Chu, Feng; Wang, Lipo; Lim, Eng Thiam
2005-03-01
In this paper, we describe an incrementally generated fuzzy neural network (FNN) for intelligent data processing. This FNN combines the features of initial fuzzy model self-generation, fast input selection, partition validation, parameter optimization and rule-base simplification. A small FNN is created from scratch -- there is no need to specify the initial network architecture, initial membership functions, or initial weights. Fuzzy IF-THEN rules are constantly combined and pruned to minimize the size of the network while maintaining accuracy; irrelevant inputs are detected and deleted, and membership functions and network weights are trained with a gradient descent algorithm, i.e., error backpropagation. Experimental studies on synthesized data sets demonstrate that the proposed Fuzzy Neural Network is able to achieve accuracy comparable to or higher than both a feedforward crisp neural network, i.e., NeuroRule, and a decision tree, i.e., C4.5, with more compact rule bases for most of the data sets used in our experiments. The FNN has achieved outstanding results for cancer classification based on microarray data. The excellent classification result for Small Round Blue Cell Tumors (SRBCTs) data set is shown. Compared with other published methods, we have used a much fewer number of genes for perfect classification, which will help researchers directly focus their attention on some specific genes and may lead to discovery of deep reasons of the development of cancers and discovery of drugs.
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-01-01
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430
Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua
2013-01-01
Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs) which are hard to diagnose using the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistical regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. All in all, it was found that using curvelet-based textural features after dimensionality reduction and using clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-05-11
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides.
NASA Astrophysics Data System (ADS)
Tao, C.-S.; Chen, S.-W.; Li, Y.-Z.; Xiao, S.-P.
2017-09-01
Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR) data utilization. Rollinvariant polarimetric features such as H / Ani / α / Span are commonly adopted in PolSAR land cover classification. However, target orientation diversity effect makes PolSAR images understanding and interpretation difficult. Only using the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets' scattering mechanisms and limit the followed classification accuracy. To address this problem, this work firstly focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of polarimetric coherence pattern. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix. Sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the rotation domain for complete interpretation. A visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. Then, a classification scheme is developed combing both the selected new hidden polarimetric features in rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that compared with the conventional classification scheme which only uses the roll-invariant polarimetric features, the proposed classification scheme achieves both higher classification accuracy and better robustness. For AIRSAR data, the overall classification accuracy with the proposed classification scheme is 94.91 %, while that with the conventional classification scheme is 93.70 %. Moreover, for multi-temporal UAVSAR data, the averaged overall classification accuracy with the proposed classification scheme is up to 97.08 %, which is much higher than the 87.79 % from the conventional classification scheme. Furthermore, for multitemporal PolSAR data, the proposed classification scheme can achieve better robustness. The comparison studies also clearly demonstrate that mining and utilization of hidden polarimetric features and information in the rotation domain can gain the added benefits for PolSAR land cover classification and provide a new vision for PolSAR image interpretation and application.
A range-based predictive localization algorithm for WSID networks
NASA Astrophysics Data System (ADS)
Liu, Yuan; Chen, Junjie; Li, Gang
2017-11-01
Most studies on localization algorithms are conducted on the sensor networks with densely distributed nodes. However, the non-localizable problems are prone to occur in the network with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for the wireless sensor networks syncretizing the RFID (WSID) networks. The Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test algorithm. In addition, collaborative localization schemes are introduced to locate the target in the non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for the network with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.
Radio interferometric measurements for accurate planetary orbiter navigation
NASA Technical Reports Server (NTRS)
Poole, S. R.; Ananda, M.; Hildebrand, C. E.
1979-01-01
The use of narrowband delta-VLBI to achieve accurate orbit determination is presented by viewing a spacecraft from widely separated stations followed by viewing a nearby quasar from the same stations. Current analysis is examined that establishes the orbit determination accuracy achieved with data arcs spanning up to 3.5 d. Strategies for improving prediction accuracy are given, and the performance of delta-VLBI is compared with conventional radiometric tracking data. It is found that accuracy 'within the fit' is on the order of 0.5 km for data arcs having delta-VLBI on the ends of the arcs and for arc lengths varying from one baseline to 3.5 d. The technique is discussed with reference to the proposed Venus Orbiting Imaging Radar mission.
Factors involved in making post-performance judgments in mathematics problem-solving.
García Fernández, Trinidad; Kroesbergen, Evelyn; Rodríguez Pérez, Celestino; González-Castro, Paloma; González-Pienda, Julio A
2015-01-01
This study examines the impact of executive functions, affective-motivational variables related to mathematics, mathematics achievement and task characteristics on fifth and sixth graders’ calibration accuracy after completing two mathematical problems. A sample of 188 students took part in the study. They were divided into two groups as function of their judgment accuracy after completing the two tasks (accurate= 79, inaccurate= 109). Differences between these groups were examined. The discriminative value of these variables to predict group membership was analyzed, as well as the effect of age, gender, and grade level. The results indicated that accurate students showed better levels of executive functioning, and more positive feelings, beliefs, and motivation related to mathematics. They also spent more time on the tasks. Mathematics achievement, perceived usefulness of mathematics, and time spent on Task 1 significantly predicted group membership, classifying 71.3% of the sample correctly. These results support the relationship between academic achievement and calibration accuracy, suggesting the need to consider a wide range of factors when explaining performance judgments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braak, Sicco J., E-mail: sjbraak@gmail.com; Zuurmond, Kirsten, E-mail: kirsten.zuurmond@philips.com; Aerts, Hans C. J., E-mail: hans.cj.aerts@philips.com
2013-08-01
ObjectiveTo investigate the accuracy, procedure time, fluoroscopy time, and dose area product (DAP) of needle placement during percutaneous vertebroplasty (PVP) using cone-beam computed tomography (CBCT) guidance versus fluoroscopy.Materials and MethodsOn 4 spine phantoms with 11 vertebrae (Th7-L5), 4 interventional radiologists (2 experienced with CBCT guidance and two inexperienced) punctured all vertebrae in a bipedicular fashion. Each side was randomization to either CBCT guidance or fluoroscopy. CBCT guidance is a sophisticated needle guidance technique using CBCT, navigation software, and real-time fluoroscopy. The placement of the needle had to be to a specific target point. After the procedure, CBCT was performed tomore » determine the accuracy, procedure time, fluoroscopy time, and DAP. Analysis of the difference between methods and experience level was performed.ResultsMean accuracy using CBCT guidance (2.61 mm) was significantly better compared with fluoroscopy (5.86 mm) (p < 0.0001). Procedure time was in favor of fluoroscopy (7.39 vs. 10.13 min; p = 0.001). Fluoroscopy time during CBCT guidance was lower, but this difference is not significant (71.3 vs. 95.8 s; p = 0.056). DAP values for CBCT guidance and fluoroscopy were 514 and 174 mGy cm{sup 2}, respectively (p < 0.0001). There was a significant difference in favor of experienced CBCT guidance users regarding accuracy for both methods, procedure time of CBCT guidance, and added DAP values for fluoroscopy.ConclusionCBCT guidance allows users to perform PVP more accurately at the cost of higher patient dose and longer procedure time. Because procedural complications (e.g., cement leakage) are related to the accuracy of the needle placement, improvements in accuracy are clinically relevant. Training in CBCT guidance is essential to achieve greater accuracy and decrease procedure time/dose values.« less
Evaluation of space SAR as a land-cover classification
NASA Technical Reports Server (NTRS)
Brisco, B.; Ulaby, F. T.; Williams, T. H. L.
1985-01-01
The multidimensional approach to the mapping of land cover, crops, and forests is reported. Dimensionality is achieved by using data from sensors such as LANDSAT to augment Seasat and Shuttle Image Radar (SIR) data, using different image features such as tone and texture, and acquiring multidate data. Seasat, Shuttle Imaging Radar (SIR-A), and LANDSAT data are used both individually and in combination to map land cover in Oklahoma. The results indicates that radar is the best single sensor (72% accuracy) and produces the best sensor combination (97.5% accuracy) for discriminating among five land cover categories. Multidate Seasat data and a single data of LANDSAT coverage are then used in a crop classification study of western Kansas. The highest accuracy for a single channel is achieved using a Seasat scene, which produces a classification accuracy of 67%. Classification accuracy increases to approximately 75% when either a multidate Seasat combination or LANDSAT data in a multisensor combination is used. The tonal and textural elements of SIR-A data are then used both alone and in combination to classify forests into five categories.
Problem representation and mathematical problem solving of students of varying math ability.
Krawec, Jennifer L
2014-01-01
The purpose of this study was to examine differences in math problem solving among students with learning disabilities (LD, n = 25), low-achieving students (LA, n = 30), and average-achieving students (AA, n = 29). The primary interest was to analyze the processes students use to translate and integrate problem information while solving problems. Paraphrasing, visual representation, and problem-solving accuracy were measured in eighth grade students using a researcher-modified version of the Mathematical Processing Instrument. Results indicated that both students with LD and LA students struggled with processing but that students with LD were significantly weaker than their LA peers in paraphrasing relevant information. Paraphrasing and visual representation accuracy each accounted for a statistically significant amount of variance in problem-solving accuracy. Finally, the effect of visual representation of relevant information on problem-solving accuracy was dependent on ability; specifically, for students with LD, generating accurate visual representations was more strongly related to problem-solving accuracy than for AA students. Implications for instruction for students with and without LD are discussed.
Muscular and Aerobic Fitness, Working Memory, and Academic Achievement in Children.
Kao, Shih-Chun; Westfall, Daniel R; Parks, Andrew C; Pontifex, Matthew B; Hillman, Charles H
2017-03-01
This study investigated the relationship between aerobic and muscular fitness with working memory and academic achievement in preadolescent children. Seventy-nine 9- to 11-yr-old children completed an aerobic fitness assessment using a graded exercise test; a muscular fitness assessment consisting of upper body, lower body, and core exercises; a serial n-back task to assess working memory; and an academic achievement test of mathematics and reading. Hierarchical regression analyses indicated that after controlling for demographic variables (age, sex, grade, IQ, socioeconomic status), aerobic fitness was associated with greater response accuracy and d' in the 2-back condition and increased mathematic performance in algebraic functions. Muscular fitness was associated with increased response accuracy and d', and longer reaction time in the 2-back condition. Further, the associations of muscular fitness with response accuracy and d' in the 2-back condition were independent of aerobic fitness. The current findings suggest the differential relationships between the aerobic and the muscular aspects of physical fitness with working memory and academic achievement. With the majority of research focusing on childhood health benefits of aerobic fitness, this study suggests the importance of muscular fitness to cognitive health during preadolescence.
Point-of-care ultrasound versus auscultation in determining the position of double-lumen tube
Hu, Wei-Cai; Xu, Lei; Zhang, Quan; Wei, Li; Zhang, Wei
2018-01-01
Abstract This study was designed to assess the accuracy of point-of-care ultrasound in determining the position of double-lumen tubes (DLTs). A total of 103 patients who required DLT intubation were enrolled into the study. After DLTs were tracheal intubated in the supine position, an auscultation researcher and ultrasound researcher were sequentially invited in the operating room to conduct their evaluation of the DLT. After the end of their evaluation, fiberscope researchers (FRs) were invited in the operating room to evaluate the position of DLT using a fiberscope. After the patients were changed to the lateral position, the same evaluation process was repeated. These 3 researchers were blind to each other when they made their conclusions. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were obtained by statistical analysis. When left DLTs (LDLTs) were used, the accuracy of ultrasound (84.2% [72.1%, 92.5%]) was higher than the accuracy of auscultation (59.7% [45.8%, 72.4%]) (P < .01). When right DLTs (RDLTs) were used, the accuracy of ultrasound (89.1% [76.4%, 96.4%]) was higher than the accuracy of auscultation (67.4% [52.0%, 80.5%]) (P < .01). When LDLTs were used in the lateral position, the accuracy of ultrasound (75.4% [62.2%, 85.9%]) was higher than the accuracy of auscultation (54.4% [40.7%, 67.6%]) (P < .05). When RDLT were used, the accuracy of ultrasound (73.9% [58.9%, 85.7%]) was higher than the accuracy of auscultation (47.8% [32.9%, 63.1%]) (P < .05). Assessment via point-of-care ultrasound is superior to auscultation in determining the position of DLTs. PMID:29595696
Point-of-care ultrasound versus auscultation in determining the position of double-lumen tube.
Hu, Wei-Cai; Xu, Lei; Zhang, Quan; Wei, Li; Zhang, Wei
2018-03-01
This study was designed to assess the accuracy of point-of-care ultrasound in determining the position of double-lumen tubes (DLTs).A total of 103 patients who required DLT intubation were enrolled into the study. After DLTs were tracheal intubated in the supine position, an auscultation researcher and ultrasound researcher were sequentially invited in the operating room to conduct their evaluation of the DLT. After the end of their evaluation, fiberscope researchers (FRs) were invited in the operating room to evaluate the position of DLT using a fiberscope. After the patients were changed to the lateral position, the same evaluation process was repeated. These 3 researchers were blind to each other when they made their conclusions. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were obtained by statistical analysis.When left DLTs (LDLTs) were used, the accuracy of ultrasound (84.2% [72.1%, 92.5%]) was higher than the accuracy of auscultation (59.7% [45.8%, 72.4%]) (P < .01). When right DLTs (RDLTs) were used, the accuracy of ultrasound (89.1% [76.4%, 96.4%]) was higher than the accuracy of auscultation (67.4% [52.0%, 80.5%]) (P < .01). When LDLTs were used in the lateral position, the accuracy of ultrasound (75.4% [62.2%, 85.9%]) was higher than the accuracy of auscultation (54.4% [40.7%, 67.6%]) (P < .05). When RDLT were used, the accuracy of ultrasound (73.9% [58.9%, 85.7%]) was higher than the accuracy of auscultation (47.8% [32.9%, 63.1%]) (P < .05).Assessment via point-of-care ultrasound is superior to auscultation in determining the position of DLTs.
Neural Imaging Using Single-Photon Avalanche Diodes
Karami, Mohammad Azim; Ansarian, Misagh
2017-01-01
Introduction: This paper analyses the ability of single-photon avalanche diodes (SPADs) for neural imaging. The current trend in the production of SPADs moves toward the minimum dark count rate (DCR) and maximum photon detection probability (PDP). Moreover, the jitter response which is the main measurement characteristic for the timing uncertainty is progressing. Methods: The neural imaging process using SPADs can be performed by means of florescence lifetime imaging (FLIM), time correlated single-photon counting (TCSPC), positron emission tomography (PET), and single-photon emission computed tomography (SPECT). Results: This trend will result in more precise neural imaging cameras. While achieving low DCR SPADs is difficult in deep submicron technologies because of using higher doping profiles, higher PDPs are reported in green and blue part of light. Furthermore, the number of pixels integrated in the same chip is increasing with the technology progress which can result in the higher resolution of imaging. Conclusion: This study proposes implemented SPADs in Deep-submicron technologies to be used in neural imaging cameras, due to the small size pixels and higher timing accuracies. PMID:28446946
Highly Accurate and Precise Infrared Transition Frequencies of the H_3^+ Cation
NASA Astrophysics Data System (ADS)
Perry, Adam J.; Markus, Charles R.; Hodges, James N.; Kocheril, G. Stephen; McCall, Benjamin J.
2016-06-01
Calculation of ab initio potential energy surfaces for molecules to high accuracy is only manageable for a handful of molecular systems. Among them is the simplest polyatomic molecule, the H_3^+ cation. In order to achieve a high degree of accuracy (<1 wn) corrections must be made to the to the traditional Born-Oppenheimer approximation that take into account not only adiabatic and non-adiabatic couplings, but quantum electrodynamic corrections as well. For the lowest rovibrational levels the agreement between theory and experiment is approaching 0.001 wn, whereas the agreement is on the order of 0.01 - 0.1 wn for higher levels which are closely rivaling the uncertainties on the experimental data. As method development for calculating these various corrections progresses it becomes necessary for the uncertainties on the experimental data to be improved in order to properly benchmark the calculations. Previously we have measured 20 rovibrational transitions of H_3^+ with MHz-level precision, all of which have arisen from low lying rotational levels. Here we present new measurements of rovibrational transitions arising from higher rotational and vibrational levels. These transitions not only allow for probing higher energies on the potential energy surface, but through the use of combination differences, will ultimately lead to prediction of the "forbidden" rotational transitions with MHz-level accuracy. L.G. Diniz, J.R. Mohallem, A. Alijah, M. Pavanello, L. Adamowicz, O.L. Polyansky, J. Tennyson Phys. Rev. A (2013), 88, 032506 O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R.I. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky, A.G. Császár Phil. Trans. R. Soc. A (2012), 370, 5014 J.N. Hodges, A.J. Perry, P.A. Jenkins II, B.M. Siller, B.J. McCall J. Chem. Phys. (2013), 139, 164201 A.J. Perry, J.N. Hodges, C.R. Markus, G.S. Kocheril, B.J. McCall J. Molec. Spectrosc. (2015), 317, 71-73.
Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.
Shelley, M J; Tao, L
2001-01-01
To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Delta(t) = 0.5 x 10(-3) seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10(-5) seconds or 10(-9) seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
Prediction of Potential Hit Song and Musical Genre Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Monterola, Christopher; Abundo, Cheryl; Tugaff, Jeric; Venturina, Lorcel Ericka
Accurately quantifying the goodness of music based on the seemingly subjective taste of the public is a multi-million industry. Recording companies can make sound decisions on which songs or artists to prioritize if accurate forecasting is achieved. We extract 56 single-valued musical features (e.g. pitch and tempo) from 380 Original Pilipino Music (OPM) songs (190 are hit songs) released from 2004 to 2006. Based on an effect size criterion which measures a variable's discriminating power, the 20 highest ranked features are fed to a classifier tasked to predict hit songs. We show that regardless of musical genre, a trained feed-forward neural network (NN) can predict potential hit songs with an average accuracy of ΦNN = 81%. The accuracy is about +20% higher than those of standard classifiers such as linear discriminant analysis (LDA, ΦLDA = 61%) and classification and regression trees (CART, ΦCART = 57%). Both LDA and CART are above the proportional chance criterion (PCC, ΦPCC = 50%) but are slightly below the suggested acceptable classifier requirement of 1.25*ΦPCC = 63%. Utilizing a similar procedure, we demonstrate that different genres (ballad, alternative rock or rock) of OPM songs can be automatically classified with near perfect accuracy using LDA or NN but only around 77% using CART.
Muscle categorization using PDF estimation and Naive Bayes classification.
Adel, Tameem M; Smith, Benn E; Stashuk, Daniel W
2012-01-01
The structure of motor unit potentials (MUPs) and their times of occurrence provide information about the motor units (MUs) that created them. As such, electromyographic (EMG) data can be used to categorize muscles as normal or suffering from a neuromuscular disease. Using pattern discovery (PD) allows clinicians to understand the rationale underlying a certain muscle characterization; i.e. it is transparent. Discretization is required in PD, which leads to some loss in accuracy. In this work, characterization techniques that are based on estimating probability density functions (PDFs) for each muscle category are implemented. Characterization probabilities of each motor unit potential train (MUPT) are obtained from these PDFs and then Bayes rule is used to aggregate the MUPT characterization probabilities to calculate muscle level probabilities. Even though this technique is not as transparent as PD, its accuracy is higher than the discrete PD. Ultimately, the goal is to use a technique that is based on both PDFs and PD and make it as transparent and as efficient as possible, but first it was necessary to thoroughly assess how accurate a fully continuous approach can be. Using gaussian PDF estimation achieved improvements in muscle categorization accuracy over PD and further improvements resulted from using feature value histograms to choose more representative PDFs; for instance, using log-normal distribution to represent skewed histograms.
Building machine learning force fields for nanoclusters
NASA Astrophysics Data System (ADS)
Zeni, Claudio; Rossi, Kevin; Glielmo, Aldo; Fekete, Ádám; Gaston, Nicola; Baletto, Francesca; De Vita, Alessandro
2018-06-01
We assess Gaussian process (GP) regression as a technique to model interatomic forces in metal nanoclusters by analyzing the performance of 2-body, 3-body, and many-body kernel functions on a set of 19-atom Ni cluster structures. We find that 2-body GP kernels fail to provide faithful force estimates, despite succeeding in bulk Ni systems. However, both 3- and many-body kernels predict forces within an ˜0.1 eV/Å average error even for small training datasets and achieve high accuracy even on out-of-sample, high temperature structures. While training and testing on the same structure always provide satisfactory accuracy, cross-testing on dissimilar structures leads to higher prediction errors, posing an extrapolation problem. This can be cured using heterogeneous training on databases that contain more than one structure, which results in a good trade-off between versatility and overall accuracy. Starting from a 3-body kernel trained this way, we build an efficient non-parametric 3-body force field that allows accurate prediction of structural properties at finite temperatures, following a newly developed scheme [A. Glielmo et al., Phys. Rev. B 95, 214302 (2017)]. We use this to assess the thermal stability of Ni19 nanoclusters at a fractional cost of full ab initio calculations.
Improving orbit prediction accuracy through supervised machine learning
NASA Astrophysics Data System (ADS)
Peng, Hao; Bai, Xiaoli
2018-05-01
Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on a RSO can be applied to other RSOs that share some common features.
Weighted statistical parameters for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rimoldini, Lorenzo
2014-01-01
Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
Hassfeld, S; Mühling, J
2000-12-01
The aim of an intraoperative instrument navigation system is to support the surgeon in the localization of anatomical regions and to guide the use of surgical instruments. An overview of technical principles and literature reports on various navigation systems is provided here. The navigation accuracy (tested on a plastic phantom under simulated operating room conditions) of the mechanical Viewing Wand system and the optical SPOCS system amounts to 1 to 3 mm for computerized tomography (CT) data, with a significant inverse dependence on the layer thickness. The values for magnetic resonance tomography (MRT) data are significantly higher. In regard to the choice of registration points, a statistically inverse dependence exists between the number of points and the distance between the points. During the time period between autumn 1993 and mid-1999, more than 120 clinical applications were performed. The intraoperative accuracy was in the range of < or = 3 mm. Registering the patient position with preoperatively inserted screw markers achieved accuracy values of < or = 2 mm. The instrument navigation technique has proved to be very advantageous for the spatial orientation of the surgeons. The possibility of checking resection borders has opened up new perspectives in tumor surgery. A quality improvement and a reduction of the operational risks as well as a considerable decline in the stress placed on the patient can be expected in the near future due the techniques of computer-assisted surgery.
Accelerated Fractional Ventilation Imaging with Hyperpolarized Gas MRI
Emami, Kiarash; Xu, Yinan; Hamedani, Hooman; Profka, Harrilla; Kadlecek, Stephen; Xin, Yi; Ishii, Masaru; Rizi, Rahim R.
2013-01-01
PURPOSE To investigate the utility of accelerated imaging to enhance multi-breath fractional ventilation (r) measurement accuracy using HP gas MRI. Undersampling shortens the breath-hold time, thereby reducing the O2-induced signal decay and allows subjects to maintain a more physiologically relevant breathing pattern. Additionally it may improve r estimation accuracy by reducing RF destruction of HP gas. METHODS Image acceleration was achieved by using an 8-channel phased array coil. Undersampled image acquisition was simulated in a series of ventilation images and images were reconstructed for various matrix sizes (48–128) using GRAPPA. Parallel accelerated r imaging was also performed on five mechanically ventilated pigs. RESULTS Optimal acceleration factor was fairly invariable (2.0–2.2×) over the range of simulated resolutions. Estimation accuracy progressively improved with higher resolutions (39–51% error reduction). In vivo r values were not significantly different between the two methods: 0.27±0.09, 0.35±0.06, 0.40±0.04 (standard) versus 0.23±0.05, 0.34±0.03, 0.37±0.02 (accelerated); for anterior, medial and posterior slices, respectively, whereas the corresponding vertical r gradients were significant (P < 0.001): 0.021±0.007 (standard) versus 0.019±0.005 (accelerated) [cm−1]. CONCLUSION Quadruple phased array coil simulations resulted in an optimal acceleration factor of ~2× independent of imaging resolution. Results advocate undersampled image acceleration to improve accuracy of fractional ventilation measurement with HP gas MRI. PMID:23400938
Masdrakis, Vasilios G; Legaki, Emilia-Maria; Vaidakis, Nikolaos; Ploumpidis, Dimitrios; Soldatos, Constantin R; Papageorgiou, Charalambos; Papadimitriou, George N; Oulis, Panagiotis
2015-07-01
Increased heartbeat perception accuracy (HBP-accuracy) may contribute to the pathogenesis of Panic Disorder (PD) without or with Agoraphobia (PDA). Extant research suggests that HBP-accuracy is a rather stable individual characteristic, moreover predictive of worse long-term outcome in PD/PDA patients. However, it remains still unexplored whether HBP-accuracy adversely affects patients' short-term outcome after structured cognitive behaviour therapy (CBT) for PD/PDA. To explore the potential association between HBP-accuracy and the short-term outcome of a structured brief-CBT for the acute treatment of PDA. We assessed baseline HBP-accuracy using the "mental tracking" paradigm in 25 consecutive medication-free, CBT-naive PDA patients. Patients then underwent a structured, protocol-based, 8-session CBT by the same therapist. Outcome measures included the number of panic attacks during the past week, the Agoraphobic Cognitions Questionnaire (ACQ), and the Mobility Inventory-Alone subscale (MI-alone). No association emerged between baseline HBP-accuracy and posttreatment changes concerning number of panic attacks. Moreover, higher baseline HBP-accuracy was associated with significantly larger reductions in the scores of the ACQ and the MI-alone scales. Our results suggest that in PDA patients undergoing structured brief-CBT for the acute treatment of their symptoms, higher baseline HBP-accuracy is not associated with worse short-term outcome concerning panic attacks. Furthermore, higher baseline HBP-accuracy may be associated with enhanced therapeutic gains in agoraphobic cognitions and behaviours.
The Enigmatic Cornea and Intraocular Lens Calculations: The LXXIII Edward Jackson Memorial Lecture.
Koch, Douglas D
2016-11-01
To review the progress and challenges in obtaining accurate corneal power measurements for intraocular lens (IOL) calculations. Personal perspective, review of literature, case presentations, and personal data. Through literature review findings, case presentations, and data from the author's center, the types of corneal measurement errors that can occur in IOL calculation are categorized and described, along with discussion of future options to improve accuracy. Advances in IOL calculation technology and formulas have greatly increased the accuracy of IOL calculations. Recent reports suggest that over 90% of normal eyes implanted with IOLs may achieve accuracy to within 0.5 diopter (D) of the refractive target. Though errors in estimation of corneal power can cause IOL calculation errors in eyes with normal corneas, greater difficulties in measuring corneal power are encountered in eyes with diseased, scarred, and postsurgical corneas. For these corneas, problematic issues are quantifying anterior corneal power and measuring posterior corneal power and astigmatism. Results in these eyes are improving, but 2 examples illustrate current limitations: (1) spherical accuracy within 0.5 D is achieved in only 70% of eyes with post-refractive surgery corneas, and (2) astigmatism accuracy within 0.5 D is achieved in only 80% of eyes implanted with toric IOLs. Corneal power measurements are a major source of error in IOL calculations. New corneal imaging technology and IOL calculation formulas have improved outcomes and hold the promise of ongoing progress. Copyright © 2016 Elsevier Inc. All rights reserved.
Optimisation of shape kernel and threshold in image-processing motion analysers.
Pedrocchi, A; Baroni, G; Sada, S; Marcon, E; Pedotti, A; Ferrigno, G
2001-09-01
The aim of the work is to optimise the image processing of a motion analyser. This is to improve accuracy, which is crucial for neurophysiological and rehabilitation applications. A new motion analyser, ELITE-S2, for installation on the International Space Station is described, with the focus on image processing. Important improvements are expected in the hardware of ELITE-S2 compared with ELITE and previous versions (ELITE-S and Kinelite). The core algorithm for marker recognition was based on the current ELITE version, using the cross-correlation technique. This technique was based on the matching of the expected marker shape, the so-called kernel, with image features. Optimisation of the kernel parameters was achieved using a genetic algorithm, taking into account noise rejection and accuracy. Optimisation was achieved by performing tests on six highly precise grids (with marker diameters ranging from 1.5 to 4 mm), representing all allowed marker image sizes, and on a noise image. The results of comparing the optimised kernels and the current ELITE version showed a great improvement in marker recognition accuracy, while noise rejection characteristics were preserved. An average increase in marker co-ordinate accuracy of +22% was achieved, corresponding to a mean accuracy of 0.11 pixel in comparison with 0.14 pixel, measured over all grids. An improvement of +37%, corresponding to an improvement from 0.22 pixel to 0.14 pixel, was observed over the grid with the biggest markers.
Lau, Darryl; Hervey-Jumper, Shawn L; Han, Seunggu J; Berger, Mitchel S
2018-05-01
OBJECTIVE There is ample evidence that extent of resection (EOR) is associated with improved outcomes for glioma surgery. However, it is often difficult to accurately estimate EOR intraoperatively, and surgeon accuracy has yet to be reviewed. In this study, the authors quantitatively assessed the accuracy of intraoperative perception of EOR during awake craniotomy for tumor resection. METHODS A single-surgeon experience of performing awake craniotomies for tumor resection over a 17-year period was examined. Retrospective review of operative reports for quantitative estimation of EOR was recorded. Definitive EOR was based on postoperative MRI. Analysis of accuracy of EOR estimation was examined both as a general outcome (gross-total resection [GTR] or subtotal resection [STR]), and quantitatively (5% within EOR on postoperative MRI). Patient demographics, tumor characteristics, and surgeon experience were examined. The effects of accuracy on motor and language outcomes were assessed. RESULTS A total of 451 patients were included in the study. Overall accuracy of intraoperative perception of whether GTR or STR was achieved was 79.6%, and overall accuracy of quantitative perception of resection (within 5% of postoperative MRI) was 81.4%. There was a significant difference (p = 0.049) in accuracy for gross perception over the 17-year period, with improvement over the later years: 1997-2000 (72.6%), 2001-2004 (78.5%), 2005-2008 (80.7%), and 2009-2013 (84.4%). Similarly, there was a significant improvement (p = 0.015) in accuracy of quantitative perception of EOR over the 17-year period: 1997-2000 (72.2%), 2001-2004 (69.8%), 2005-2008 (84.8%), and 2009-2013 (93.4%). This improvement in accuracy is demonstrated by the significantly higher odds of correctly estimating quantitative EOR in the later years of the series on multivariate logistic regression. Insular tumors were associated with the highest accuracy of gross perception (89.3%; p = 0.034), but lowest accuracy of quantitative perception (61.1% correct; p < 0.001) compared with tumors in other locations. Even after adjusting for surgeon experience, this particular trend for insular tumors remained true. The absence of 1p19q co-deletion was associated with higher quantitative perception accuracy (96.9% vs 81.5%; p = 0.051). Tumor grade, recurrence, diagnosis, and isocitrate dehydrogenase-1 (IDH-1) status were not associated with accurate perception of EOR. Overall, new neurological deficits occurred in 8.4% of cases, and 42.1% of those new neurological deficits persisted after the 3-month follow-up. Correct quantitative perception was associated with lower postoperative motor deficits (2.4%) compared with incorrect perceptions (8.0%; p = 0.029). There were no detectable differences in language outcomes based on perception of EOR. CONCLUSIONS The findings from this study suggest that there is a learning curve associated with the ability to accurately assess intraoperative EOR during glioma surgery, and it may take more than a decade to be truly proficient. Understanding the factors associated with this ability to accurately assess EOR will provide safer surgeries while maximizing tumor resection.
NASA Technical Reports Server (NTRS)
Folkner, W. M.; Border, J. S.; Nandi, S.; Zukor, K. S.
1993-01-01
A new radio metric positioning technique has demonstrated improved orbit determination accuracy for the Magellan and Pioneer Venus Orbiter orbiters. The new technique, known as Same-Beam Interferometry (SBI), is applicable to the positioning of multiple planetary rovers, landers, and orbiters which may simultaneously be observed in the same beamwidth of Earth-based radio antennas. Measurements of carrier phase are differenced between spacecraft and between receiving stations to determine the plane-of-sky components of the separation vector(s) between the spacecraft. The SBI measurements complement the information contained in line-of-sight Doppler measurements, leading to improved orbit determination accuracy. Orbit determination solutions have been obtained for a number of 48-hour data arcs using combinations of Doppler, differenced-Doppler, and SBI data acquired in the spring of 1991. Orbit determination accuracy is assessed by comparing orbit solutions from adjacent data arcs. The orbit solution differences are shown to agree with expected orbit determination uncertainties. The results from this demonstration show that the orbit determination accuracy for Magellan obtained by using Doppler plus SBI data is better than the accuracy achieved using Doppler plus differenced-Doppler by a factor of four and better than the accuracy achieved using only Doppler by a factor of eighteen. The orbit determination accuracy for Pioneer Venus Orbiter using Doppler plus SBI data is better than the accuracy using only Doppler data by 30 percent.
Computer-aided diagnosis system: a Bayesian hybrid classification method.
Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J
2013-10-01
A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is iteratively used to improve the results until a certain accuracy level is achieved, then, the learning process is finished and new classifications can be automatically performed. The method has been applied in two biomedical contexts by following the same cross-validation schemes as in the original studies. The first one refers to cancer diagnosis, leading to an accuracy of 77.35% versus 66.37%, originally obtained. The second one considers the diagnosis of pathologies of the vertebral column. The original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1% in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases. By using a supervised framework the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices
NASA Astrophysics Data System (ADS)
Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun
2014-05-01
With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations by now are all user-dependent and only rely on accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in predefined manner during the operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulted from frequent sensor sampling and data processing, a high energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, approximately 45 times speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures achieves 93.98% for user-independent case and 96.14% for user-dependent case when subjects hold the device randomly during completing the specified gestures. Although a few percent lower than the conventional best result, it still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly during operating the predefined gestures, which substantially enhances the user experience.
Pakkala, T; Kuusela, L; Ekholm, M; Wenzel, A; Haiter-Neto, F; Kortesniemi, M
2012-01-01
In clinical practice, digital radiographs taken for caries diagnostics are viewed on varying types of displays and usually in relatively high ambient lighting (room illuminance) conditions. Our purpose was to assess the effect of room illuminance and varying display types on caries diagnostic accuracy in digital dental radiographs. Previous studies have shown that the diagnostic accuracy of caries detection is significantly better in reduced lighting conditions. Our hypothesis was that higher display luminance could compensate for this in higher ambient lighting conditions. Extracted human teeth with approximal surfaces clinically ranging from sound to demineralized were radiographed and evaluated by 3 observers who detected carious lesions on 3 different types of displays in 3 different room illuminance settings ranging from low illumination, i.e. what is recommended for diagnostic viewing, to higher illumination levels corresponding to those found in an average dental office. Sectioning and microscopy of the teeth validated the presence or absence of a carious lesion. Sensitivity, specificity and accuracy were calculated for each modality and observer. Differences were estimated by analyzing the binary data assuming the added effects of observer and modality in a generalized linear model. The observers obtained higher sensitivities in lower illuminance settings than in higher illuminance settings. However, this was related to a reduction in specificity, which meant that there was no significant difference in overall accuracy. Contrary to our hypothesis, there were no significant differences between the accuracy of different display types. Therefore, different displays and room illuminance levels did not affect the overall accuracy of radiographic caries detection. Copyright © 2012 S. Karger AG, Basel.
Practical vision based degraded text recognition system
NASA Astrophysics Data System (ADS)
Mohammad, Khader; Agaian, Sos; Saleh, Hani
2011-02-01
Rapid growth and progress in the medical, industrial, security and technology fields means more and more consideration for the use of camera based optical character recognition (OCR) Applying OCR to scanned documents is quite mature, and there are many commercial and research products available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times especially with trained software, and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system that is capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include; images are taken under natural real world conditions, Surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition. Performance for conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system that is capable of performing OCR on camera captured images. The framework allows parameter tuning of the image-processing algorithm based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization and the segmentation algorithm which enables building a custom system that is capable of performing automatic OCR which can be used for different applications. The developed framework system includes: new image enhancement, filtering, and segmentation techniques which enabled higher recognition accuracies, faster processing time, and lower energy consumption, compared with the best state of the art published techniques. The system successfully produced impressive OCR accuracies (90% -to- 93%) using customized systems generated by our development framework in two industrial OCR applications: water bottle label text recognition and concrete slab plate text recognition. The system was also trained for the Arabic language alphabet, and demonstrated extremely high recognition accuracy (99%) for Arabic license name plate text recognition with processing times of 10 seconds. The accuracy and run times of the system were compared to conventional and many states of art methods, the proposed system shows excellent results.
Improved Collaborative Filtering Algorithm via Information Transformation
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Wang, Bing-Hong; Guo, Qiang
In this paper, we propose a spreading activation approach for collaborative filtering (SA-CF). By using the opinion spreading process, the similarity between any users can be obtained. The algorithm has remarkably higher accuracy than the standard collaborative filtering using the Pearson correlation. Furthermore, we introduce a free parameter β to regulate the contributions of objects to user-user correlations. The numerical results indicate that decreasing the influence of popular objects can further improve the algorithmic accuracy and personality. We argue that a better algorithm should simultaneously require less computation and generate higher accuracy. Accordingly, we further propose an algorithm involving only the top-N similar neighbors for each target user, which has both less computational complexity and higher algorithmic accuracy.
Accuracy of genomic breeding values for meat tenderness in Polled Nellore cattle.
Magnabosco, C U; Lopes, F B; Fragoso, R C; Eifert, E C; Valente, B D; Rosa, G J M; Sainz, R D
2016-07-01
Zebu () cattle, mostly of the Nellore breed, comprise more than 80% of the beef cattle in Brazil, given their tolerance of the tropical climate and high resistance to ectoparasites. Despite their advantages for production in tropical environments, zebu cattle tend to produce tougher meat than Bos taurus breeds. Traditional genetic selection to improve meat tenderness is constrained by the difficulty and cost of phenotypic evaluation for meat quality. Therefore, genomic selection may be the best strategy to improve meat quality traits. This study was performed to compare the accuracies of different Bayesian regression models in predicting molecular breeding values for meat tenderness in Polled Nellore cattle. The data set was composed of Warner-Bratzler shear force (WBSF) of longissimus muscle from 205, 141, and 81 animals slaughtered in 2005, 2010, and 2012, respectively, which were selected and mated so as to create extreme segregation for WBSF. The animals were genotyped with either the Illumina BovineHD (HD; 777,000 from 90 samples) chip or the GeneSeek Genomic Profiler (GGP Indicus HD; 77,000 from 337 samples). The quality controls of SNP were Hard-Weinberg Proportion -value ≥ 0.1%, minor allele frequency > 1%, and call rate > 90%. The FImpute program was used for imputation from the GGP Indicus HD chip to the HD chip. The effect of each SNP was estimated using ridge regression, least absolute shrinkage and selection operator (LASSO), Bayes A, Bayes B, and Bayes Cπ methods. Different numbers of SNP were used, with 1, 2, 3, 4, 5, 7, 10, 20, 40, 60, 80, or 100% of the markers preselected based on their significance test (-value from genomewide association studies [GWAS]) or randomly sampled. The prediction accuracy was assessed by the correlation between genomic breeding value and the observed WBSF phenotype, using a leave-one-out cross-validation methodology. The prediction accuracies using all markers were all very similar for all models, ranging from 0.22 (Bayes Cπ) to 0.25 (Bayes B). When preselecting SNP based on GWAS results, the highest correlation (0.27) between WBSF and the genomic breeding value was achieved using the Bayesian LASSO model with 15,030 (3%) markers. Although this study used relatively few animals, the design of the segregating population ensured wide genetic variability for meat tenderness, which was important to achieve acceptable accuracy of genomic prediction. Although all models showed similar levels of prediction accuracy, some small advantages were observed with the Bayes B approach when higher numbers of markers were preselected based on their -values resulting from a GWAS analysis.
Major Depression Detection from EEG Signals Using Kernel Eigen-Filter-Bank Common Spatial Patterns.
Liao, Shih-Cheng; Wu, Chien-Te; Huang, Hao-Chuan; Cheng, Wei-Teng; Liu, Yi-Hung
2017-06-14
Major depressive disorder (MDD) has become a leading contributor to the global burden of disease; however, there are currently no reliable biological markers or physiological measurements for efficiently and effectively dissecting the heterogeneity of MDD. Here we propose a novel method based on scalp electroencephalography (EEG) signals and a robust spectral-spatial EEG feature extractor called kernel eigen-filter-bank common spatial pattern (KEFB-CSP). The KEFB-CSP first filters the multi-channel raw EEG signals into a set of frequency sub-bands covering the range from theta to gamma bands, then spatially transforms the EEG signals of each sub-band from the original sensor space to a new space where the new signals (i.e., CSPs) are optimal for the classification between MDD and healthy controls, and finally applies the kernel principal component analysis (kernel PCA) to transform the vector containing the CSPs from all frequency sub-bands to a lower-dimensional feature vector called KEFB-CSP. Twelve patients with MDD and twelve healthy controls participated in this study, and from each participant we collected 54 resting-state EEGs of 6 s length (5 min and 24 s in total). Our results show that the proposed KEFB-CSP outperforms other EEG features including the powers of EEG frequency bands, and fractal dimension, which had been widely applied in previous EEG-based depression detection studies. The results also reveal that the 8 electrodes from the temporal areas gave higher accuracies than other scalp areas. The KEFB-CSP was able to achieve an average EEG classification accuracy of 81.23% in single-trial analysis when only the 8-electrode EEGs of the temporal area and a support vector machine (SVM) classifier were used. We also designed a voting-based leave-one-participant-out procedure to test the participant-independent individual classification accuracy. The voting-based results show that the mean classification accuracy of about 80% can be achieved by the KEFP-CSP feature and the SVM classifier with only several trials, and this level of accuracy seems to become stable as more trials (i.e., <7 trials) are used. These findings therefore suggest that the proposed method has a great potential for developing an efficient (required only a few 6-s EEG signals from the 8 electrodes over the temporal) and effective (~80% classification accuracy) EEG-based brain-computer interface (BCI) system which may, in the future, help psychiatrists provide individualized and effective treatments for MDD patients.
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald
2016-01-01
The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
Acquiring Research-grade ALSM Data in the Commercial Marketplace
NASA Astrophysics Data System (ADS)
Haugerud, R. A.; Harding, D. J.; Latypov, D.; Martinez, D.; Routh, S.; Ziegler, J.
2003-12-01
The Puget Sound Lidar Consortium, working with TerraPoint, LLC, has procured a large volume of ALSM (topographic lidar) data for scientific research. Research-grade ALSM data can be characterized by their completeness, density, and accuracy. Complete data include-at a minimum-X, Y, Z, time, and classification (ground, vegetation, structure, blunder) for each laser reflection. Off-nadir angle and return number for multiple returns are also useful. We began with a pulse density of 1/sq m, and after limited experiments still find this density satisfactory in the dense second-growth forests of western Washington. Lower pulse densities would have produced unacceptably limited sampling in forested areas and aliased some topographic features. Higher pulse densities do not produce markedly better topographic models, in part because of limitations of reproducibility between the overlapping survey swaths used to achieve higher density. Our experience in a variety of forest types demonstrates that the fraction of pulses that produce ground returns varies with vegetation cover, laser beam divergence, laser power, and detector sensitivity, but have not quantified this relationship. The most significant operational limits on vertical accuracy of ALSM appear to be instrument calibration and the accuracy with which returns are classified as ground or vegetation. TerraPoint has recently implemented in-situ calibration using overlapping swaths (Latypov and Zosse, 2002, see http://www.terrapoint.com/News_damirACSM_ASPRS2002.html). On the consumer side, we routinely perform a similar overlap analysis to produce maps of relative Z error between swaths; we find that in bare, low-slope regions the in-situ calibration has reduced this internal Z error to 6-10 cm RMSE. Comparison with independent ground control points commonly illuminates inconsistencies in how GPS heights have been reduced to orthometric heights. Once these inconsistencies are resolved, it appears that the internal errors are the bulk of the error of the survey. The error maps suggest that with in-situ calibration, minor time-varying errors with a period of circa 1 sec are the largest remaining source of survey error. For forested terrain, limited ground penetration and errors in return classification can severely limit the accuracy of resulting topographic models. Initial work by Haugerud and Harding demonstrated the feasibility of fully-automatic return classification; however, TerraPoint has found that better results can be obtained more effectively with 3rd-party classification software that allows a mix of automated routines and human intervention. Our relationship has been evolving since early 2000. Important aspects of this relationship include close communication between data producer and consumer, a willingness to learn from each other, significant technical expertise and resources on the consumer side, and continued refinement of achievable, quantitative performance and accuracy specifications. Most recently we have instituted a slope-dependent Z accuracy specification that TerraPoint first developed as a heuristic for surveying mountainous terrain in Switzerland. We are now working on quantifying the internal consistency of topographic models in forested areas, using a variant of overlap analysis, and standards for the spatial distribution of internal errors.
Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech☆
Cao, Houwei; Verma, Ragini; Nenkova, Ani
2014-01-01
We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotion and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of individual methods. Furthermore on the spontaneous data the ranking and standard classification are complementary and we obtain marked improvement when we combine the two classifiers by late-stage fusion. PMID:25422534
Higher-order time integration of Coulomb collisions in a plasma using Langevin equations
Dimits, A. M.; Cohen, B. I.; Caflisch, R. E.; ...
2013-02-08
The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler-Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the two fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt 1/2)] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering ifmore » and only if the “area-integral” terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler-Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. Lastly, this method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.« less
Atzori, Manfredo; Cognolato, Matteo; Müller, Henning
2016-01-01
Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed to make several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too. PMID:27656140
Atzori, Manfredo; Cognolato, Matteo; Müller, Henning
2016-01-01
Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed to make several tests in order to evaluate the effect of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied on the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods. They show that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks. This fact suggests that it may be interesting to evaluate if larger networks can increase sEMG classification accuracy too.
Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech☆
Cao, Houwei; Verma, Ragini; Nenkova, Ani
2015-01-01
We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotion and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of individual methods. Furthermore on the spontaneous data the ranking and standard classification are complementary and we obtain marked improvement when we combine the two classifiers by late-stage fusion.
Pireau, Nathalie; Cordemans, Virginie; Banse, Xavier; Irda, Nadia; Lichtherte, Sébastien; Kaminski, Ludovic
2017-11-01
Spine surgery still remains a challenge for every spine surgeon, aware of the potential serious outcomes of misplaced instrumentation. Though many studies have highlighted that using intraoperative cone beam CT imaging and navigation systems provides higher accuracy than conventional freehand methods for placement of pedicle screws in spine surgery, few studies are concerned about how to reduce radiation exposure for patients with the use of such technology. One of the main focuses of this study is based on the ALARA principle (as low as reasonably achievable). A prospective randomized trial was conducted in the hybrid operating room between December 2015 and December 2016, including 50 patients operated on for posterior instrumented thoracic and/or lumbar spinal fusion. Patients were randomized to intraoperative 3D acquisition high-dose (standard dose) or low-dose protocol, and a total of 216 pedicle screws were analyzed in terms of screw position. Two different methods were used to measure ionizing radiation: the total skin dose (derived from the dose-area product) and the radiation dose evaluated by thermoluminescent dosimeters on the surgical field. According to Gertzbein and Heary classifications, low-dose protocol provided a significant higher accuracy of pedicle screw placement than the high-dose protocol (96.1 versus 92%, respectively). Seven screws (3.2%), all implanted with the high-dose protocol, needed to be revised intraoperatively. The use of low-dose acquisition protocols reduced patient exposure by a factor of five. This study emphasizes the paramount importance of using low-dose protocols for intraoperative cone beam CT imaging coupled with the navigation system, as it at least does not affect the accuracy of pedicle screw placement and irradiates drastically less.
Maher, Toby M.; Kolb, Martin; Poletti, Venerino; Nusser, Richard; Richeldi, Luca; Vancheri, Carlo; Wilsher, Margaret L.; Antoniou, Katerina M.; Behr, Jüergen; Bendstrup, Elisabeth; Brown, Kevin; Calandriello, Lucio; Corte, Tamera J.; Crestani, Bruno; Flaherty, Kevin; Glaspole, Ian; Grutters, Jan; Inoue, Yoshikazu; Kokosi, Maria; Kondoh, Yasuhiro; Kouranos, Vasileios; Kreuter, Michael; Johannson, Kerri; Judge, Eoin; Ley, Brett; Margaritopoulos, George; Martinez, Fernando J.; Molina-Molina, Maria; Morais, António; Nunes, Hilario; Raghu, Ganesh; Ryerson, Christopher J.; Selman, Moises; Spagnolo, Paolo; Taniguchi, Hiroyuki; Tomassetti, Sara; Valeyre, Dominique; Wijsenbeek, Marlies; Wuyts, Wim; Hansell, David; Wells, Athol
2017-01-01
We conducted an international study of idiopathic pulmonary fibrosis (IPF) diagnosis among a large group of physicians and compared their diagnostic performance to a panel of IPF experts. A total of 1141 respiratory physicians and 34 IPF experts participated. Participants evaluated 60 cases of interstitial lung disease (ILD) without interdisciplinary consultation. Diagnostic agreement was measured using the weighted kappa coefficient (κw). Prognostic discrimination between IPF and other ILDs was used to validate diagnostic accuracy for first-choice diagnoses of IPF and were compared using the C-index. A total of 404 physicians completed the study. Agreement for IPF diagnosis was higher among expert physicians (κw=0.65, IQR 0.53–0.72, p<0.0001) than academic physicians (κw=0.56, IQR 0.45–0.65, p<0.0001) or physicians with access to multidisciplinary team (MDT) meetings (κw=0.54, IQR 0.45–0.64, p<0.0001). The prognostic accuracy of academic physicians with >20 years of experience (C-index=0.72, IQR 0.0–0.73, p=0.229) and non-university hospital physicians with more than 20 years of experience, attending weekly MDT meetings (C-index=0.72, IQR 0.70–0.72, p=0.052), did not differ significantly (p=0.229 and p=0.052 respectively) from the expert panel (C-index=0.74 IQR 0.72–0.75). Experienced respiratory physicians at university-based institutions diagnose IPF with similar prognostic accuracy to IPF experts. Regular MDT meeting attendance improves the prognostic accuracy of experienced non-university practitioners to levels achieved by IPF experts. PMID:28860269
Smith, Albert F.; Baxter, Suzanne Domel; Hitchcock, David B.; Finney, Christopher J.; Royer, Julie A.; Guinn, Caroline H.
2016-01-01
Objectives To investigate the relationship of reporting accuracy in 24-h dietary recalls to child respondent characteristics—cognitive ability, social desirability, body mass index (BMI) percentile, and socioeconomic status (SES). Subjects/Methods Fourth-grade children (mean age 10.1 years) were observed eating two school meals and interviewed about dietary intake for 24-h that included those meals. (Eight multiple-pass interview protocols operationalized the conditions of an experiment that crossed two retention intervals—short and long—with four prompts [ways of eliciting reports in the first pass].) Academic achievement test scores indexed cognitive ability; social desirability was assessed by questionnaire; height and weight were measured to calculate BMI; nutrition-assistance program eligibility information was obtained to index SES. Reported intake was compared to observed intake to calculate measures of reporting accuracy for school meals at the food-item (omission rate; intrusion rate) and energy (correspondence rate; inflation ratio) levels. Complete data were available for 425 of 480 validation-study participants. Results Controlling for manipulated variables and other measured respondent characteristics, for one or more of the outcome variables, reporting accuracy increased with cognitive ability (omission rate, intrusion rate, correspondence rate, P < .001); decreased with social desirability (correspondence rate, P < .0004); decreased with BMI percentile (correspondence rate, P = .001), and was better by higher than by lower SES children (intrusion rate, P = .001). Some of these effects were moderated by interactions with retention interval and sex. Conclusions Children’s dietary-reporting accuracy is systematically related to such respondent characteristics as cognitive ability, social desirability, BMI percentile, and SES. PMID:27222153
NASA Astrophysics Data System (ADS)
Roychowdhury, K.
2016-06-01
Landcover is the easiest detectable indicator of human interventions on land. Urban and peri-urban areas present a complex combination of landcover, which makes classification challenging. This paper assesses the different methods of classifying landcover using dual polarimetric Sentinel-1 data collected during monsoon (July) and winter (December) months of 2015. Four broad landcover classes such as built up areas, water bodies and wetlands, vegetation and open spaces of Kolkata and its surrounding regions were identified. Polarimetric analyses were conducted on Single Look Complex (SLC) data of the region while ground range detected (GRD) data were used for spectral and spatial classification. Unsupervised classification by means of K-Means clustering used backscatter values and was able to identify homogenous landcovers over the study area. The results produced an overall accuracy of less than 50% for both the seasons. Higher classification accuracy (around 70%) was achieved by adding texture variables as inputs along with the backscatter values. However, the accuracy of classification increased significantly with polarimetric analyses. The overall accuracy was around 80% in Wishart H-A-Alpha unsupervised classification. The method was useful in identifying urban areas due to their double-bounce scattering and vegetated areas, which have more random scattering. Normalized Difference Built-up index (NDBI) and Normalized Difference Vegetation Index (NDVI) obtained from Landsat 8 data over the study area were used to verify vegetation and urban classes. The study compares the accuracies of different methods of classifying landcover using medium resolution SAR data in a complex urban area and suggests that polarimetric analyses present the most accurate results for urban and suburban areas.
Fast detection of covert visuospatial attention using hybrid N2pc and SSVEP features
NASA Astrophysics Data System (ADS)
Xu, Minpeng; Wang, Yijun; Nakanishi, Masaki; Wang, Yu-Te; Qi, Hongzhi; Jung, Tzyy-Ping; Ming, Dong
2016-12-01
Objective. Detecting the shift of covert visuospatial attention (CVSA) is vital for gaze-independent brain-computer interfaces (BCIs), which might be the only communication approach for severely disabled patients who cannot move their eyes. Although previous studies had demonstrated that it is feasible to use CVSA-related electroencephalography (EEG) features to control a BCI system, the communication speed remains very low. This study aims to improve the speed and accuracy of CVSA detection by fusing EEG features of N2pc and steady-state visual evoked potential (SSVEP). Approach. A new paradigm was designed to code the left and right CVSA with the N2pc and SSVEP features, which were then decoded by a classification strategy based on canonical correlation analysis. Eleven subjects were recruited to perform an offline experiment in this study. Temporal waves, amplitudes, and topographies for brain responses related to N2pc and SSVEP were analyzed. The classification accuracy derived from the hybrid EEG features (SSVEP and N2pc) was compared with those using the single EEG features (SSVEP or N2pc). Main results. The N2pc could be significantly enhanced under certain conditions of SSVEP modulations. The hybrid EEG features achieved significantly higher accuracy than the single features. It obtained an average accuracy of 72.9% by using a data length of 400 ms after the attention shift. Moreover, the average accuracy reached ˜80% (peak values above 90%) when using 2 s long data. Significance. The results indicate that the combination of N2pc and SSVEP is effective for fast detection of CVSA. The proposed method could be a promising approach for implementing a gaze-independent BCI.
Linkage disequilibrium among commonly genotyped SNP and variants detected from bull sequence
USDA-ARS?s Scientific Manuscript database
Genomic prediction utilizing causal variants could increase selection accuracy above that achieved with SNP genotyped by commercial assays. A number of variants detected from sequencing influential sires are likely to be causal, but noticable improvements in prediction accuracy using imputed sequen...
Improving Speaking Accuracy through Awareness
ERIC Educational Resources Information Center
Dormer, Jan Edwards
2013-01-01
Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…
Evaluation of Relative Navigation Algorithms for Formation-Flying Satellites
NASA Technical Reports Server (NTRS)
Kelbel, David; Lee, Taesul; Long, Anne; Carpenter, J. Russell; Gramling, Cheryl
2001-01-01
Goddard Space Flight Center is currently developing advanced spacecraft systems to provide autonomous navigation and control of formation flyers. This paper discusses autonomous relative navigation performance for formations in eccentric, medium, and high-altitude Earth orbits using Global Positioning System (GPS) Standard Positioning Service (SPS) and intersatellite range measurements. The performance of several candidate relative navigation approaches is evaluated. These analyses indicate that the relative navigation accuracy is primarily a function of the frequency of acquisition and tracking of the GPS signals. A relative navigation position accuracy of 0.5 meters root-mean-square (RMS) can be achieved for formations in medium-attitude eccentric orbits that can continuously track at least one GPS signal. A relative navigation position accuracy of better than 75 meters RMS can be achieved for formations in high-altitude eccentric orbits that have sparse tracking of the GPS signals. The addition of round-trip intersatellite range measurements can significantly improve relative navigation accuracy for formations with sparse tracking of the GPS signals.
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
Experimental studies of high-accuracy RFID localization with channel impairments
NASA Astrophysics Data System (ADS)
Pauls, Eric; Zhang, Yimin D.
2015-05-01
Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to close-range localization. One of the important applications of a passive RFID system is to determine the reader position through multilateration based on the estimated distances between the reader and multiple distributed reference tags obtained from, e.g., the received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of passive RFID reader localization suffers from many factors, such as the distorted RSSI reading due to channel impairments in terms of the susceptibility to reader antenna patterns and multipath propagation. Previous studies have shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and compensating for such channel impairments. The objective of this paper is to report experimental study results that validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and, therefore, the estimated reader localization. These issues include the variations in tag radiation characteristics for similar tags, effects of tag orientations, and reader RSS quantization and measurement errors. As such, this paper reveals valuable insights of the issues and solutions toward achieving high-accuracy passive RFID localization.
Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration
Deng, Mingjun; Li, Jiansong
2017-01-01
The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675
COMPASS time synchronization and dissemination—Toward centimetre positioning accuracy
NASA Astrophysics Data System (ADS)
Wang, ZhengBo; Zhao, Lu; Wang, ShiGuang; Zhang, JianWei; Wang, Bo; Wang, LiJun
2014-09-01
In this paper we investigate methods to achieve highly accurate time synchronization among the satellites of the COMPASS global navigation satellite system (GNSS). Owing to the special design of COMPASS which implements several geo-stationary satellites (GEO), time synchronization can be highly accurate via microwave links between ground stations to the GEO satellites. Serving as space-borne relay stations, the GEO satellites can further disseminate time and frequency signals to other satellites such as the inclined geo-synchronous (IGSO) and mid-earth orbit (MEO) satellites within the system. It is shown that, because of the accuracy in clock synchronization, the theoretical accuracy of COMPASS positioning and navigation will surpass that of the GPS. In addition, the COMPASS system can function with its entire positioning, navigation, and time-dissemination services even without the ground link, thus making it much more robust and secure. We further show that time dissemination using the COMPASS-GEO satellites to earth-fixed stations can achieve very high accuracy, to reach 100 ps in time dissemination and 3 cm in positioning accuracy, respectively. In this paper, we also analyze two feasible synchronization plans. All special and general relativistic effects related to COMPASS clocks frequency and time shifts are given. We conclude that COMPASS can reach centimeter-level positioning accuracy and discuss potential applications.
Multiple scene attitude estimator performance for LANDSAT-1
NASA Technical Reports Server (NTRS)
Rifman, S. S.; Monuki, A. T.; Shortwell, C. P.
1979-01-01
Initial results are presented to demonstrate the performance of a linear sequential estimator (Kalman Filter) used to estimate a LANDSAT 1 spacecraft attitude time series defined for four scenes. With the revised estimator a GCP poor scene - a scene with no usable geodetic control points (GCPs) - can be rectified to higher accuracies than otherwise based on the use of GCPs in adjacent scenes. Attitude estimation errors was determined by the use of GCPs located in the GCP-poor test scene, but which are not used to update the Kalman filter. Initial results achieved indicate that errors of 500m (rms) can be attained for the GCP-poor scenes. Operational factors are related to various scenarios.
Mekid, Samir; Vacharanukul, Ketsaya
2006-01-01
To achieve dynamic error compensation in CNC machine tools, a non-contact laser probe capable of dimensional measurement of a workpiece while it is being machined has been developed and presented in this paper. The measurements are automatically fed back to the machine controller for intelligent error compensations. Based on a well resolved laser Doppler technique and real time data acquisition, the probe delivers a very promising dimensional accuracy at few microns over a range of 100 mm. The developed optical measuring apparatus employs a differential laser Doppler arrangement allowing acquisition of information from the workpiece surface. In addition, the measurements are traceable to standards of frequency allowing higher precision.
Can Jupiters be found by monitoring Galactic bulge microlensing events from northern sites?
NASA Astrophysics Data System (ADS)
Tsapras, Yiannis; Street, Rachel A.; Horne, Keith; Penny, Alan; Clarke, Fraser; Deeg, Hans; Garzon, Francisco; Kemp, Simon; Zapatero Osorio, Maria Rosa; Oscoz, Alejandro Abad; Sanchez, Santiago Madruga; Eiroa, Carlos; Mora, Alcione; Alberdi, Antxon; Collier Cameron, Andrew; Davies, John K.; Ferlet, Roger; Grady, Carol; Harris, Allan W.; Palacios, Javier; Quirrenbach, Andreas; Rauer, Heike; Schneider, Jean; de Winter, Dolf; Merin, Bruno; Solano, Enrique
2001-08-01
In 1998 the EXPORT team monitored microlensing event light curves using a charge-coupled device (CCD) camera on the IACQ4 0.8-m telescope on Tenerife to evaluate the prospect of using northern telescopes to find microlens anomalies that reveal planets orbiting the lens stars. The high airmass and more limited time available for observations of Galactic bulge sources make a northern site less favourable for microlensing planet searches. However, there are potentially a large number of northern 1-m class telescopes that could devote a few hours per night to monitor ongoing microlensing events. Our IAC observations indicate that accuracies sufficient to detect planets can be achieved despite the higher airmass.
Efficient Unsteady Flow Visualization with High-Order Access Dependencies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru
We present a novel high-order access dependencies based model for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability in data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly-seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiencymore » of pathline computation.« less
NASA Technical Reports Server (NTRS)
Hung, J. C.
1980-01-01
The pointing control of a microwave antenna of the Satellite Power System was investigated emphasizing: (1) the SPS antenna pointing error sensing method; (2) a rigid body pointing control design; and (3) approaches for modeling the flexible body characteristics of the solar collector. Accuracy requirements for the antenna pointing control consist of a mechanical pointing control accuracy of three arc-minutes and an electronic phased array pointing accuracy of three arc-seconds. Results based on the factors considered in current analysis, show that the three arc-minute overall pointing control accuracy can be achieved in practice.
Smartphone-Based Indoor Localization with Bluetooth Low Energy Beacons
Zhuang, Yuan; Yang, Jun; Li, You; Qi, Longning; El-Sheimy, Naser
2016-01-01
Indoor wireless localization using Bluetooth Low Energy (BLE) beacons has attracted considerable attention after the release of the BLE protocol. In this paper, we propose an algorithm that uses the combination of channel-separate polynomial regression model (PRM), channel-separate fingerprinting (FP), outlier detection and extended Kalman filtering (EKF) for smartphone-based indoor localization with BLE beacons. The proposed algorithm uses FP and PRM to estimate the target’s location and the distances between the target and BLE beacons respectively. We compare the performance of distance estimation that uses separate PRM for three advertisement channels (i.e., the separate strategy) with that use an aggregate PRM generated through the combination of information from all channels (i.e., the aggregate strategy). The performance of FP-based location estimation results of the separate strategy and the aggregate strategy are also compared. It was found that the separate strategy can provide higher accuracy; thus, it is preferred to adopt PRM and FP for each BLE advertisement channel separately. Furthermore, to enhance the robustness of the algorithm, a two-level outlier detection mechanism is designed. Distance and location estimates obtained from PRM and FP are passed to the first outlier detection to generate improved distance estimates for the EKF. After the EKF process, the second outlier detection algorithm based on statistical testing is further performed to remove the outliers. The proposed algorithm was evaluated by various field experiments. Results show that the proposed algorithm achieved the accuracy of <2.56 m at 90% of the time with dense deployment of BLE beacons (1 beacon per 9 m), which performs 35.82% better than <3.99 m from the Propagation Model (PM) + EKF algorithm and 15.77% more accurate than <3.04 m from the FP + EKF algorithm. With sparse deployment (1 beacon per 18 m), the proposed algorithm achieves the accuracies of <3.88 m at 90% of the time, which performs 49.58% more accurate than <8.00 m from the PM + EKF algorithm and 21.41% better than <4.94 m from the FP + EKF algorithm. Therefore, the proposed algorithm is especially useful to improve the localization accuracy in environments with sparse beacon deployment. PMID:27128917
Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel
2004-01-01
A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or to store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly more difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increases the computational cost greatly.
Smartphone-Based Indoor Localization with Bluetooth Low Energy Beacons.
Zhuang, Yuan; Yang, Jun; Li, You; Qi, Longning; El-Sheimy, Naser
2016-04-26
Indoor wireless localization using Bluetooth Low Energy (BLE) beacons has attracted considerable attention after the release of the BLE protocol. In this paper, we propose an algorithm that uses the combination of channel-separate polynomial regression model (PRM), channel-separate fingerprinting (FP), outlier detection and extended Kalman filtering (EKF) for smartphone-based indoor localization with BLE beacons. The proposed algorithm uses FP and PRM to estimate the target's location and the distances between the target and BLE beacons respectively. We compare the performance of distance estimation that uses separate PRM for three advertisement channels (i.e., the separate strategy) with that use an aggregate PRM generated through the combination of information from all channels (i.e., the aggregate strategy). The performance of FP-based location estimation results of the separate strategy and the aggregate strategy are also compared. It was found that the separate strategy can provide higher accuracy; thus, it is preferred to adopt PRM and FP for each BLE advertisement channel separately. Furthermore, to enhance the robustness of the algorithm, a two-level outlier detection mechanism is designed. Distance and location estimates obtained from PRM and FP are passed to the first outlier detection to generate improved distance estimates for the EKF. After the EKF process, the second outlier detection algorithm based on statistical testing is further performed to remove the outliers. The proposed algorithm was evaluated by various field experiments. Results show that the proposed algorithm achieved the accuracy of <2.56 m at 90% of the time with dense deployment of BLE beacons (1 beacon per 9 m), which performs 35.82% better than <3.99 m from the Propagation Model (PM) + EKF algorithm and 15.77% more accurate than <3.04 m from the FP + EKF algorithm. With sparse deployment (1 beacon per 18 m), the proposed algorithm achieves the accuracies of <3.88 m at 90% of the time, which performs 49.58% more accurate than <8.00 m from the PM + EKF algorithm and 21.41% better than <4.94 m from the FP + EKF algorithm. Therefore, the proposed algorithm is especially useful to improve the localization accuracy in environments with sparse beacon deployment.
Gemignani, Jessica; Middell, Eike; Barbour, Randall L; Graber, Harry L; Blankertz, Benjamin
2018-04-04
The statistical analysis of functional near infrared spectroscopy (fNIRS) data based on the general linear model (GLM) is often made difficult by serial correlations, high inter-subject variability of the hemodynamic response, and the presence of motion artifacts. In this work we propose to extract information on the pattern of hemodynamic activations without using any a priori model for the data, by classifying the channels as 'active' or 'not active' with a multivariate classifier based on linear discriminant analysis (LDA). This work is developed in two steps. First we compared the performance of the two analyses, using a synthetic approach in which simulated hemodynamic activations were combined with either simulated or real resting-state fNIRS data. This procedure allowed for exact quantification of the classification accuracies of GLM and LDA. In the case of real resting-state data, the correlations between classification accuracy and demographic characteristics were investigated by means of a Linear Mixed Model. In the second step, to further characterize the reliability of the newly proposed analysis method, we conducted an experiment in which participants had to perform a simple motor task and data were analyzed with the LDA-based classifier as well as with the standard GLM analysis. The results of the simulation study show that the LDA-based method achieves higher classification accuracies than the GLM analysis, and that the LDA results are more uniform across different subjects and, in contrast to the accuracies achieved by the GLM analysis, have no significant correlations with any of the demographic characteristics. Findings from the real-data experiment are consistent with the results of the real-plus-simulation study, in that the GLM-analysis results show greater inter-subject variability than do the corresponding LDA results. The results obtained suggest that the outcome of GLM analysis is highly vulnerable to violations of theoretical assumptions, and that therefore a data-driven approach such as that provided by the proposed LDA-based method is to be favored.
Estimation of different data compositions for early-season crop type classification.
Hao, Pengyu; Wu, Mingquan; Niu, Zheng; Wang, Li; Zhan, Yulin
2018-01-01
Timely and accurate crop type distribution maps are an important inputs for crop yield estimation and production forecasting as multi-temporal images can observe phenological differences among crops. Therefore, time series remote sensing data are essential for crop type mapping, and image composition has commonly been used to improve the quality of the image time series. However, the optimal composition period is unclear as long composition periods (such as compositions lasting half a year) are less informative and short composition periods lead to information redundancy and missing pixels. In this study, we initially acquired daily 30 m Normalized Difference Vegetation Index (NDVI) time series by fusing MODIS, Landsat, Gaofen and Huanjing (HJ) NDVI, and then composited the NDVI time series using four strategies (daily, 8-day, 16-day, and 32-day). We used Random Forest to identify crop types and evaluated the classification performances of the NDVI time series generated from four composition strategies in two studies regions from Xinjiang, China. Results indicated that crop classification performance improved as crop separabilities and classification accuracies increased, and classification uncertainties dropped in the green-up stage of the crops. When using daily NDVI time series, overall accuracies saturated at 113-day and 116-day in Bole and Luntai, and the saturated overall accuracies (OAs) were 86.13% and 91.89%, respectively. Cotton could be identified 40∼60 days and 35∼45 days earlier than the harvest in Bole and Luntai when using daily, 8-day and 16-day composition NDVI time series since both producer's accuracies (PAs) and user's accuracies (UAs) were higher than 85%. Among the four compositions, the daily NDVI time series generated the highest classification accuracies. Although the 8-day, 16-day and 32-day compositions had similar saturated overall accuracies (around 85% in Bole and 83% in Luntai), the 8-day and 16-day compositions achieved these accuracies around 155-day in Bole and 133-day in Luntai, which were earlier than the 32-day composition (170-day in both Bole and Luntai). Therefore, when the daily NDVI time series cannot be acquired, the 16-day composition is recommended in this study.
Estimation of different data compositions for early-season crop type classification
Wu, Mingquan; Wang, Li; Zhan, Yulin
2018-01-01
Timely and accurate crop type distribution maps are an important inputs for crop yield estimation and production forecasting as multi-temporal images can observe phenological differences among crops. Therefore, time series remote sensing data are essential for crop type mapping, and image composition has commonly been used to improve the quality of the image time series. However, the optimal composition period is unclear as long composition periods (such as compositions lasting half a year) are less informative and short composition periods lead to information redundancy and missing pixels. In this study, we initially acquired daily 30 m Normalized Difference Vegetation Index (NDVI) time series by fusing MODIS, Landsat, Gaofen and Huanjing (HJ) NDVI, and then composited the NDVI time series using four strategies (daily, 8-day, 16-day, and 32-day). We used Random Forest to identify crop types and evaluated the classification performances of the NDVI time series generated from four composition strategies in two studies regions from Xinjiang, China. Results indicated that crop classification performance improved as crop separabilities and classification accuracies increased, and classification uncertainties dropped in the green-up stage of the crops. When using daily NDVI time series, overall accuracies saturated at 113-day and 116-day in Bole and Luntai, and the saturated overall accuracies (OAs) were 86.13% and 91.89%, respectively. Cotton could be identified 40∼60 days and 35∼45 days earlier than the harvest in Bole and Luntai when using daily, 8-day and 16-day composition NDVI time series since both producer’s accuracies (PAs) and user’s accuracies (UAs) were higher than 85%. Among the four compositions, the daily NDVI time series generated the highest classification accuracies. Although the 8-day, 16-day and 32-day compositions had similar saturated overall accuracies (around 85% in Bole and 83% in Luntai), the 8-day and 16-day compositions achieved these accuracies around 155-day in Bole and 133-day in Luntai, which were earlier than the 32-day composition (170-day in both Bole and Luntai). Therefore, when the daily NDVI time series cannot be acquired, the 16-day composition is recommended in this study. PMID:29868265
Agreement and accuracy using the FIGO, ACOG and NICE cardiotocography interpretation guidelines.
Santo, Susana; Ayres-de-Campos, Diogo; Costa-Santos, Cristina; Schnettler, William; Ugwumadu, Austin; Da Graça, Luís M
2017-02-01
One of the limitations reported with cardiotocography is the modest interobserver agreement observed in tracing interpretation. This study compared agreement, reliability and accuracy of cardiotocography interpretation using the International Federation of Gynecology and Obstetrics, American College of Obstetrics and Gynecology and National Institute for Health and Care Excellence guidelines. A total of 151 tracings were evaluated by 27 clinicians from three centers where International Federation of Gynecology and Obstetrics, American College of Obstetrics and Gynecology and National Institute for Health and Care Excellence guidelines were routinely used. Interobserver agreement was evaluated using the proportions of agreement and reliability with the κ statistic. The accuracy of tracings classified as "pathological/category III" was assessed for prediction of newborn acidemia. For all measures, 95% confidence interval were calculated. Cardiotocography classifications were more distributed with International Federation of Gynecology and Obstetrics (9, 52, 39%) and National Institute for Health and Care Excellence (30, 33, 37%) than with American College of Obstetrics and Gynecology (13, 81, 6%). The category with the highest agreement was American College of Obstetrics and Gynecology category II (proportions of agreement = 0.73, 95% confidence interval 0.70-76), and the ones with the lowest agreement were American College of Obstetrics and Gynecology categories I and III. Reliability was significantly higher with International Federation of Gynecology and Obstetrics (κ = 0.37, 95% confidence interval 0.31-0.43), and National Institute for Health and Care Excellence (κ = 0.33, 95% confidence interval 0.28-0.39) than with American College of Obstetrics and Gynecology (κ = 0.15, 95% confidence interval 0.10-0.21); however, all represent only slight/fair reliability. International Federation of Gynecology and Obstetrics and National Institute for Health and Care Excellence showed a trend towards higher sensitivities in prediction of newborn acidemia (89 and 97%, respectively) than American College of Obstetrics and Gynecology (32%), but the latter achieved a significantly higher specificity (95%). With American College of Obstetrics and Gynecology guidelines there is high agreement in category II, low reliability, low sensitivity and high specificity in prediction of acidemia. With International Federation of Gynecology and Obstetrics and National Institute for Health and Care Excellence guidelines there is higher reliability, a trend towards higher sensitivity, and lower specificity in prediction of acidemia. © 2016 Nordic Federation of Societies of Obstetrics and Gynecology.
Spectroscopy of H3+ based on a new high-accuracy global potential energy surface.
Polyansky, Oleg L; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Ovsyannikov, Roman I; Tennyson, Jonathan; Lodi, Lorenzo; Szidarovszky, Tamás; Császár, Attila G
2012-11-13
The molecular ion H(3)(+) is the simplest polyatomic and poly-electronic molecular system, and its spectrum constitutes an important benchmark for which precise answers can be obtained ab initio from the equations of quantum mechanics. Significant progress in the computation of the ro-vibrational spectrum of H(3)(+) is discussed. A new, global potential energy surface (PES) based on ab initio points computed with an average accuracy of 0.01 cm(-1) relative to the non-relativistic limit has recently been constructed. An analytical representation of these points is provided, exhibiting a standard deviation of 0.097 cm(-1). Problems with earlier fits are discussed. The new PES is used for the computation of transition frequencies. Recently measured lines at visible wavelengths combined with previously determined infrared ro-vibrational data show that an accuracy of the order of 0.1 cm(-1) is achieved by these computations. In order to achieve this degree of accuracy, relativistic, adiabatic and non-adiabatic effects must be properly accounted for. The accuracy of these calculations facilitates the reassignment of some measured lines, further reducing the standard deviation between experiment and theory.
Improving IMES Localization Accuracy by Integrating Dead Reckoning Information
Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki
2016-01-01
Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. For understanding the carrier noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine the reason for the positioning accuracy of pedestrian localization being much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
Real-time, resource-constrained object classification on a micro-air vehicle
NASA Astrophysics Data System (ADS)
Buck, Louis; Ray, Laura
2013-12-01
A real-time embedded object classification algorithm is developed through the novel combination of binary feature descriptors, a bag-of-visual-words object model and the cortico-striatal loop (CSL) learning algorithm. The BRIEF, ORB and FREAK binary descriptors are tested and compared to SIFT descriptors with regard to their respective classification accuracies, execution times, and memory requirements when used with CSL on a 12.6 g ARM Cortex embedded processor running at 800 MHz. Additionally, the effect of x2 feature mapping and opponent-color representations used with these descriptors is examined. These tests are performed on four data sets of varying sizes and difficulty, and the BRIEF descriptor is found to yield the best combination of speed and classification accuracy. Its use with CSL achieves accuracies between 67% and 95% of those achieved with SIFT descriptors and allows for the embedded classification of a 128x192 pixel image in 0.15 seconds, 60 times faster than classification with SIFT. X2 mapping is found to provide substantial improvements in classification accuracy for all of the descriptors at little cost, while opponent-color descriptors are offer accuracy improvements only on colorful datasets.
NASA Astrophysics Data System (ADS)
Stock, M.; Lapierre, J. L.; Zhu, Y.
2017-12-01
Recently, the Geostationary Lightning Mapper (GLM) began collecting optical data to locate lightning events and flashes over the North and South American continents. This new instrument promises uniformly high detection efficiency (DE) over its entire field of view, with location accuracy on the order of 10 km. In comparison, Earth Networks Total Lightning Networks (ENTLN) has a less uniform coverage, with higher DE in regions with dense sensor coverage, and lower DE with sparse sensor coverage. ENTLN also offers better location accuracy, lightning classification, and peak current estimation for their lightning locations. It is desirable to produce an integrated dataset, combining the strong points of GLM and ENTLN. The easiest way to achieve this is to simply match located lightning processes from each system using time and distance criteria. This simple method will be limited in scope by the uneven coverage of the ground based network. Instead, we will use GLM group locations to look up the electric field change data recorded by ground sensors near each GLM group, vastly increasing the coverage of the ground network. The ground waveforms can then be used for: improvements to differentiation between glint and lightning for GLM, higher precision lighting location, current estimation, and lightning process classification. Presented is an initial implementation of this type of integration using preliminary GLM data, and waveforms from ENTLN.
LDA boost classification: boosting by topics
NASA Astrophysics Data System (ADS)
Lei, La; Qiao, Guo; Qimin, Cao; Qitao, Li
2012-12-01
AdaBoost is an efficacious classification algorithm especially in text categorization (TC) tasks. The methodology of setting up a classifier committee and voting on the documents for classification can achieve high categorization precision. However, traditional Vector Space Model can easily lead to the curse of dimensionality and feature sparsity problems; so it affects classification performance seriously. This article proposed a novel classification algorithm called LDABoost based on boosting ideology which uses Latent Dirichlet Allocation (LDA) to modeling the feature space. Instead of using words or phrase, LDABoost use latent topics as the features. In this way, the feature dimension is significantly reduced. Improved Naïve Bayes (NB) is designed as the weaker classifier which keeps the efficiency advantage of classic NB algorithm and has higher precision. Moreover, a two-stage iterative weighted method called Cute Integration in this article is proposed for improving the accuracy by integrating weak classifiers into strong classifier in a more rational way. Mutual Information is used as metrics of weights allocation. The voting information and the categorization decision made by basis classifiers are fully utilized for generating the strong classifier. Experimental results reveals LDABoost making categorization in a low-dimensional space, it has higher accuracy than traditional AdaBoost algorithms and many other classic classification algorithms. Moreover, its runtime consumption is lower than different versions of AdaBoost, TC algorithms based on support vector machine and Neural Networks.
NASA Astrophysics Data System (ADS)
Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen
2015-01-01
Over the past decade, location based services (LBS) have found their wide applications in indoor environments, such as large shopping malls, hospitals, warehouses, airports, etc. Current technologies provide wide choices of available solutions, which include Radio-frequency identification (RFID), Ultra wideband (UWB), wireless local area network (WLAN) and Bluetooth. With the rapid development of light-emitting-diodes (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has a better immunity against multipath effect than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and positioning purpose to realize relatively lower infrastructure cost. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photo diodes as receivers. The algorithm for estimation is based on received-signalstrength (RSS) information collected from photo diodes and trilateration technique. By appropriately making use of the characteristics of receiver movements and the property of trilateration, estimation on three-dimensional (3-D) coordinates is attained. Filtering technique is applied to enable tracking capability of the algorithm, and a higher accuracy is reached compare to raw estimates. Gaussian mixture Sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of Gaussian Mixture Model (GMM). The number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
Prospects for higher spatial resolution quantitative X-ray analysis using transition element L-lines
NASA Astrophysics Data System (ADS)
Statham, P.; Holland, J.
2014-03-01
Lowering electron beam kV reduces electron scattering and improves spatial resolution of X-ray analysis. However, a previous round robin analysis of steels at 5 - 6 kV using Lα-lines for the first row transition elements gave poor accuracies. Our experiments on SS63 steel using Lα-lines show similar biases in Cr and Ni that cannot be corrected with changes to self-absorption coefficients or carbon coating. The inaccuracy may be caused by different probabilities for emission and anomalous self-absorption for the La-line between specimen and pure element standard. Analysis using Ll(L3-M1)-lines gives more accurate results for SS63 plausibly because the M1-shell is not so vulnerable to the atomic environment as the unfilled M4,5-shell. However, Ll-intensities are very weak and WDS analysis may be impractical for some applications. EDS with large area SDD offers orders of magnitude faster analysis and achieves similar results to WDS analysis with Lα-lines but poorer energy resolution precludes the use of Ll-lines in most situations. EDS analysis of K-lines at low overvoltage is an alternative strategy for improving spatial resolution that could give higher accuracy. The trade-off between low kV versus low overvoltage is explored in terms of sensitivity for element detection for different elements.
NASA Astrophysics Data System (ADS)
Zhou, Yunfei; Cai, Hongzhi; Zhong, Liyun; Qiu, Xiang; Tian, Jindong; Lu, Xiaoxu
2017-05-01
In white light scanning interferometry (WLSI), the accuracy of profile measurement achieved with the conventional zero optical path difference (ZOPD) position locating method is closely related with the shape of interference signal envelope (ISE), which is mainly decided by the spectral distribution of illumination source. For a broadband light with Gaussian spectral distribution, the corresponding shape of ISE reveals a symmetric distribution, so the accurate ZOPD position can be achieved easily. However, if the spectral distribution of source is irregular, the shape of ISE will become asymmetric or complex multi-peak distribution, WLSI cannot work well through using ZOPD position locating method. Aiming at this problem, we propose time-delay estimation (TDE) based WLSI method, in which the surface profile information is achieved by using the relative displacement of interference signal between different pixels instead of the conventional ZOPD position locating method. Due to all spectral information of interference signal (envelope and phase) are utilized, in addition to revealing the advantage of high accuracy, the proposed method can achieve profile measurement with high accuracy in the case that the shape of ISE is irregular while ZOPD position locating method cannot work. That is to say, the proposed method can effectively eliminate the influence of source spectrum.
Scale and the evolutionarily based approximate number system: an exploratory study
NASA Astrophysics Data System (ADS)
Delgado, Cesar; Jones, M. Gail; You, Hye Sun; Robertson, Laura; Chesnutt, Katherine; Halberda, Justin
2017-05-01
Crosscutting concepts such as scale, proportion, and quantity are recognised by U.S. science standards as a potential vehicle for students to integrate their scientific and mathematical knowledge; yet, U.S. students and adults trail their international peers in scale and measurement estimation. Culturally based knowledge of scale such as measurement units may be built on evolutionarily-based systems of number such as the approximate number system (ANS), which processes approximate representations of numerical magnitude. ANS is related to mathematical achievement in pre-school and early elementary students, but there is little research on ANS among older students or in science-related areas such as scale. Here, we investigate the relationship between ANS precision in public school U.S. seventh graders and their accuracy estimating the length of standard units of measurement in SI and U.S. customary units. We also explored the relationship between ANS and science and mathematics achievement. Accuracy estimating the metre was positively and significantly related to ANS precision. Mathematics achievement, science achievement, and accuracy estimating other units were not significantly related to ANS. We thus suggest that ANS precision may be related to mathematics understanding beyond arithmetic, beyond the early school years, and to the crosscutting concepts of scale, proportion, and quantity.
Parkinson's disease detection based on dysphonia measurements
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2017-04-01
Assessing dysphonic symptoms is a noninvasive and effective approach to detect Parkinson's disease (PD) in patients. The main purpose of this study is to investigate the effect of different dysphonia measurements on PD detection by support vector machine (SVM). Seven categories of dysphonia measurements are considered. Experimental results from ten-fold cross-validation technique demonstrate that vocal fundamental frequency statistics yield the highest accuracy of 88 % ± 0.04. When all dysphonia measurements are employed, the SVM classifier achieves 94 % ± 0.03 accuracy. A refinement of the original patterns space by removing dysphonia measurements with similar variation across healthy and PD subjects allows achieving 97.03 % ± 0.03 accuracy. The latter performance is larger than what is reported in the literature on the same dataset with ten-fold cross-validation technique. Finally, it was found that measures of ratio of noise to tonal components in the voice are the most suitable dysphonic symptoms to detect PD subjects as they achieve 99.64 % ± 0.01 specificity. This finding is highly promising for understanding PD symptoms.
Image-processing algorithms for inspecting characteristics of hybrid rice seed
NASA Astrophysics Data System (ADS)
Cheng, Fang; Ying, Yibin
2004-03-01
Incompletely closed glumes, germ and disease are three characteristics of hybrid rice seed. Image-processing algorithms developed to detect these seed characteristics were presented in this paper. The rice seed used for this study involved five varieties of Jinyou402, Shanyou10, Zhongyou207, Jiayou and IIyou. The algorithms were implemented with a 5*600 images set, a 4*400 images set and the other 5*600 images set respectively. The image sets included black background images, white background images and both sides images of rice seed. Results show that the algorithm for inspecting seeds with incompletely closed glumes based on Radon Transform achieved an accuracy of 96% for normal seeds, 92% for seeds with fine fissure and 87% for seeds with unclosed glumes, the algorithm for inspecting germinated seeds on panicle based on PCA and ANN achieved n average accuracy of 98% for normal seeds, 88% for germinated seeds on panicle and the algorithm for inspecting diseased seeds based on color features achieved an accuracy of 92% for normal and healthy seeds, 95% for spot diseased seeds and 83% for severe diseased seeds.
Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.
Chen, Shizhi; Yang, Xiaodong; Tian, Yingli
2015-09-01
A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. The learning-based classifiers achieve the state-of-the-art accuracies, but have been criticized for the computational complexity that grows linearly with the number of classes. The nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, i.e., discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree only grows sublinearly with the number of categories, which is much better than the recent hierarchical support vector machines-based methods. The memory requirement is the order of magnitude less than the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves the state-of-the-art accuracies, while with significantly lower computation cost and memory requirement.
Zheng, Qi; Grice, Elizabeth A
2016-10-01
Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost's algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.
Validation of geometric accuracy of Global Land Survey (GLS) 2000 data
Rengarajan, Rajagopalan; Sampath, Aparajithan; Storey, James C.; Choate, Michael J.
2015-01-01
The Global Land Survey (GLS) 2000 data were generated from Geocover™ 2000 data with the aim of producing a global data set of accuracy better than 25 m Root Mean Square Error (RMSE). An assessment and validation of accuracy of GLS 2000 data set, and its co-registration with Geocover™ 2000 data set is presented here. Since the availability of global data sets that have higher nominal accuracy than the GLS 2000 is a concern, the data sets were assessed in three tiers. In the first tier, the data were compared with the Geocover™ 2000 data. This comparison provided a means of localizing regions of higher differences. In the second tier, the GLS 2000 data were compared with systematically corrected Landsat-7 scenes that were obtained in a time period when the spacecraft pointing information was extremely accurate. These comparisons localize regions where the data are consistently off, which may indicate regions of higher errors. The third tier consisted of comparing the GLS 2000 data against higher accuracy reference data. The reference data were the Digital Ortho Quads over the United States, orthorectified SPOT data over Australia, and high accuracy check points obtained using triangulation bundle adjustment of Landsat-7 images over selected sites around the world. The study reveals that the geometric errors in Geocover™ 2000 data have been rectified in GLS 2000 data, and that the accuracy of GLS 2000 data can be expected to be better than 25 m RMSE for most of its constituent scenes.
Yu, Jue; Zhuang, Jian; Yu, Dehong
2015-01-01
This paper concerns a state feedback integral control using a Lyapunov function approach for a rotary direct drive servo valve (RDDV) while considering parameter uncertainties. Modeling of this RDDV servovalve reveals that its mechanical performance is deeply influenced by friction torques and flow torques; however, these torques are uncertain and mutable due to the nature of fluid flow. To eliminate load resistance and to achieve satisfactory position responses, this paper develops a state feedback control that integrates an integral action and a Lyapunov function. The integral action is introduced to address the nonzero steady-state error; in particular, the Lyapunov function is employed to improve control robustness by adjusting the varying parameters within their value ranges. This new controller also has the advantages of simple structure and ease of implementation. Simulation and experimental results demonstrate that the proposed controller can achieve higher control accuracy and stronger robustness. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Finite Element Analysis in Concurrent Processing: Computational Issues
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett
2004-01-01
The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.
Unsupervised spike sorting based on discriminative subspace learning.
Keshtkaran, Mohammad Reza; Yang, Zhi
2014-01-01
Spike sorting is a fundamental preprocessing step for many neuroscience studies which rely on the analysis of spike trains. In this paper, we present two unsupervised spike sorting algorithms based on discriminative subspace learning. The first algorithm simultaneously learns the discriminative feature subspace and performs clustering. It uses histogram of features in the most discriminative projection to detect the number of neurons. The second algorithm performs hierarchical divisive clustering that learns a discriminative 1-dimensional subspace for clustering in each level of the hierarchy until achieving almost unimodal distribution in the subspace. The algorithms are tested on synthetic and in-vivo data, and are compared against two widely used spike sorting methods. The comparative results demonstrate that our spike sorting methods can achieve substantially higher accuracy in lower dimensional feature space, and they are highly robust to noise. Moreover, they provide significantly better cluster separability in the learned subspace than in the subspace obtained by principal component analysis or wavelet transform.
A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface
NASA Astrophysics Data System (ADS)
Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo
2016-09-01
The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from the staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied to the curved free surface to remove the staircasing error; in the mean time, to achieve the same stability as the FDTD method without reducing the time step increment, the ECT is introduced to preserve the solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. This method is verified by several 3-D numerical examples. Results show that the method is stable at the Courant stability limit for a regular FDTD grid, and has much higher accuracy than the conventional FDTD method.
Color image segmentation with support vector machines: applications to road signs detection.
Cyganek, Bogusław
2008-08-01
In this paper we propose efficient color segmentation method which is based on the Support Vector Machine classifier operating in a one-class mode. The method has been developed especially for the road signs recognition system, although it can be used in other applications. The main advantage of the proposed method comes from the fact that the segmentation of characteristic colors is performed not in the original but in the higher dimensional feature space. By this a better data encapsulation with a linear hypersphere can be usually achieved. Moreover, the classifier does not try to capture the whole distribution of the input data which is often difficult to achieve. Instead, the characteristic data samples, called support vectors, are selected which allow construction of the tightest hypersphere that encloses majority of the input data. Then classification of a test data simply consists in a measurement of its distance to a centre of the found hypersphere. The experimental results show high accuracy and speed of the proposed method.
Graf, M; Kaping, D; Bülthoff, H H
2005-03-01
How do observers recognize objects after spatial transformations? Recent neurocomputational models have proposed that object recognition is based on coordinate transformations that align memory and stimulus representations. If the recognition of a misoriented object is achieved by adjusting a coordinate system (or reference frame), then recognition should be facilitated when the object is preceded by a different object in the same orientation. In the two experiments reported here, two objects were presented in brief masked displays that were in close temporal contiguity; the objects were in either congruent or incongruent picture-plane orientations. Results showed that naming accuracy was higher for congruent than for incongruent orientations. The congruency effect was independent of superordinate category membership (Experiment 1) and was found for objects with different main axes of elongation (Experiment 2). The results indicate congruency effects for common familiar objects even when they have dissimilar shapes. These findings are compatible with models in which object recognition is achieved by an adjustment of a perceptual coordinate system.
Dy, Christopher J; Taylor, Samuel A; Patel, Ronak M; Kitay, Alison; Roberts, Timothy R; Daluiski, Aaron
2012-09-01
Recent emphasis on shared decision making and patient-centered research has increased the importance of patient education and health literacy. The internet is rapidly growing as a source of self-education for patients. However, concern exists over the quality, accuracy, and readability of the information. Our objective was to determine whether the quality, accuracy, and readability of information online about distal radius fractures vary with the search term. This was a prospective evaluation of 3 search engines using 3 different search terms of varying sophistication ("distal radius fracture," "wrist fracture," and "broken wrist"). We evaluated 70 unique Web sites for quality, accuracy, and readability. We used comparative statistics to determine whether the search term affected the quality, accuracy, and readability of the Web sites found. Three orthopedic surgeons independently gauged quality and accuracy of information using a set of predetermined scoring criteria. We evaluated the readability of the Web site using the Fleisch-Kincaid score for reading grade level. There were significant differences in the quality, accuracy, and readability of information found, depending on the search term. We found higher quality and accuracy resulted from the search term "distal radius fracture," particularly compared with Web sites resulting from the term "broken wrist." The reading level was higher than recommended in 65 of the 70 Web sites and was significantly higher when searching with "distal radius fracture" than "wrist fracture" or "broken wrist." There was no correlation between Web site reading level and quality or accuracy. The readability of information about distal radius fractures in most Web sites was higher than the recommended reading level for the general public. The quality and accuracy of the information found significantly varied with the sophistication of the search term used. Physicians, professional societies, and search engines should consider efforts to improve internet access to high-quality information at an understandable level. Copyright © 2012 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Tenório, Josceli Maria; Hummel, Anderson Diniz; Cohrs, Frederico Molina; Sdepanian, Vera Lucia; Pisa, Ivan Torres; de Fátima Marin, Heimar
2013-01-01
Background Celiac disease (CD) is a difficult-to-diagnose condition because of its multiple clinical presentations and symptoms shared with other diseases. Gold-standard diagnostic confirmation of suspected CD is achieved by biopsying the small intestine. Objective To develop a clinical decision–support system (CDSS) integrated with an automated classifier to recognize CD cases, by selecting from experimental models developed using intelligence artificial techniques. Methods A web-based system was designed for constructing a retrospective database that included 178 clinical cases for training. Tests were run on 270 automated classifiers available in Weka 3.6.1 using five artificial intelligence techniques, namely decision trees, Bayesian inference, k-nearest neighbor algorithm, support vector machines and artificial neural networks. The parameters evaluated were accuracy, sensitivity, specificity and area under the ROC curve (AUC). AUC was used as a criterion for selecting the CDSS algorithm. A testing database was constructed including 38 clinical CD cases for CDSS evaluation. The diagnoses suggested by CDSS were compared with those made by physicians during patient consultations. Results The most accurate method during the training phase was the averaged one-dependence estimator (AODE) algorithm (a Bayesian classifier), which showed accuracy 80.0%, sensitivity 0.78, specificity 0.80 and AUC 0.84. This classifier was integrated into the web-based decision–support system. The gold-standard validation of CDSS achieved accuracy of 84.2% and k = 0.68 (p < 0.0001) with good agreement. The same accuracy was achieved in the comparison between the physician’s diagnostic impression and the gold standard k = 0. 64 (p < 0.0001). There was moderate agreement between the physician’s diagnostic impression and CDSS k = 0.46 (p = 0.0008). Conclusions The study results suggest that CDSS could be used to help in diagnosing CD, since the algorithm tested achieved excellent accuracy in differentiating possible positive from negative CD diagnoses. This study may contribute towards developing of a computer-assisted environment to support CD diagnosis. PMID:21917512
Tenório, Josceli Maria; Hummel, Anderson Diniz; Cohrs, Frederico Molina; Sdepanian, Vera Lucia; Pisa, Ivan Torres; de Fátima Marin, Heimar
2011-11-01
Celiac disease (CD) is a difficult-to-diagnose condition because of its multiple clinical presentations and symptoms shared with other diseases. Gold-standard diagnostic confirmation of suspected CD is achieved by biopsying the small intestine. To develop a clinical decision-support system (CDSS) integrated with an automated classifier to recognize CD cases, by selecting from experimental models developed using intelligence artificial techniques. A web-based system was designed for constructing a retrospective database that included 178 clinical cases for training. Tests were run on 270 automated classifiers available in Weka 3.6.1 using five artificial intelligence techniques, namely decision trees, Bayesian inference, k-nearest neighbor algorithm, support vector machines and artificial neural networks. The parameters evaluated were accuracy, sensitivity, specificity and area under the ROC curve (AUC). AUC was used as a criterion for selecting the CDSS algorithm. A testing database was constructed including 38 clinical CD cases for CDSS evaluation. The diagnoses suggested by CDSS were compared with those made by physicians during patient consultations. The most accurate method during the training phase was the averaged one-dependence estimator (AODE) algorithm (a Bayesian classifier), which showed accuracy 80.0%, sensitivity 0.78, specificity 0.80 and AUC 0.84. This classifier was integrated into the web-based decision-support system. The gold-standard validation of CDSS achieved accuracy of 84.2% and k=0.68 (p<0.0001) with good agreement. The same accuracy was achieved in the comparison between the physician's diagnostic impression and the gold standard k=0. 64 (p<0.0001). There was moderate agreement between the physician's diagnostic impression and CDSS k=0.46 (p=0.0008). The study results suggest that CDSS could be used to help in diagnosing CD, since the algorithm tested achieved excellent accuracy in differentiating possible positive from negative CD diagnoses. This study may contribute towards developing of a computer-assisted environment to support CD diagnosis. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
An optical lattice clock with accuracy and stability at the 10(-18) level.
Bloom, B J; Nicholson, T L; Williams, J R; Campbell, S L; Bishof, M; Zhang, X; Zhang, W; Bromley, S L; Ye, J
2014-02-06
Progress in atomic, optical and quantum science has led to rapid improvements in atomic clocks. At the same time, atomic clock research has helped to advance the frontiers of science, affecting both fundamental and applied research. The ability to control quantum states of individual atoms and photons is central to quantum information science and precision measurement, and optical clocks based on single ions have achieved the lowest systematic uncertainty of any frequency standard. Although many-atom lattice clocks have shown advantages in measurement precision over trapped-ion clocks, their accuracy has remained 16 times worse. Here we demonstrate a many-atom system that achieves an accuracy of 6.4 × 10(-18), which is not only better than a single-ion-based clock, but also reduces the required measurement time by two orders of magnitude. By systematically evaluating all known sources of uncertainty, including in situ monitoring of the blackbody radiation environment, we improve the accuracy of optical lattice clocks by a factor of 22. This single clock has simultaneously achieved the best known performance in the key characteristics necessary for consideration as a primary standard-stability and accuracy. More stable and accurate atomic clocks will benefit a wide range of fields, such as the realization and distribution of SI units, the search for time variation of fundamental constants, clock-based geodesy and other precision tests of the fundamental laws of nature. This work also connects to the development of quantum sensors and many-body quantum state engineering (such as spin squeezing) to advance measurement precision beyond the standard quantum limit.
Teaching High-Accuracy Global Positioning System to Undergraduates Using Online Processing Services
ERIC Educational Resources Information Center
Wang, Guoquan
2013-01-01
High-accuracy Global Positioning System (GPS) has become an important geoscientific tool used to measure ground motions associated with plate movements, glacial movements, volcanoes, active faults, landslides, subsidence, slow earthquake events, as well as large earthquakes. Complex calculations are required in order to achieve high-precision…
Training General Education Pupils to Monitor Reading Using Curriculum-Based Measurement Procedures.
ERIC Educational Resources Information Center
Bentz, Johnell; And Others
1990-01-01
Although systematic monitoring of student progress has been associated with improved achievement, few teachers engage in progress monitoring because of testing-time requirements. Compared accuracy of 14 trained fourth- and fifth-grade general education students' curriculum-based reading assessments of second and third graders to accuracy of…
Melanoma segmentation based on deep learning.
Zhang, Xiaoqing
2017-12-01
Malignant melanoma is one of the most deadly forms of skin cancer, which is one of the world's fastest-growing cancers. Early diagnosis and treatment is critical. In this study, a neural network structure is utilized to construct a broad and accurate basis for the diagnosis of skin cancer, thereby reducing screening errors. The technique is able to improve the efficacy for identification of normally indistinguishable lesions (such as pigment spots) versus clinically unknown lesions, and to ultimately improve the diagnostic accuracy. In the field of medical imaging, in general, using neural networks for image segmentation is relatively rare. The existing traditional machine-learning neural network algorithms still cannot completely solve the problem of information loss, nor detect the precise division of the boundary area. We use an improved neural network framework, described herein, to achieve efficacious feature learning, and satisfactory segmentation of melanoma images. The architecture of the network includes multiple convolution layers, dropout layers, softmax layers, multiple filters, and activation functions. The number of data sets can be increased via rotation of the training set. A non-linear activation function (such as ReLU and ELU) is employed to alleviate the problem of gradient disappearance, and RMSprop/Adam are incorporated to optimize the loss algorithm. A batch normalization layer is added between the convolution layer and the activation layer to solve the problem of gradient disappearance and explosion. Experiments, described herein, show that our improved neural network architecture achieves higher accuracy for segmentation of melanoma images as compared with existing processes.
Jia, Cangzhi; Lin, Xin; Wang, Zhiping
2014-06-10
Protein S-nitrosylation is a reversible post-translational modification by covalent modification on the thiol group of cysteine residues by nitric oxide. Growing evidence shows that protein S-nitrosylation plays an important role in normal cellular function as well as in various pathophysiologic conditions. Because of the inherent chemical instability of the S-NO bond and the low abundance of endogenous S-nitrosylated proteins, the unambiguous identification of S-nitrosylation sites by commonly used proteomic approaches remains challenging. Therefore, computational prediction of S-nitrosylation sites has been considered as a powerful auxiliary tool. In this work, we mainly adopted an adapted normal distribution bi-profile Bayes (ANBPB) feature extraction model to characterize the distinction of position-specific amino acids in 784 S-nitrosylated and 1568 non-S-nitrosylated peptide sequences. We developed a support vector machine prediction model, iSNO-ANBPB, by incorporating ANBPB with the Chou's pseudo amino acid composition. In jackknife cross-validation experiments, iSNO-ANBPB yielded an accuracy of 65.39% and a Matthew's correlation coefficient (MCC) of 0.3014. When tested on an independent dataset, iSNO-ANBPB achieved an accuracy of 63.41% and a MCC of 0.2984, which are much higher than the values achieved by the existing predictors SNOSite, iSNO-PseAAC, the Li et al. algorithm, and iSNO-AAPair. On another training dataset, iSNO-ANBPB also outperformed GPS-SNO and iSNO-PseAAC in the 10-fold crossvalidation test.
Schmitter, Daniel; Roche, Alexis; Maréchal, Bénédicte; Ribes, Delphine; Abdulkadir, Ahmed; Bach-Cuadra, Meritxell; Daducci, Alessandro; Granziera, Cristina; Klöppel, Stefan; Maeder, Philippe; Meuli, Reto; Krueger, Gunnar
2014-01-01
Voxel-based morphometry from conventional T1-weighted images has proved effective to quantify Alzheimer's disease (AD) related brain atrophy and to enable fairly accurate automated classification of AD patients, mild cognitive impaired patients (MCI) and elderly controls. Little is known, however, about the classification power of volume-based morphometry, where features of interest consist of a few brain structure volumes (e.g. hippocampi, lobes, ventricles) as opposed to hundreds of thousands of voxel-wise gray matter concentrations. In this work, we experimentally evaluate two distinct volume-based morphometry algorithms (FreeSurfer and an in-house algorithm called MorphoBox) for automatic disease classification on a standardized data set from the Alzheimer's Disease Neuroimaging Initiative. Results indicate that both algorithms achieve classification accuracy comparable to the conventional whole-brain voxel-based morphometry pipeline using SPM for AD vs elderly controls and MCI vs controls, and higher accuracy for classification of AD vs MCI and early vs late AD converters, thereby demonstrating the potential of volume-based morphometry to assist diagnosis of mild cognitive impairment and Alzheimer's disease. PMID:25429357
The design of high precision temperature control system for InGaAs short-wave infrared detector
NASA Astrophysics Data System (ADS)
Wang, Zheng-yun; Hu, Yadong; Ni, Chen; Huang, Lin; Zhang, Aiwen; Sun, Xiao-bing; Hong, Jin
2018-02-01
The InGaAs Short-wave infrared detector is a temperature-sensitive device. Accurate temperature control can effectively reduce the background signal and improve detection accuracy, detection sensitivity, and the SNR of the detection system. Firstly, the relationship between temperature and detection background, NEP is analyzed, the principle of TEC and formula between cooling power, cooling current and hot-cold interface temperature difference are introduced. Then, the high precision constant current drive circuit based on triode voltage control current, and an incremental algorithm model based on deviation tracking compensation and PID control are proposed, which effectively suppresses the temperature overshoot, overcomes the temperature inertia, and has strong robustness. Finally, the detector and temperature control system are tested. Results show that: the lower of detector temperature, the smaller the temperature fluctuation, the higher the detection accuracy and the detection sensitivity. The temperature control system achieves the high temperature control with the temperature control rate is 7 8°C/min and the temperature fluctuation is better than +/-0. 04°C.
A Dynamic Health Assessment Approach for Shearer Based on Artificial Immune Algorithm
Wang, Zhongbin; Xu, Xihua; Si, Lei; Ji, Rui; Liu, Xinhua; Tan, Chao
2016-01-01
In order to accurately identify the dynamic health of shearer, reducing operating trouble and production accident of shearer and improving coal production efficiency further, a dynamic health assessment approach for shearer based on artificial immune algorithm was proposed. The key technologies such as system framework, selecting the indicators for shearer dynamic health assessment, and health assessment model were provided, and the flowchart of the proposed approach was designed. A simulation example, with an accuracy of 96%, based on the collected data from industrial production scene was provided. Furthermore, the comparison demonstrated that the proposed method exhibited higher classification accuracy than the classifiers based on back propagation-neural network (BP-NN) and support vector machine (SVM) methods. Finally, the proposed approach was applied in an engineering problem of shearer dynamic health assessment. The industrial application results showed that the paper research achievements could be used combining with shearer automation control system in fully mechanized coal face. The simulation and the application results indicated that the proposed method was feasible and outperforming others. PMID:27123002
NASA Astrophysics Data System (ADS)
Jing, Ya-Bing; Liu, Chang-Wen; Bi, Feng-Rong; Bi, Xiao-Yang; Wang, Xia; Shao, Kang
2017-07-01
Numerous vibration-based techniques are rarely used in diesel engines fault diagnosis in a direct way, due to the surface vibration signals of diesel engines with the complex non-stationary and nonlinear time-varying features. To investigate the fault diagnosis of diesel engines, fractal correlation dimension, wavelet energy and entropy as features reflecting the diesel engine fault fractal and energy characteristics are extracted from the decomposed signals through analyzing vibration acceleration signals derived from the cylinder head in seven different states of valve train. An intelligent fault detector FastICA-SVM is applied for diesel engine fault diagnosis and classification. The results demonstrate that FastICA-SVM achieves higher classification accuracy and makes better generalization performance in small samples recognition. Besides, the fractal correlation dimension and wavelet energy and entropy as the special features of diesel engine vibration signal are considered as input vectors of classifier FastICA-SVM and could produce the excellent classification results. The proposed methodology improves the accuracy of feature extraction and the fault diagnosis of diesel engines.
The Development of a Deflectometer for Accurate Surface Figure Metrology
NASA Technical Reports Server (NTRS)
Gubarev, Mikhail; Eberhardt, Andrew; Ramsey, Brian; Atkins, Carolyn
2015-01-01
Marshall Space Flight Center is developing the method of direct fabrication for high resolution full-shell x-ray optics. In this technique the x-ray optics axial profiles are figured and polished using a computer-controlled ZeekoIRP600X polishing machine. Based on the Chandra optics fabrication history about one third of the manufacturing time is spent on moving a mirror between fabrication and metrology sites, reinstallation and alignment with either the metrology or fabrication instruments. Also, the accuracy of the alignment significantly affects the ultimate accuracy of the resulting mirrors. In order to achieve higher convergence rate it is highly desirable to have a metrology technique capable of in situ surface figure measurements of the optics under fabrication, so the overall fabrication costs would be greatly reduced while removing the surface errors due to the re-alignment necessary after each metrology cycle during the fabrication. The goal of this feasibility study is to demonstrate if the Phase Measuring Deflectometry can be applied for in situ metrology of full shell x-ray optics. Examples of the full-shell mirror substrates suitable for the direct fabrication
Supervised Learning Applied to Air Traffic Trajectory Classification
NASA Technical Reports Server (NTRS)
Bosson, Christabelle S.; Nikoleris, Tasos
2018-01-01
Given the recent increase of interest in introducing new vehicle types and missions into the National Airspace System, a transition towards a more autonomous air traffic control system is required in order to enable and handle increased density and complexity. This paper presents an exploratory effort of the needed autonomous capabilities by exploring supervised learning techniques in the context of aircraft trajectories. In particular, it focuses on the application of machine learning algorithms and neural network models to a runway recognition trajectory-classification study. It investigates the applicability and effectiveness of various classifiers using datasets containing trajectory records for a month of air traffic. A feature importance and sensitivity analysis are conducted to challenge the chosen time-based datasets and the ten selected features. The study demonstrates that classification accuracy levels of 90% and above can be reached in less than 40 seconds of training for most machine learning classifiers when one track data point, described by the ten selected features at a particular time step, per trajectory is used as input. It also shows that neural network models can achieve similar accuracy levels but at higher training time costs.
Predicting Length of Stay for Obstetric Patients via Electronic Medical Records.
Gao, Cheng; Kho, Abel N; Ivory, Catherine; Osmundson, Sarah; Malin, Bradley A; Chen, You
2017-01-01
Obstetric care refers to the care provided to patients during ante-, intra-, and postpartum periods. Predicting length of stay (LOS) for these patients during their hospitalizations can assist healthcare organizations in allocating hospital resources more effectively and efficiently, ultimately improving maternal care quality and reducing costs to patients. In this paper, we investigate the extent to which LOS can be forecast from a patient's medical history. We introduce a machine learning framework to incorporate a patient's prior conditions (e.g., diagnostic codes) as features in a predictive model for LOS. We evaluate the framework with three years of historical billing data from the electronic medical records of 9188 obstetric patients in a large academic medical center. The results indicate that our framework achieved an average accuracy of 49.3%, which is higher than the baseline accuracy 37.7% (that relies solely on a patient's age). The most predictive features were found to have statistically significant discriminative ability. These features included billing codes for normal delivery (indicative of shorter stay) and antepartum hypertension (indicative of longer stay).
Branch classification: A new mechanism for improving branch predictor performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, P.Y.; Hao, E.; Patt, Y.
There is wide agreement that one of the most significant impediments to the performance of current and future pipelined superscalar processors is the presence of conditional branches in the instruction stream. Speculative execution is one solution to the branch problem, but speculative work is discarded if a branch is mispredicted. For it to be effective, speculative work is discarded if a branch is mispredicted. For it to be effective, speculative execution requires a very accurate branch predictor; 95% accuracy is not good enough. This paper proposes branch classification, a methodology for building more accurate branch predictors. Branch classification allows anmore » individual branch instruction to be associated with the branch predictor best suited to predict its direction. Using this approach, a hybrid branch predictor can be constructed such that each component branch predictor predicts those branches for which it is best suited. To demonstrate the usefulness of branch classification, an example classification scheme is given and a new hybrid predictor is built based on this scheme which achieves a higher prediction accuracy than any branch predictor previously reported in the literature.« less
Superpixel-based graph cuts for accurate stereo matching
NASA Astrophysics Data System (ADS)
Feng, Liting; Qin, Kaihuai
2017-06-01
Estimating the surface normal vector and disparity of a pixel simultaneously, also known as three-dimensional label method, has been widely used in recent continuous stereo matching problem to achieve sub-pixel accuracy. However, due to the infinite label space, it’s extremely hard to assign each pixel an appropriate label. In this paper, we present an accurate and efficient algorithm, integrating patchmatch with graph cuts, to approach this critical computational problem. Besides, to get robust and precise matching cost, we use a convolutional neural network to learn a similarity measure on small image patches. Compared with other MRF related methods, our method has several advantages: its sub-modular property ensures a sub-problem optimality which is easy to perform in parallel; graph cuts can simultaneously update multiple pixels, avoiding local minima caused by sequential optimizers like belief propagation; it uses segmentation results for better local expansion move; local propagation and randomization can easily generate the initial solution without using external methods. Middlebury experiments show that our method can get higher accuracy than other MRF-based algorithms.
The astro-geodetic use of CCD for gravity field refinement
NASA Astrophysics Data System (ADS)
Gerstbach, G.
1996-07-01
The paper starts with a review of geoid projects, where vertical deflections are more effective than gravimetry. In alpine regions the economy of astrogeoids is at least 10 times higher, but many countries do not make use of this fact - presumably because the measurements are not fully automated up to now. Based upon the experiences of astrometry of high satellites and own tests the author analyses the use of CCD for astro-geodetic measurements. Automation and speeding up will be possible in a few years, the latter depending on the observation scheme. Sensor characteristics, cooling and reading out of the devices should be harmonized. Using line sensors in small prism astrolabes, the CCD accuracy will reach the visual one (±0.2″) within 5-10 years. Astrogeoids can be combined ideally with geological data, because vertical variation of rock densities does not cause systematic effects (contrary to gravimetry). So a geoid of ±5 cm accuracy (achieved in Austria and other alpine countries by 5-10 points per 1000 km 2) can be improved to ±2 cm without additional observations and border effects.
The effect of traditional Persian music on the cardiac functioning of young Iranian women.
Abedi, Behzad; Abbasi, Ataollah; Goshvarpour, Atefeh; Khosroshai, Hamid Tayebi; Javanshir, Elnaz
In the past few decades, several studies have reported the physiological effects of listening to music. The physiological effects of different music types on different people are not similar. Therefore, in the present study, we have sought to examine the effects of traditional Persian music on the cardiac function in young women. Twenty-two healthy females participated in this study. ECG signals were recorded in two conditions: rest and music. For each of the 21 ECG signals (15 morphological and six wavelet based feature) features were extracted. SVM classifier was used for the classification of ECG signals during and before the music. The results showed that the mean of heart rate, the mean amplitude of R-wave, T-wave, and P-wave decreased in response to music. Time-frequency analysis revealed that the mean of the absolute values of the detail coefficients at higher scales increased during rest. The overall accuracy of 91.6% was achieved using polynomial kernel and RBF kernel. Using linear kernel, the best result (with the accuracy rate of 100%) was attained. Copyright © 2016. Published by Elsevier B.V.
Lafont, F.; Ribeiro-Palau, R.; Kazazis, D.; Michon, A.; Couturaud, O.; Consejo, C.; Chassagne, T.; Zielinski, M.; Portail, M.; Jouault, B.; Schopfer, F.; Poirier, W.
2015-01-01
Replacing GaAs by graphene to realize more practical quantum Hall resistance standards (QHRS), accurate to within 10−9 in relative value, but operating at lower magnetic fields than 10 T, is an ongoing goal in metrology. To date, the required accuracy has been reported, only few times, in graphene grown on SiC by Si sublimation, under higher magnetic fields. Here, we report on a graphene device grown by chemical vapour deposition on SiC, which demonstrates such accuracies of the Hall resistance from 10 T up to 19 T at 1.4 K. This is explained by a quantum Hall effect with low dissipation, resulting from strongly localized bulk states at the magnetic length scale, over a wide magnetic field range. Our results show that graphene-based QHRS can replace their GaAs counterparts by operating in as-convenient cryomagnetic conditions, but over an extended magnetic field range. They rely on a promising hybrid and scalable growth method and a fabrication process achieving low-electron-density devices. PMID:25891533
A spectral-knowledge-based approach for urban land-cover discrimination
NASA Technical Reports Server (NTRS)
Wharton, Stephen W.
1987-01-01
A prototype expert system was developed to demonstrate the feasibility of classifying multispectral remotely sensed data on the basis of spectral knowledge. The spectral expert was developed and tested with Thematic Mapper Simulator (TMS) data having eight spectral bands and a spatial resolution of 5 m. A knowledge base was developed that describes the target categories in terms of characteristic spectral relationships. The knowledge base was developed under the following assumptions: the data are calibrated to ground reflectance, the area is well illuminated, the pixels are dominated by a single category, and the target categories can be recognized without the use of spatial knowledge. Classification decisions are made on the basis of convergent evidence as derived from applying the spectral rules to a multiple spatial resolution representation of the image. The spectral expert achieved an accuracy of 80-percent correct or higher in recognizing 11 spectral categories in TMS data for the washington, DC, area. Classification performance can be expected to decrease for data that do not satisfy the above assumptions as illustrated by the 63-percent accuracy for 30-m resolution Thematic Mapper data.
GPS aiding of ocean current determination. [Global Positioning System
NASA Technical Reports Server (NTRS)
Mohan, S. N.
1981-01-01
The navigational accuracy of an oceangoing vessel using conventional GPS p-code data is examined. The GPS signal is transmitted over two carrier frequencies in the L-band at 1575.42 and 1227.6 MHz. Achievable navigational uncertainties of differenced positional estimates are presented as a function of the parameters of the problem, with particular attention given to the effect of sea-state, user equivalent range error, uncompensated antenna motion, varying delay intervals, and reduced data rate examined in the unaided mode. The unmodeled errors resulting from satellite ephemeris uncertainties are shown to be negligible for the GPS-NDS (Navigation Development) satellites. Requirements are met in relatively calm seas, but accuracy degradation by a factor of at least 2 must be anticipated in heavier sea states. The aided mode of operation is examined, and it is shown that requirements can be met by using an inertial measurement unit (IMU) to aid the GPS receiver operation. Since the use of an IMU would mean higher costs, direct Doppler from the GPS satellites is presented as a viable alternative.
NASA Astrophysics Data System (ADS)
Izsák, Róbert; Neese, Frank
2013-07-01
The 'chain of spheres' approximation, developed earlier for the efficient evaluation of the self-consistent field exchange term, is introduced here into the evaluation of the external exchange term of higher order correlation methods. Its performance is studied in the specific case of the spin-component-scaled third-order Møller--Plesset perturbation (SCS-MP3) theory. The results indicate that the approximation performs excellently in terms of both computer time and achievable accuracy. Significant speedups over a conventional method are obtained for larger systems and basis sets. Owing to this development, SCS-MP3 calculations on molecules of the size of penicillin (42 atoms) with a polarised triple-zeta basis set can be performed in ∼3 hours using 16 cores of an Intel Xeon E7-8837 processor with a 2.67 GHz clock speed, which represents a speedup by a factor of 8-9 compared to the previously most efficient algorithm. Thus, the increased accuracy offered by SCS-MP3 can now be explored for at least medium-sized molecules.
Local Analysis Approach for Short Wavelength Geopotential Variations
NASA Astrophysics Data System (ADS)
Bender, P. L.
2009-12-01
The value of global spherical harmonic analyses for determining 15 day to 30 day changes in the Earth's gravity field has been demonstrated extensively using data from the GRACE mission and previous missions. However, additional useful information appears to be obtainable from local analyses of the data. A number of such analyses have been carried out by various groups. In the energy approximation, the changes in the height of the satellite altitude geopotential can be determined from the post-fit changes in the satellite separation during individual one-revolution arcs of data from a GRACE-type pair of satellites in a given orbit. For a particular region, it is assumed that short wavelength spatial variations for the arcs crossing that region during a time T of interest would be used to determine corrections to the spherical harmonic results. The main issue in considering higher measurement accuracy in future missions is how much improvement in spatial resolution can be achieved. For this, the shortest wavelengths that can be determined are the most important. And, while the longer wavelength variations are affected by mass distribution changes over much of the globe, the shorter wavelength ones hopefully will be determined mainly by more local changes in the mass distribution. Future missions are expected to have much higher accuracy for measuring changes in the satellite separation than GRACE. However, how large an improvement in the derived results in hydrology will be achieved is still very much a matter of study, particularly because of the effects of uncertainty in the time variations in the atmospheric and oceanic mass distributions. To be specific, it will be assumed that improving the spatial resolution in continental regions away from the coastlines is the objective, and that the satellite altitude is in the range of roughly 290 to 360 km made possible for long missions by drag-free operation. The advantages of putting together the short wavelength results from different arcs crossing the region can be seen most easily for an orbit with moderate inclination, such as 50 to 65 deg., so that the crossing angle between south-to-north (S-N) and N-S passes is fairly large over most regions well away from the poles. In that case, after filtering to pass the shorter wavelengths, the results for a given time interval can be combined to give the short wavelength W-E variations in the geopotential efficiently. For continents with extensive meteorological measurements available, like Europe and North America, a very rough guess at the surface mass density variation uncertainties is about 3 kg/m^2. This is based on the apparent accuracy of carefully calibrated surface pressure measurements. If a substantial part of the resulting uncertainties in the geopotential height at satellite altitude are at wavelengths less than about 1,500 km, they will dominate the measurement uncertainty at short spatial wavelengths for a GRACE-type mission with laser interferometry. This would be the case, even if the uncertainty in the atmospheric and oceanic mass distribution at large distances has a fairly small effect. However, the geopotential accuracy would still be substantially better than for the results achievable with a microwave ranging system.
A new software for prediction of femoral neck fractures.
Testi, Debora; Cappello, Angelo; Sgallari, Fiorella; Rumpf, Martin; Viceconti, Marco
2004-08-01
Femoral neck fractures are an important clinical, social and economic problem. Even if many different attempts have been carried out to improve the accuracy predicting the fracture risk, it was demonstrated in retrospective studies that the standard clinical protocol achieves an accuracy of about 65%. A new procedure was developed including for the prediction not only bone mineral density but also geometric and femoral strength information and achieving an accuracy of about 80% in a previous retrospective study. Aim of the present work was to re-engineer research-based procedures and develop a real-time software for the prediction of the risk for femoral fracture. The result was efficient, repeatable and easy to use software for the evaluation of the femoral neck fracture risk to be inserted in the daily clinical practice providing a useful tool for the improvement of fracture prediction.
Simplified stereo-optical ultrasound plane calibration
NASA Astrophysics Data System (ADS)
Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan
2013-03-01
Image guided therapy is a natural concept and commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist using electromagnetic tracking of the ultrasound probe as well as the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve a higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State of the art methods are based on a complex series of error prone coordinate system transformations which makes them susceptible to error accumulation. By reducing the amount of calibration steps to a single calibration procedure we provide a calibration method that is equivalent, yet not prone to error accumulation. It requires a linear calibration object and is validated on three datasets utilizing di erent calibration objects: a 6mm metal bar and a 1:25mm biopsy needle were used for experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we are able to achieve higher accuracy results while additionally reducing the overall calibration complexity. Ke