Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions
Sükösd, Zsuzsanna; Swenson, M. Shel; Kjems, Jørgen; Heitsch, Christine E.
2013-01-01
Recent advances in RNA structure determination include using data from high-throughput probing experiments to improve thermodynamic prediction accuracy. We evaluate the extent and nature of improvements in data-directed predictions for a diverse set of 16S/18S ribosomal sequences using a stochastic model of experimental SHAPE data. The average accuracy for 1000 data-directed predictions always improves over the original minimum free energy (MFE) structure. However, the amount of improvement varies with the sequence, exhibiting a correlation with MFE accuracy. Further analysis of this correlation shows that accurate MFE base pairs are typically preserved in a data-directed prediction, whereas inaccurate ones are not. Thus, the positive predictive value of common base pairs is consistently higher than the directed prediction accuracy. Finally, we confirm sequence dependencies in the directability of thermodynamic predictions and investigate the potential for greater accuracy improvements in the worst performing test sequence. PMID:23325843
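For readers unfamiliar with the accuracy measures used in this and several later abstracts: prediction quality for RNA secondary structure is typically scored by sensitivity and positive predictive value (PPV) over base pairs. A minimal sketch, with made-up structures rather than data from the paper:

```python
# Minimal sketch: sensitivity and PPV of predicted RNA base pairs
# against a reference structure. Structures are sets of (i, j) pairs;
# the data below are made up for illustration.

def pair_metrics(predicted, reference):
    """Return (sensitivity, PPV) of predicted base pairs."""
    true_positives = len(predicted & reference)
    sensitivity = true_positives / len(reference) if reference else 0.0
    ppv = true_positives / len(predicted) if predicted else 0.0
    return sensitivity, ppv

reference = {(1, 20), (2, 19), (3, 18), (5, 15)}   # accepted structure
predicted = {(1, 20), (2, 19), (4, 16), (5, 15)}   # one sampled prediction
sens, ppv = pair_metrics(predicted, reference)
print(f"sensitivity = {sens:.2f}, PPV = {ppv:.2f}")  # both 0.75 here
```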
Improving substructure identification accuracy of shear structures using virtual control system
NASA Astrophysics Data System (ADS)
Zhang, Dongyu; Yang, Yang; Wang, Tingqiang; Li, Hui
2018-02-01
Substructure identification is a powerful tool for identifying the parameters of a complex structure. Previously, the authors developed an inductive substructure identification method for shear structures. The identification error analysis showed that the identification accuracy of this method is significantly influenced by the magnitudes of two key structural responses near a certain frequency; if these responses are unfavorable, the method cannot provide accurate estimates. In this paper, a novel method is proposed to improve substructure identification accuracy by introducing a virtual control system (VCS) into the structure. A virtual control system is a self-balanced system consisting of some control devices and a set of self-balanced forces; the self-balanced forces counterbalance the forces that the control devices apply on the structure. The control devices are combined with the structure to form a controlled structure that replaces the original structure in the substructure identification, and the self-balanced forces are treated as known external excitations to the controlled structure. By optimally tuning the VCS's parameters, the dynamic characteristics of the controlled structure can be changed such that the original structural responses become more favorable for the substructure identification and, thus, the identification accuracy is improved. A numerical example of a 6-story shear structure is used to verify the effectiveness of the VCS-based controlled substructure identification method. Finally, shake table tests are conducted on a 3-story structural model to verify the efficacy of the VCS in enhancing the identification accuracy of the structural parameters.
An improved semi-implicit method for structural dynamics analysis
NASA Technical Reports Server (NTRS)
Park, K. C.
1982-01-01
A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.
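Park's augmented algorithm itself is not given in the abstract; the sketch below shows only the general flavor of a semi-implicit step for M x'' + C x' + K x = f, using the standard symplectic Euler update on an illustrative two-degree-of-freedom system:

```python
import numpy as np

def semi_implicit_step(x, v, M, C, K, f, dt):
    # velocity update is explicit in x; displacement update uses the NEW
    # velocity -- the hallmark of semi-implicit (symplectic) schemes
    a = np.linalg.solve(M, f - C @ v - K @ x)
    v_new = v + dt * a
    x_new = x + dt * v_new
    return x_new, v_new

M = np.diag([1.0, 1.0])                       # two-DOF spring-mass chain
K = np.array([[2.0, -1.0], [-1.0, 1.0]])
C = 0.02 * K                                  # light proportional damping
x, v = np.array([0.1, 0.0]), np.zeros(2)
for _ in range(1000):
    x, v = semi_implicit_step(x, v, M, C, K, np.zeros(2), dt=0.01)
print(x)                                      # free vibration, slowly decaying
```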
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2012-01-01
In the formulations of earlier Displacement Transfer Functions for structure shape predictions, the surface strain distributions along a strain-sensing line were represented with piecewise linear functions. To improve the shape-prediction accuracies, Improved Displacement Transfer Functions were formulated using piecewise nonlinear strain representations. Through discretization of an embedded beam (the depth-wise cross section of a structure along a strain-sensing line) into multiple small domains, piecewise nonlinear functions were used to describe the surface strain distributions along the discretized embedded beam. This piecewise approach enabled the piecewise integrations of the embedded beam curvature equations to yield slope and deflection equations in recursive forms. The resulting Improved Displacement Transfer Functions, written in summation forms, were expressed in terms of beam geometrical parameters and surface strains along the strain-sensing line. By feeding the surface strains into the Improved Displacement Transfer Functions, structural deflections can be calculated at multiple points for mapping out the overall structural deformed shapes for visual display. The shape-prediction accuracies of the Improved Displacement Transfer Functions were then examined against finite-element-calculated deflections for different tapered cantilever tubular beams. It was found that by using the piecewise nonlinear strain representations, the shape-prediction accuracies could be greatly improved, especially for highly tapered cantilever tubular beams.
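The underlying mechanics can be illustrated compactly: surface strain eps(x) relates to bending curvature through the half-depth c(x), and two cumulative integrations give slope and deflection. The sketch below uses simple trapezoidal (piecewise linear) integration of made-up strains; the paper's contribution is precisely the piecewise nonlinear refinement of this step:

```python
import numpy as np

x = np.linspace(0.0, 2.0, 41)       # sensing stations along the beam [m]
c = np.full_like(x, 0.05)           # half-depth at each station [m]
eps = 1e-3 * (1.0 - x / x[-1])      # illustrative surface strain profile

kappa = eps / c                     # bending curvature at each station
# trapezoidal cumulative integration: curvature -> slope -> deflection
slope = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
defl = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))
print(f"tip deflection ~ {defl[-1] * 1000:.1f} mm")
```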
Protein homology model refinement by large-scale energy optimization.
Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David
2018-03-20
Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
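A toy picture of the search problem described above: on a rugged energy surface, a local minimizer started from an approximate ("homology") model gets trapped in a false minimum, while a broader search escapes it. The 1-D energy function below is purely illustrative, not the Rosetta energy:

```python
import numpy as np
from scipy.optimize import minimize, basinhopping

def energy(v):
    # shallow quadratic basin near x = 1 plus a deep, narrow well at x = 3
    x = v[0]
    return 0.05 * (x - 1.0) ** 2 - np.exp(-4.0 * (x - 3.0) ** 2)

start = np.array([0.5])                    # approximate starting model
local = minimize(energy, start)            # gets stuck near x = 1
broad = basinhopping(energy, start, niter=200)
print(f"local minimizer : x = {local.x[0]:.2f}, E = {local.fun:.3f}")
print(f"broader search  : x = {broad.x[0]:.2f}, E = {broad.fun:.3f}")
```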
Zhang, Wei; Ma, Hong; Yang, Simon X.
2016-01-01
In this research, an improved psychrometer is developed to solve practical issues arising in the relative humidity measurement of challenging drying environments for meat manufacturing in agricultural and agri-food industries. The design in this research focused on the structure of the improved psychrometer, signal conversion, and calculation methods. The experimental results showed the effect of varying psychrometer structure on relative humidity measurement accuracy. An industrial application to dry-cured meat products demonstrated the effective performance of the improved psychrometer being used as a relative humidity measurement sensor in meat-drying rooms. In a drying environment for meat manufacturing, the achieved measurement accuracy for relative humidity using the improved psychrometer was ±0.6%. The system test results showed that the improved psychrometer can provide reliable and long-term stable relative humidity measurements with high accuracy in the drying system of meat products. PMID:26999161
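For context, the basic psychrometric calculation that any wet/dry-bulb instrument performs is sketched below, using the common Magnus saturation-pressure formula and a typical ventilated-psychrometer coefficient; the paper's improved instrument has its own structure, signal conversion, and calibration:

```python
import math

def saturation_vapor_pressure(t_c):
    """Magnus approximation, in hPa, for temperature t_c in deg C."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(t_dry, t_wet, pressure_hpa=1013.25, a_coeff=6.62e-4):
    """Psychrometric equation: e = es(Twet) - A * P * (Tdry - Twet)."""
    e = saturation_vapor_pressure(t_wet) - a_coeff * pressure_hpa * (t_dry - t_wet)
    return 100.0 * e / saturation_vapor_pressure(t_dry)

print(f"RH = {relative_humidity(20.0, 15.0):.1f}%")  # roughly 59% for this pair
```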
Improved finite element methodology for integrated thermal structural analysis
NASA Technical Reports Server (NTRS)
Dechaumphai, P.; Thornton, E. A.
1982-01-01
An integrated thermal-structural finite element approach for efficient coupling of thermal and structural analysis is presented. New thermal finite elements which yield exact nodal and element temperatures for one dimensional linear steady state heat transfer problems are developed. A nodeless variable formulation is used to establish improved thermal finite elements for one dimensional nonlinear transient and two dimensional linear transient heat transfer problems. The thermal finite elements provide detailed temperature distributions without using additional element nodes and permit a common discretization with lower order congruent structural finite elements. The accuracy of the integrated approach is evaluated by comparisons with analytical solutions and conventional finite element thermal structural analyses for a number of academic and more realistic problems. Results indicate that the approach provides a significant improvement in the accuracy and efficiency of thermal stress analysis for structures with complex temperature distributions.
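As a baseline for comparison, the conventional approach the paper improves on can be sketched as a 1D linear steady-state conduction problem assembled from two-node elements; for this linear case the nodal temperatures are exact, which is the property the new thermal elements generalize:

```python
import numpy as np

n_el, length, k_cond = 4, 1.0, 1.0           # elements, bar length, conductivity
h = length / n_el
K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):                        # assemble two-node element matrices
    K[e:e + 2, e:e + 2] += (k_cond / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])

T_left, T_right = 100.0, 0.0                 # prescribed end temperatures
# move the known boundary temperatures to the right-hand side
f = -K[1:-1, 0] * T_left - K[1:-1, -1] * T_right
T_inner = np.linalg.solve(K[1:-1, 1:-1], f)
print(T_inner)                               # matches the exact linear profile
```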
A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures
2014-01-01
Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. Iterative HFold and all data used in this work are freely available at http://www.cs.ubc.ca/~hjabbari/software.php. PMID:24884954
A community detection algorithm based on structural similarity
NASA Astrophysics Data System (ADS)
Guo, Xuchao; Hao, Xia; Liu, Yaqiong; Zhang, Li; Wang, Lu
2017-09-01
In order to further improve the efficiency and accuracy of community detection, a new algorithm named SSTCA (community detection based on structural similarity with threshold) is proposed. In this algorithm, structural similarities are taken as the weights of edges, and a threshold k is used to remove edges whose weights are less than the threshold, improving computational efficiency. Tests were done on Zachary's network, the Dolphins social network, and the Football dataset with the proposed algorithm, and compared with the GN and SSNCA algorithms. The results show that the new algorithm is superior to the other algorithms in accuracy for dense networks, and its operating efficiency is clearly improved.
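A simplified sketch of the similarity-plus-threshold idea (SSTCA itself is more involved): weight each edge by the structural similarity of its endpoints, drop edges below a threshold, and read dense cores from what remains. Zachary's karate club network, bundled with networkx, stands in for the test data:

```python
import math
import networkx as nx

def structural_similarity(g, u, v):
    """Cosine similarity of the closed neighborhoods of u and v."""
    nu, nv = set(g[u]) | {u}, set(g[v]) | {v}
    return len(nu & nv) / math.sqrt(len(nu) * len(nv))

g = nx.karate_club_graph()                   # stand-in for Zachary's network
threshold = 0.5
kept = [(u, v) for u, v in g.edges()
        if structural_similarity(g, u, v) >= threshold]
communities = list(nx.connected_components(g.edge_subgraph(kept)))
print(f"{len(communities)} dense cores found")
```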
On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods
NASA Technical Reports Server (NTRS)
Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.
2003-01-01
Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
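The variance-reduction idea can be shown in a few lines: the first-order Taylor expansion built from one sensitivity derivative has a known mean, so subtracting it as a control variate removes the dominant part of the sampling variance. The response function below is a toy stand-in for an expensive analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.0, 0.1, 10_000
x = rng.normal(mu, sigma, n)                 # uncertain input samples

plain = np.exp(x)                            # plain Monte Carlo samples of f(X)
# control variate: f'(mu) * (X - mu) has zero mean, so subtracting it
# leaves the estimator unbiased while cancelling first-order variation
corrected = plain - np.exp(mu) * (x - mu)

print(f"plain MC std err      : {plain.std(ddof=1) / np.sqrt(n):.2e}")
print(f"derivative-aided error: {corrected.std(ddof=1) / np.sqrt(n):.2e}")
```

With these numbers the standard error drops by roughly an order of magnitude, mirroring the improvement the abstract reports.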
Paans, Wolter; Sermeus, Walter; Nieweg, Roos Mb; Krijnen, Wim P; van der Schans, Cees P
2012-08-01
This paper reports a study about the effect of knowledge sources, such as handbooks, an assessment format and a predefined record structure for diagnostic documentation, as well as the influence of knowledge, disposition toward critical thinking and reasoning skills, on the accuracy of nursing diagnoses. Knowledge sources can support nurses in deriving diagnoses. A nurse's disposition toward critical thinking and reasoning skills is also thought to influence the accuracy of his or her nursing diagnoses. A randomised factorial design was used in 2008-2009 to determine the effect of knowledge sources. We used the following instruments to assess the influence of ready knowledge, disposition, and reasoning skills on the accuracy of diagnoses: (1) a knowledge inventory, (2) the California Critical Thinking Disposition Inventory, and (3) the Health Science Reasoning Test. Nurses (n = 249) were randomly assigned to one of four factorial groups, and were instructed to derive diagnoses based on an assessment interview with a simulated patient/actor. The use of a predefined record structure resulted in a significantly higher accuracy of nursing diagnoses. A regression analysis reveals that almost half of the variance in the accuracy of diagnoses is explained by the use of a predefined record structure, a nurse's age and the reasoning skills of 'deduction' and 'analysis'. Improving nurses' dispositions toward critical thinking and reasoning skills, and the use of a predefined record structure, improves accuracy of nursing diagnoses.
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.
A stereo remote sensing feature selection method based on artificial bee colony algorithm
NASA Astrophysics Data System (ADS)
Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi
2014-05-01
To improve the efficiency of stereo information for remote sensing classification, a stereo remote sensing feature selection method based on the artificial bee colony algorithm is proposed in this paper. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on three-dimensional structure and optical characteristics, respectively. First, the three-dimensional structure characteristics can be analyzed with 3D-Zernike descriptors (3DZD). However, different parameters of the 3DZD describe different complexities of three-dimensional structure, and they need to be optimally selected for the various objects on the ground. Second, the features representing the optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features carries a great deal of redundant information, and this redundancy may not improve the classification accuracy and may even cause adverse effects. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization framework for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve the optimization problem. Experimental results show that the proposed method can effectively improve both computational efficiency and classification accuracy.
Iterative refinement of structure-based sequence alignments by Seed Extension
Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook
2009-01-01
Background Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithm in many structure alignment programs. PMID:19589133
Protein Secondary Structure Prediction Using AutoEncoder Network and Bayes Classifier
NASA Astrophysics Data System (ADS)
Wang, Leilei; Cheng, Jinyong
2018-03-01
Protein secondary structure prediction belongs to bioinformatics and is an important research topic. In this paper, we propose a new method for protein secondary structure prediction using a Bayes classifier and an autoencoder network. Our experiments cover the construction of the model, the choice of parameters, and the associated algorithms. The data set is the typical CB513 protein data set. Accuracy is assessed by 3-fold cross-validation, from which the Q3 accuracy is obtained. The results illustrate that the autoencoder network improves the prediction accuracy of protein secondary structure.
Structural reanalysis via a mixed method [using Taylor series for accuracy improvement]
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1975-01-01
A study is made of the approximate structural reanalysis technique based on the use of Taylor series expansion of response variables in terms of design variables in conjunction with the mixed method. In addition, comparisons are made with two reanalysis techniques based on the displacement method. These techniques are the Taylor series expansion and the modified reduced basis. It is shown that the use of the reciprocals of the sizing variables as design variables (which is the natural choice in the mixed method) can result in a substantial improvement in the accuracy of the reanalysis technique. Numerical results are presented for a space truss structure.
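Why reciprocal sizing variables help can be seen with one line of algebra: a bar's displacement u(A) = PL/(EA) is exactly linear in 1/A, so a first-order Taylor expansion in beta = 1/A is exact, while an expansion in A itself is only a local approximation. Illustrative numbers, not from the paper:

```python
P, L, E = 1.0e4, 2.0, 7.0e10                 # load [N], length [m], modulus [Pa]
u = lambda A: P * L / (E * A)                # exact axial displacement

A0, A1 = 1.0e-4, 0.5e-4                      # baseline and redesigned areas [m^2]
# Taylor in the direct variable A: u(A1) ~ u(A0) + du/dA * (A1 - A0)
taylor_direct = u(A0) - (P * L / (E * A0**2)) * (A1 - A0)
# Taylor in the reciprocal variable beta = 1/A: exact for this response
taylor_recip = u(A0) + (P * L / E) * (1.0 / A1 - 1.0 / A0)

print(f"exact         : {u(A1):.6e}")
print(f"Taylor in A   : {taylor_direct:.6e}")   # noticeable error
print(f"Taylor in 1/A : {taylor_recip:.6e}")    # matches exactly
```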
Rapid condition assessment of structural condition after a blast using state-space identification
NASA Astrophysics Data System (ADS)
Eskew, Edward; Jang, Shinae
2015-04-01
After a blast event, it is important to quickly quantify the structural damage for emergency operations. In order to improve the speed, accuracy, and efficiency of condition assessments after a blast, the authors previously developed a methodology for rapid assessment of the structural condition of a building after a blast. The method involved determining a post-event equivalent stiffness matrix using vibration measurements and a finite element (FE) model. A structural model was built for the damaged structure based on the equivalent stiffness, and inter-story drifts from the blast were determined using numerical simulations, with forces determined from the blast parameters. The inter-story drifts were then compared to blast design conditions to assess the structure's damage. This method still involved engineering judgment in determining significant frequencies, which can lead to error, especially with noisy measurements. In an effort to improve accuracy and automate the process, this paper investigates a similar method of rapid condition assessment using subspace state-space identification. The accuracy of the method is tested using a benchmark structural model, as well as experimental testing. The blast damage assessments are validated using pressure-impulse (P-I) diagrams, which present the condition limits across blast parameters. Comparisons between P-I diagrams generated using the true system parameters and the equivalent parameters show the accuracy of the rapid condition-based blast assessments.
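A compact sketch of the flavor of state-space identification (an Eigensystem-Realization-style construction from impulse-response data; the paper applies subspace identification to measured vibration data). The toy below recovers the natural frequency of a single-degree-of-freedom system:

```python
import numpy as np

dt, wn, zeta = 0.01, 2 * np.pi * 2.0, 0.02      # true system: 2 Hz, 2% damping
t = np.arange(0.0, 5.0, dt)
wd = wn * np.sqrt(1.0 - zeta**2)
h = np.exp(-zeta * wn * t) * np.sin(wd * t)     # impulse response samples

m = 200                                          # Hankel block size
H0 = np.array([h[i:i + m] for i in range(m)])    # Hankel matrix
H1 = np.array([h[i + 1:i + m + 1] for i in range(m)])  # one-step shift
U, s, Vt = np.linalg.svd(H0)
r = 2                                            # identified model order
Ur, Sr, Vr = U[:, :r], np.diag(np.sqrt(s[:r])), Vt[:r]
A = np.linalg.pinv(Ur @ Sr) @ H1 @ np.linalg.pinv(Sr @ Vr)  # state matrix
freq = np.abs(np.log(np.linalg.eigvals(A))) / dt / (2 * np.pi)
print(f"identified frequency ~ {freq[0]:.2f} Hz (true 2.00 Hz)")
```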
NASA Astrophysics Data System (ADS)
Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan
2015-10-01
Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of laser stripe center extraction, based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray-level distribution of the laser stripe, the Gaussian fitting structural similarity is evaluated to provide a threshold value for center compensation. Then, using the relationships between the gray-level distribution of the laser stripe and the multiple source factors, a compensation method for center extraction is presented. Finally, measurement experiments on a large-scale aviation composite component are carried out. The experimental results verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
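The baseline step the compensation builds on is a Gaussian fit of the stripe intensity profile, whose fitted mean gives the stripe center. A sketch with synthetic data; the paper's structural-similarity compensation itself is not shown:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width, offset):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2) + offset

pixels = np.arange(50, dtype=float)
true_profile = gaussian(pixels, amp=200.0, center=23.4, width=3.0, offset=10.0)
noisy = true_profile + np.random.default_rng(1).normal(0.0, 5.0, pixels.size)

# initial guess from the raw peak, then refine by least squares
p0 = [noisy.max(), pixels[noisy.argmax()], 2.0, 0.0]
popt, _ = curve_fit(gaussian, pixels, noisy, p0=p0)
print(f"estimated stripe center: {popt[1]:.2f} px (true 23.40)")
```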
Yang, Jing; He, Bao-Ji; Jang, Richard; Zhang, Yang; Shen, Hong-Bin
2015-01-01
Motivation: Cysteine-rich proteins cover many important families in nature but there are currently no methods specifically designed for modeling the structure of these proteins. The accuracy of disulfide connectivity pattern prediction, particularly for the proteins of higher-order connections, e.g. >3 bonds, is too low to effectively assist structure assembly simulations. Results: We propose a new hierarchical order reduction protocol called Cyscon for disulfide-bonding prediction. The most confident disulfide bonds are first identified and bonding prediction is then focused on the remaining cysteine residues based on SVR training. Compared with purely machine learning-based approaches, Cyscon improved the average accuracy of connectivity pattern prediction by 21.9%. For proteins with more than 5 disulfide bonds, Cyscon improved the accuracy by 585% on the benchmark set of PDBCYS. When applied to 158 non-redundant cysteine-rich proteins, Cyscon predictions helped increase (or decrease) the TM-score (or RMSD) of the ab initio QUARK modeling by 12.1% (or 14.4%). This result demonstrates a new avenue to improve the ab initio structure modeling for cysteine-rich proteins. Availability and implementation: http://www.csbio.sjtu.edu.cn/bioinf/Cyscon/ Contact: zhng@umich.edu or hbshen@sjtu.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26254435
Bio-knowledge based filters improve residue-residue contact prediction accuracy.
Wozniak, P P; Pelc, J; Skrzypecki, M; Vriend, G; Kotulska, M
2018-05-29
Residue-residue contact prediction through direct coupling analysis has reached impressive accuracy, but still higher accuracy will be needed to allow for routine modelling of protein structures. One way to improve prediction accuracy is to filter predicted contacts using knowledge about the particular protein of interest or knowledge about protein structures in general. We focus on the latter and discuss a set of filters that can be used to remove false positive contact predictions. Each filter depends on one or a few cut-off parameters, for which the filter performance was investigated. Combining all filters with default parameters resulted, for a test set of 851 protein domains, in the removal of 29% of the predictions, of which 92% were indeed false positives. All data and scripts are available from http://comprec-lin.iiar.pwr.edu.pl/FPfilter/. Contact: malgorzata.kotulska@pwr.edu.pl. Supplementary data are available at Bioinformatics online.
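The following are illustrative stand-ins for this kind of knowledge-based filter; the actual filters and cut-off values are defined in the paper and its scripts:

```python
from collections import Counter

def sequence_separation_filter(contacts, min_sep=5):
    """Drop pairs too close in sequence to be informative contacts."""
    return [(i, j, p) for i, j, p in contacts if abs(i - j) >= min_sep]

def degree_cap_filter(contacts, max_partners=10):
    """Drop lowest-scoring contacts of residues with too many partners."""
    kept, degree = [], Counter()
    for i, j, p in sorted(contacts, key=lambda c: -c[2]):
        if degree[i] < max_partners and degree[j] < max_partners:
            kept.append((i, j, p))
            degree[i] += 1
            degree[j] += 1
    return kept

preds = [(3, 7, 0.9), (3, 40, 0.8), (12, 80, 0.6)]   # (res_i, res_j, score)
print(degree_cap_filter(sequence_separation_filter(preds)))
```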
g-Factor of heavy ions: a new access to the fine structure constant.
Shabaev, V M; Glazov, D A; Oreshkina, N S; Volotka, A V; Plunien, G; Kluge, H-J; Quint, W
2006-06-30
A possibility for a determination of the fine structure constant in experiments on the bound-electron g-factor is examined. It is found that studying a specific difference of the g-factors of B- and H-like ions of the same spinless isotope in the Pb region, to the currently accessible experimental accuracy of 7 × 10⁻¹⁰, would lead to a determination of the fine structure constant to an accuracy better than that of the currently accepted value. Further improvements of the experimental and theoretical accuracy could provide a value of the fine structure constant which is several times more precise than the currently accepted one.
Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.
Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe
2017-10-01
Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, stacked autoencoder Levenberg-Marquardt model, which is a type of deep architecture of neural network approach aiming to improve forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of the traffic flow forecasting model with a deep learning approach is presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.
Development of CFRP mirrors for space telescopes
NASA Astrophysics Data System (ADS)
Utsunomiya, Shin; Kamiya, Tomohiro; Shimizu, Ryuzo
2013-09-01
CFRP (carbon fiber reinforced plastics) has superior properties of high specific elasticity and low thermal expansion for satellite telescope structures. However, difficulties in achieving the required surface accuracy and in ensuring stability in orbit have discouraged the application of CFRP for main mirrors. We have developed ultra-lightweight, high-precision CFRP mirrors of sandwich construction, composed of CFRP skins and CFRP cores, using a replica technique. The shape accuracy of the demonstrated mirrors, 150 mm in diameter, was 0.8 μm RMS (root mean square), and the surface roughness was 5 nm RMS as fabricated. Further optimization of the fabrication process conditions to improve surface accuracy was studied using flat sandwich panels; the surface accuracy of flat CFRP sandwich panels 150 mm square was improved to a flatness of 0.2 μm RMS with a surface roughness of 6 nm RMS. The surface accuracy versus size of the trial models indicates a high likelihood that mirrors over 1 m in size can be fabricated with a surface accuracy of 1 μm. The feasibility of CFRP mirrors for low-temperature applications was examined for the JASMINE project as an example, and the stability of the surface accuracy of CFRP mirrors against temperature and moisture is discussed.
Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.
Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu
2018-01-19
Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is directly pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, where the mean absolute error increases only from 3.402 mm to 5.847 mm with the increment of the prediction step; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model results in superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for the early warning of bridge health monitoring systems based on sensor data using sensing technology.
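A pipeline sketch under stated assumptions: a scalar Kalman filter denoises the series, statsmodels' ARIMA models the trend, and a GARCH model from the third-party `arch` package captures heteroscedastic residuals. The model orders and noise levels below are illustrative, not the paper's tuned values:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0.0, 0.2, 500))      # synthetic deformation [mm]
obs = truth + rng.normal(0.0, 1.0, 500)           # noisy GNSS observations

# Scalar Kalman filter with a random-walk state model.
q, r, x, p = 0.04, 1.0, obs[0], 1.0               # process/measurement noise
denoised = []
for z in obs:
    p += q                                        # predict
    k = p / (p + r)                               # Kalman gain
    x += k * (z - x)                              # update with measurement z
    p *= 1.0 - k
    denoised.append(x)

trend = ARIMA(denoised, order=(1, 1, 1)).fit()    # linear recursive trend model
garch = arch_model(trend.resid, vol="Garch", p=1, q=1).fit(disp="off")
print(trend.forecast(steps=5))                    # five-step deformation forecast
print(garch.params)                               # fitted volatility parameters
```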
Shape accuracy optimization for cable-rib tension deployable antenna structure with tensioned cables
NASA Astrophysics Data System (ADS)
Liu, Ruiwei; Guo, Hongwei; Liu, Rongqiang; Wang, Hongxiang; Tang, Dewei; Song, Xiaoke
2017-11-01
Shape accuracy is of substantial importance in deployable structures as the demand for large-scale deployable structures in various fields, especially in aerospace engineering, increases. The main purpose of this paper is to present a shape accuracy optimization method to find the optimal pretensions for the desired shape of a cable-rib tension deployable antenna structure with tensioned cables. First, an analysis model of the deployable structure is established using the finite element method. In this model, geometrical nonlinearity is considered for the cable and beam elements. Flexible deformations of the deployable structure under the action of the cable network and the tensioned cables are subsequently analyzed separately. Moreover, the influence of the pretension of the tensioned cables on the natural frequencies is studied. Based on the results, a genetic algorithm is used to find a set of reasonable pretensions and thus minimize structural deformation under a constraint on the first natural frequency. Finally, numerical simulations are presented to analyze the deployable structure under two kinds of constraints. Results show that the shape accuracy and natural frequencies of the deployable structure can be effectively improved by pretension optimization.
RNA secondary structure prediction with pseudoknots: Contribution of algorithm versus energy model.
Jabbari, Hosna; Wark, Ian; Montemagno, Carlo
2018-01-01
RNA is a biopolymer with various applications inside the cell and in biotechnology. The structure of an RNA molecule largely determines its function and is essential to guide nanostructure design. Since experimental structure determination is time-consuming and expensive, accurate computational prediction of RNA structure is of great importance. Prediction of RNA secondary structure is relatively simpler than prediction of its tertiary structure and provides information about the tertiary structure; therefore, RNA secondary structure prediction has received attention in the past decades. Numerous methods with different folding approaches have been developed for RNA secondary structure prediction. While methods for prediction of RNA pseudoknot-free structures (structures with no crossing base pairs) have greatly improved in accuracy, methods for prediction of RNA pseudoknotted secondary structures (structures with crossing base pairs) still have room for improvement. A long-standing question for improving the prediction accuracy of RNA pseudoknotted secondary structure is whether to focus on the prediction algorithm or the underlying energy model, as there is a trade-off between the computational cost of the prediction algorithm and the generality of the method. The aim of this work is to argue that, when comparing different methods for RNA pseudoknotted structure prediction, the combination of algorithm and energy model should be considered, and a method should not be judged superior or inferior to others if they do not use the same scoring model. We demonstrate that while the folding approach is important in structure prediction, it is not the only important factor in the prediction accuracy of a given method, as the underlying energy model is also of great value. We therefore encourage researchers to pay particular attention when comparing methods with different energy models.
Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign
2007-01-01
Background Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign while simultaneously providing a small improvement in the structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly fewer computational resources. Conclusion Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer length sequences. The revised Dynalign code is freely available for download. PMID:17445273
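The constraint-building step can be sketched in numpy: combine alignment and insertion posteriors into co-incidence probabilities and keep the position pairs above a threshold. The matrix contents below are random placeholders; in Dynalign these posteriors come from the hidden Markov model, and the additive combination follows the paper's definition:

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2 = 8, 9                                        # lengths of the two sequences
p_align = rng.dirichlet(np.ones(n2), size=n1) * 0.9  # P(position i aligns to j)
p_ins1 = rng.random((n1, n2)) * 0.05                 # insertion posteriors, seq 1
p_ins2 = rng.random((n1, n2)) * 0.05                 # insertion posteriors, seq 2

co_incidence = p_align + p_ins1 + p_ins2             # additive combination
allowed = co_incidence >= 0.1                        # thresholded constraint mask
print(f"{allowed.sum()} of {allowed.size} position pairs permitted")
```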
Development of Accurate Structure for Mounting and Aligning Thin-Foil X-Ray Mirrors
NASA Technical Reports Server (NTRS)
Heilmann, Ralf K.
2001-01-01
The goal of this work was to improve the assembly accuracy for foil x-ray optics as produced by the high-energy astrophysics group at the NASA Goddard Space Flight Center. Two main design choices lead to an alignment concept that was shown to improve accuracy well within the requirements currently pursued by the Constellation-X Spectroscopy X-Ray Telescope (SXT).
Privacy-Preserving Accountable Accuracy Management Systems (PAAMS)
NASA Astrophysics Data System (ADS)
Thomas, Roshan K.; Sandhu, Ravi; Bertino, Elisa; Arpinar, Budak; Xu, Shouhuai
We argue for the design of “Privacy-preserving Accountable Accuracy Management Systems (PAAMS)”. The designs of such systems recognize from the onset that accuracy, accountability, and privacy management are intertwined. As such, these systems have to dynamically manage the tradeoffs between these (often conflicting) objectives. For example, accuracy in such systems can be improved by providing better accountability links between structured and unstructured information. Further, accuracy may be enhanced if access to private information is allowed in controllable and accountable ways. Our proposed approach involves three key elements. First, a model to link unstructured information such as that found in email, image and document repositories with structured information such as that in traditional databases. Second, a model for accuracy management and entity disambiguation by proactively preventing, detecting and tracing errors in information bases. Third, a model to provide privacy-governed operation as accountability and accuracy are managed.
Improving the Accuracy of Structural Fatigue Life Tracking Through Dynamic Strain Sensor Calibration
2011-09-01
... high-strength, corrosion-resistant 7075-T6 alloy, and included hinge lugs, a bulkhead, spars, and wing skins that were fastened together using welds, rivets ... greater than 10% under the same loading conditions [1]. These differences must be accounted for to have acceptable accuracy levels in the ultimate ... (See also ADA580921, International Workshop on Structural Health Monitoring: From Condition-based ...)
NASA Astrophysics Data System (ADS)
Vereschaka, Alexey; Mokritskii, Boris; Mokritskaya, Elena; Sharipov, Oleg; Oganyan, Maksim
2018-03-01
The paper addresses the challenges of applying two-component end mills, which combine a carbide cutting part with a shank made of a cheaper structural material. Strains and deformations of the composite mills were calculated with the finite element method and compared with those of solid carbide mills. The study also included a comparative analysis of the machining accuracy achieved with monolithic mills and with two-component mills having various shank materials. Cutting tests in milling an aluminum alloy with monolithic and two-component end mills bearing specially developed multilayer composite nano-structured coatings showed that such coatings can reduce strains and, correspondingly, deformations, which can improve the accuracy of machining. Thus, the application of two-component end mills with multilayer composite nano-structured coatings can reduce the cost of machining while maintaining or even improving the tool life and machining accuracy parameters.
NASA Astrophysics Data System (ADS)
Park, M.; Stenstrom, M. K.
2004-12-01
Recognizing urban information from satellite imagery is problematic due to the diverse features and dynamic changes of urban land use. The use of Landsat imagery for urban land use classification involves inherent uncertainty due to its spatial resolution and the low separability among land uses. To address this uncertainty, we investigated the performance of Bayesian networks for classifying urban land use, since Bayesian networks provide a quantitative way of handling uncertainty and have been used successfully in many areas. In this study, we developed optimized networks for urban land use classification from Landsat ETM+ images of the Marina del Rey area, based on the USGS land cover/use classification level III. The networks started from a tree structure based on mutual information between variables, and links were added to improve accuracy. This methodology offers several advantages: (1) The network structure shows the dependency relationships between variables, so the class node value can be predicted even when particular band information is missing due to sensor system error; the missing information can be inferred from other dependent bands. (2) The network structure indicates which variables are important for the classification, information that is not available from conventional classification methods such as neural networks and maximum likelihood classification. In our case, for example, bands 1, 5 and 6 are the most important inputs in determining the land use of each pixel. (3) The networks can be reduced to those input variables important for classification, which shrinks the problem without considering all possible variables. We also examined the effect of incorporating ancillary data: geospatial information such as the X and Y coordinate values of each pixel and DEM data, and vegetation indices such as NDVI and the Tasseled Cap transformation. The results showed that the locational information improved overall accuracy (81%) and the kappa coefficient (76%), and lowered the omission and commission errors compared with using only spectral data (accuracy 71%, kappa coefficient 62%). Incorporating DEM data did not significantly improve overall accuracy (74%) or the kappa coefficient (66%) but lowered the omission and commission errors. Incorporating NDVI did not substantially improve the overall accuracy (72%) or the kappa coefficient (65%). Including the Tasseled Cap transformation reduced the accuracy (accuracy 70%, kappa 61%). Therefore, the additional information from the DEM and vegetation indices was not as useful as the locational ancillary data.
New insights from cluster analysis methods for RNA secondary structure prediction
Rogers, Emily; Heitsch, Christine
2016-01-01
A widening gap exists between the best practices for RNA secondary structure prediction developed by computational researchers and the methods used in practice by experimentalists. Minimum free energy (MFE) predictions, although broadly used, are outperformed by methods which sample from the Boltzmann distribution and data mine the results. In particular, moving beyond the single structure prediction paradigm yields substantial gains in accuracy. Furthermore, the largest improvements in accuracy and precision come from viewing secondary structures not at the base pair level but at lower granularity/higher abstraction. This suggests that random errors affecting precision and systematic ones affecting accuracy are both reduced by this “fuzzier” view of secondary structures. Thus experimentalists who are willing to adopt a more rigorous, multilayered approach to secondary structure prediction by iterating through these levels of granularity will be much better able to capture fundamental aspects of RNA base pairing. PMID:26971529
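The sample-and-cluster workflow advocated here can be miniaturized as follows: structures drawn from the Boltzmann ensemble are compared by base-pair distance and grouped by hierarchical clustering. The "samples" below are tiny made-up base-pair sets, not real Boltzmann samples:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

samples = [
    {(1, 10), (2, 9), (3, 8)},
    {(1, 10), (2, 9)},
    {(4, 12), (5, 11)},
    {(4, 12), (5, 11), (3, 13)},
]
n = len(samples)
d = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d[i, j] = d[j, i] = len(samples[i] ^ samples[j])  # base-pair distance

labels = fcluster(linkage(squareform(d), method="average"),
                  t=3, criterion="distance")
print(labels)  # two clusters: samples {0, 1} and {2, 3}
```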
Sixty-five years of the long march in protein secondary structure prediction: the final stretch?
Yang, Yuedong; Gao, Jianzhao; Wang, Jihua; Heffernan, Rhys; Hanson, Jack; Paliwal, Kuldip; Zhou, Yaoqi
2018-01-01
Protein secondary structure prediction began in 1951 when Pauling and Corey predicted helical and sheet conformations for the protein polypeptide backbone even before the first protein structure was determined. Sixty-five years later, powerful new methods breathe new life into this field. The highest three-state accuracy without relying on structure templates is now at 82–84%, a number unthinkable just a few years ago. These improvements came from increasingly larger databases of protein sequences and structures for training, the use of template secondary structure information and more powerful deep learning techniques. As we approach the theoretical limit of three-state prediction (88–90%), alternatives to secondary structure prediction (prediction of backbone torsion angles and Cα-atom-based angles and torsion angles) not only have more room for further improvement but also allow direct prediction of three-dimensional fragment structures with constantly improved accuracy. About 20% of all 40-residue fragments in a database of 1199 non-redundant proteins have <6 Å root-mean-squared distance from the native conformations by SPIDER2. More powerful deep learning methods with improved capability of capturing long-range interactions begin to emerge as the next generation of techniques for secondary structure prediction. The time has come to finish off the final stretch of the long march towards protein secondary structure prediction. PMID:28040746
MULTISCALE TENSOR ANISOTROPIC FILTERING OF FLUORESCENCE MICROSCOPY FOR DENOISING MICROVASCULATURE.
Prasath, V B S; Pelapur, R; Glinskii, O V; Glinsky, V V; Huxley, V H; Palaniappan, K
2015-04-01
Fluorescence microscopy images are contaminated by noise, and improving image quality without blurring vascular structures by filtering is an important step in automatic image analysis. The application of interest here is to automatically and accurately extract the structural components of the microvascular system from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale tensor anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherence-enhancing flow with a planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal in membrane structures. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods, yielding better microvasculature segmentation.
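For intuition, the classic scalar Perona-Malik diffusion below shows the edge-preserving principle on a 2D step edge; the paper's model is a richer multiscale tensor (coherence-enhancing) scheme in 3D, which this sketch does not implement:

```python
import numpy as np

def perona_malik(img, n_iter=30, kappa=0.2, dt=0.2):
    """Scalar Perona-Malik diffusion: smooth noise, keep strong edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four nearest neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(4)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0   # step edge, a crude "vessel wall"
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
smooth = perona_malik(noisy)
print(f"MAE vs clean: before {np.abs(noisy - clean).mean():.3f}, "
      f"after {np.abs(smooth - clean).mean():.3f}")
```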
Olejník, Peter; Nosal, Matej; Havran, Tomas; Furdova, Adriana; Cizmar, Maros; Slabej, Michal; Thurzo, Andrej; Vitovic, Pavol; Klvac, Martin; Acel, Tibor; Masura, Jozef
2017-01-01
To evaluate the accuracy of the three-dimensional (3D) printing of cardiovascular structures, and to explore whether utilisation of 3D printed heart replicas can improve surgical and catheter interventional planning in patients with complex congenital heart defects. Between December 2014 and November 2015 we fabricated eight cardiovascular models based on computed tomography data in patients with complex spatial anatomical relationships of cardiovascular structures. A Bland-Altman analysis was used to assess the accuracy of 3D printing by comparing dimension measurements at analogous anatomical locations between the printed models and digital imagery data, as well as between printed models and in vivo surgical findings. The contribution of 3D printed heart models to perioperative planning improvement was evaluated in the four most representative patients. Bland-Altman analysis confirmed the high accuracy of 3D cardiovascular printing. Each printed model offered an improved spatial anatomical orientation of cardiovascular structures. Current 3D printers can produce authentic copies of patients' cardiovascular systems from computed tomography data. The use of 3D printed models can facilitate surgical or catheter interventional procedures in patients with complex congenital heart defects due to better preoperative planning and intraoperative orientation.
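Bland-Altman agreement, as used above, summarizes paired measurements by their mean difference (bias) and 95% limits of agreement. A minimal sketch with made-up dimensions, not the study's measurements:

```python
import numpy as np

ct = np.array([24.1, 18.3, 31.0, 12.7, 40.2, 22.5])        # CT-derived [mm]
printed = np.array([24.4, 18.1, 31.3, 12.9, 40.0, 22.8])   # 3D print [mm]

diff = printed - ct
bias = diff.mean()                                 # systematic offset
loa = 1.96 * diff.std(ddof=1)                      # half-width of 95% limits
print(f"bias = {bias:+.2f} mm, "
      f"95% limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}] mm")
```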
Akanno, E C; Schenkel, F S; Sargolzaei, M; Friendship, R M; Robinson, J A B
2014-10-01
Genetic improvement of pigs in tropical developing countries has focused on imported exotic populations, which have been subjected to intensive selection with attendant high population-wide linkage disequilibrium (LD). Presently, indigenous pig populations with limited selection and low LD are being considered for improvement. Given that the infrastructure for genetic improvement using conventional BLUP selection methods is lacking, a genome-wide selection (GS) program was proposed for developing countries. A simulation study was conducted to evaluate the option of using a 60 K SNP panel and the observed amount of LD in the exotic and indigenous pig populations. Several scenarios were evaluated, including different sizes and structures of the training and validation populations, different selection methods, and the long-term accuracy of GS in different population/breeding structures and traits. The training set included a previously selected exotic population, an unselected indigenous population, and their crossbreds. Traits studied included number born alive (NBA), average daily gain (ADG) and back fat thickness (BFT). The ridge regression method was used to train the prediction model. The results showed that accuracies of genomic breeding values (GBVs) in the range of 0.30 (NBA) to 0.86 (BFT) in the validation population are expected if high-density marker panels are utilized. The GS method improved the accuracy of breeding values better than the pedigree-based approach for traits with low heritability and in young animals with no performance data. A crossbred training population performed better than purebreds when validation was in populations with a similar or a different structure from that in the training set. Genome-wide selection holds promise for genetic improvement of pigs in the tropics. © 2014 Blackwell Verlag GmbH.
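The ridge regression training step has a compact generic form: fit marker effects on genotyped, phenotyped animals, then predict GBVs for young animals and score accuracy as the correlation with true breeding values. Marker data below are simulated, and the sample sizes, heritability, and ridge penalty are illustrative:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n_train, n_valid, n_snp = 1000, 200, 5000
X = rng.binomial(2, 0.3, size=(n_train + n_valid, n_snp)).astype(float)
effects = rng.normal(0.0, 0.05, n_snp) * (rng.random(n_snp) < 0.02)  # few causal SNPs
tbv = X @ effects                                       # true breeding values
y = tbv + rng.normal(0.0, tbv.std() * 1.5, tbv.size)    # phenotypes, low heritability

model = Ridge(alpha=100.0).fit(X[:n_train], y[:n_train])
gbv = model.predict(X[-n_valid:])                       # young, un-phenotyped animals
accuracy = np.corrcoef(gbv, tbv[-n_valid:])[0, 1]
print(f"GBV accuracy in validation set: {accuracy:.2f}")
```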
Pediatric Surgeon-Directed Wound Classification Improves Accuracy
Zens, Tiffany J.; Rusy, Deborah A.; Gosain, Ankush
2015-01-01
Background Surgical wound classification (SWC) communicates the degree of contamination in the surgical field and is used to stratify risk of surgical site infection and compare outcomes among centers. We hypothesized that changing from nurse-directed to surgeon-directed SWC during a structured operative debrief would improve accuracy of documentation. Methods An IRB-approved retrospective chart review was performed. Two time periods were defined: initially, SWC was determined and recorded by the circulating nurse (Pre-Debrief, 6/2012-5/2013); then, allowing six months for adoption and education, we implemented a structured operative debriefing including surgeon-directed SWC (Post-Debrief, 1/2014-8/2014). Accuracy of SWC was determined for four commonly performed Pediatric General Surgery operations: inguinal hernia repair (clean), gastrostomy +/− Nissen fundoplication (clean-contaminated), appendectomy without perforation (contaminated), and appendectomy with perforation (dirty). Results 183 cases Pre-Debrief and 142 cases Post-Debrief met inclusion criteria. No differences between time periods were noted with regard to patient demographics, ASA class, or case mix. Accuracy of wound classification improved Post-Debrief (42% vs. 58.5%, p=0.003). Pre-Debrief, 26.8% of cases were overestimated or underestimated by more than one wound class, vs. 3.5% of cases Post-Debrief (p<0.001). Interestingly, the majority of Post-Debrief contaminated cases were incorrectly classified as clean-contaminated. Conclusions Implementation of a structured operative debrief including surgeon-directed SWC improves the percentage of correctly classified wounds and decreases the degree of inaccuracy in incorrectly classified cases. However, following implementation of the debriefing, we still observed a 41.5% rate of incorrect documentation, most notably in contaminated cases, indicating that further education and process improvement is needed. PMID:27020829
Counteracting structural errors in ensemble forecast of influenza outbreaks.
Pei, Sen; Shaman, Jeffrey
2017-10-13
For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.
[Accuracy improvement of spectral classification of crop using microwave backscatter data].
Jia, Kun; Li, Qiang-Zi; Tian, Yi-Chen; Wu, Bing-Fang; Zhang, Fei-Fei; Meng, Ji-Hua
2011-02-01
In the present study, the use of VV-polarization microwave backscatter data for improving the accuracy of spectral crop classification is investigated. Classification accuracies using different classifiers based on the fusion of HJ satellite multi-spectral data and Envisat ASAR VV backscatter data are compared. The results indicate that the fusion data take full advantage of the spectral information of the HJ multi-spectral data and the structure sensitivity of the ASAR VV-polarization data. The fusion data enlarge the spectral difference among different classes and improve crop classification accuracy. The classification accuracy using fusion data can be increased by 5 percent compared to the single HJ data. Furthermore, ASAR VV-polarization data are sensitive to the non-agrarian areas of planted fields, and including VV-polarization data in the classification can effectively distinguish field borders. VV-polarization data combined with multi-spectral data for crop classification broadens the application of satellite data and has the potential for wider use in agriculture.
Improving the accuracy of macromolecular structure refinement at 7 Å resolution.
Brunger, Axel T; Adams, Paul D; Fromme, Petra; Fromme, Raimund; Levitt, Michael; Schröder, Gunnar F
2012-06-06
In X-ray crystallography, molecular replacement and subsequent refinement is challenging at low resolution. We compared refinement methods using synchrotron diffraction data of photosystem I at 7.4 Å resolution, starting from different initial models with increasing deviations from the known high-resolution structure. Standard refinement spoiled the initial models, moving them further away from the true structure and leading to high R(free)-values. In contrast, DEN refinement improved even the most distant starting model as judged by R(free), atomic root-mean-square differences to the true structure, significance of features not included in the initial model, and connectivity of electron density. The best protocol was DEN refinement with initial segmented rigid-body refinement. For the most distant initial model, the fraction of atoms within 2 Å of the true structure improved from 24% to 60%. We also found a significant correlation between R(free) values and the accuracy of the model, suggesting that R(free) is useful even at low resolution. Copyright © 2012 Elsevier Ltd. All rights reserved.
Impact of improved information on the structure of world grain trade. [wheat]
NASA Technical Reports Server (NTRS)
1979-01-01
The benefits to be derived by the United States from improvements in global grain crop forecasting capability are discussed. The improvements in forecasting accuracy, which are a result of the use of satellite technology in conjunction with existing ground based estimating procedures are described. The degree of forecasting accuracy to be obtained from satellite technology is also examined. Specific emphasis is placed on wheat production in seven countries/regions: the United States; Canada; Argentina; Australia; Western Europe; the USSR; and all other countries in a group.
NASA Astrophysics Data System (ADS)
Lieu, Richard
2018-01-01
A hierarchy of statistics of increasing sophistication and accuracy is proposed, to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware, rather it operates at the software level, with the help of high precision computers, to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the bolometric flux measurement of a radio source.
Protein contact prediction using patterns of correlation.
Hamilton, Nicholas; Burrage, Kevin; Ragan, Mark A; Huber, Thomas
2004-09-01
We describe a new method for using neural networks to predict residue contact pairs in a protein. The main inputs to the neural network are a set of 25 measures of correlated mutation between all pairs of residues in two "windows" of size 5 centered on the residues of interest. While the individual pair-wise correlations are a relatively weak predictor of contact, by training the network on windows of correlation the accuracy of prediction is significantly improved. The neural network is trained on a set of 100 proteins and then tested on a disjoint set of 1033 proteins of known structure. An average predictive accuracy of 21.7% is obtained taking the best L/2 predictions for each protein, where L is the sequence length. Taking the best L/10 predictions gives an average accuracy of 30.7%. The predictor is also tested on a set of 59 proteins from the CASP5 experiment. The accuracy is found to be relatively consistent across different sequence lengths, but to vary widely according to the secondary structure. Predictive accuracy is also found to improve by using multiple sequence alignments containing many sequences to calculate the correlations. Copyright 2004 Wiley-Liss, Inc.
Improving EEG-Based Driver Fatigue Classification Using Sparse-Deep Belief Networks.
Chai, Rifai; Ling, Sai Ho; San, Phyo Phyo; Naik, Ganesh R; Nguyen, Tuan N; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T
2017-01-01
This paper presents an improvement of classification performance for electroencephalography (EEG)-based driver fatigue classification between fatigue and alert states, with data collected from 43 participants. The system employs autoregressive (AR) modeling as the feature extraction algorithm and sparse-deep belief networks (sparse-DBN) as the classification algorithm. In contrast to other classifiers, sparse-DBN is a semi-supervised learning method which combines unsupervised learning for modeling features in the pre-training layer and supervised learning for classification in the following layer. The sparsity in sparse-DBN is achieved with a regularization term that penalizes deviations of the expected activation of hidden units from a fixed low level; this prevents the network from overfitting and enables it to learn low-level as well as high-level structures. For comparison, artificial neural network (ANN), Bayesian neural network (BNN), and original deep belief network (DBN) classifiers are used. The classification results show that using the AR feature extractor and DBN classifier, the system achieves a sensitivity of 90.8%, a specificity of 90.4%, an accuracy of 90.6%, and an area under the receiver operating curve (AUROC) of 0.94, compared to the ANN (sensitivity of 80.8%, specificity of 77.8%, accuracy of 79.3%, AUROC of 0.83) and BNN classifiers (sensitivity of 84.3%, specificity of 83%, accuracy of 83.6%, AUROC of 0.87). Using the sparse-DBN classifier, the classification performance improved further, with a sensitivity of 93.9%, a specificity of 92.3%, and an accuracy of 93.1% with an AUROC of 0.96. Overall, the sparse-DBN classifier improved accuracy by 13.8, 9.5, and 2.5% over the ANN, BNN, and DBN classifiers, respectively.
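A minimal sketch of the AR feature-extraction step, assuming Yule-Walker estimation on a single EEG epoch; the epoch length, AR order, and signal below are placeholders rather than the study's settings.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_features(x, order=8):
    """AR coefficients of one epoch via the Yule-Walker equations."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)  # autocorrelation
    return solve_toeplitz(r[:order], r[1:order + 1])

epoch = np.random.default_rng(1).normal(size=512)  # stand-in for one EEG epoch
print(ar_features(epoch))                          # feature vector for a classifier
```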
An adaptive deep-coupled GNSS/INS navigation system with hybrid pre-filter processing
NASA Astrophysics Data System (ADS)
Wu, Mouyan; Ding, Jicheng; Zhao, Lin; Kang, Yingyao; Luo, Zhibin
2018-02-01
The deep coupling of a global navigation satellite system (GNSS) with an inertial navigation system (INS) can provide accurate and reliable navigation information. There are several kinds of deeply-coupled structures. These can be divided mainly into coherent and non-coherent pre-filter based structures, each of which has its own advantages and disadvantages, especially in accuracy and robustness. In this paper, the existing pre-filters of the deeply-coupled structures are first analyzed and modified to improve them. Then, an adaptive GNSS/INS deeply-coupled algorithm with hybrid pre-filter processing is proposed to combine the advantages of the coherent and non-coherent structures. An adaptive hysteresis controller is designed to implement the hybrid pre-filter processing strategy. The simulation and vehicle test results show that the adaptive deeply-coupled algorithm with hybrid pre-filter processing can effectively improve navigation accuracy and robustness, especially in a GNSS-challenged environment.
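The abstract does not specify the switching signal; a minimal sketch of a hysteresis controller, assuming (hypothetically) that the switch is driven by carrier-to-noise density (C/N0) with separate engage/release thresholds:

```python
def select_prefilter(cn0_dbhz, current, engage=28.0, release=32.0):
    """Hysteresis switch: fall back to the non-coherent pre-filter in weak
    signal conditions, return to the coherent one only after recovery."""
    if current == 'coherent' and cn0_dbhz < engage:
        return 'non-coherent'
    if current == 'non-coherent' and cn0_dbhz > release:
        return 'coherent'
    return current                    # inside the hysteresis band: no switch

mode = 'coherent'
for cn0 in [35, 30, 27, 29, 31, 33]:  # simulated C/N0 trace (dB-Hz)
    mode = select_prefilter(cn0, mode)
    print(cn0, mode)
```

The gap between the two thresholds prevents rapid chattering between pre-filters when the signal quality hovers near a single threshold.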
Lessons in molecular recognition. 2. Assessing and improving cross-docking accuracy.
Sutherland, Jeffrey J; Nandigam, Ravi K; Erickson, Jon A; Vieth, Michal
2007-01-01
Docking methods are used to predict the manner in which a ligand binds to a protein receptor. Many studies have assessed the success rate of programs in self-docking tests, whereby a ligand is docked into the protein structure from which it was extracted. Cross-docking, or using a protein structure from a complex containing a different ligand, provides a more realistic assessment of a docking program's ability to reproduce X-ray results. In this work, cross-docking was performed with CDocker, Fred, and Rocs using multiple X-ray structures for eight proteins (two kinases, one nuclear hormone receptor, one serine protease, two metalloproteases, and two phosphodiesterases). While average cross-docking accuracy is not encouraging, it is shown that using the protein structure from the complex that contains the bound ligand most similar to the docked ligand increases docking accuracy for all methods ("similarity selection"). Identifying the most successful protein conformer ("best selection") and similarity selection substantially reduce the difference between self-docking and average cross-docking accuracy. We identify universal predictors of docking accuracy (i.e., showing consistent behavior across most protein-method combinations), and show that models for predicting docking accuracy built using these parameters can be used to select the most appropriate docking method.
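The "similarity selection" rule can be written in a few lines; the fingerprint sets, PDB codes, and Tanimoto similarity below are generic stand-ins for whatever ligand descriptors one prefers:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprint bit sets."""
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def similarity_select(docked_fp, complexes):
    """Choose the receptor conformer whose co-crystallized ligand is most
    similar to the ligand about to be docked."""
    return max(complexes, key=lambda c: tanimoto(docked_fp, c['ligand_fp']))

complexes = [                                    # hypothetical conformer library
    {'pdb': '1aaa', 'ligand_fp': {1, 4, 9, 12}},
    {'pdb': '2bbb', 'ligand_fp': {1, 4, 7}},
]
print(similarity_select({1, 4, 9}, complexes)['pdb'])
```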
Liu, Jiamin; Kabadi, Suraj; Van Uitert, Robert; Petrick, Nicholas; Deriche, Rachid; Summers, Ronald M.
2011-01-01
Purpose: Surface curvatures are important geometric features for the computer-aided analysis and detection of polyps in CT colonography (CTC). However, the general kernel approach for curvature computation can yield erroneous results for small polyps and for polyps that lie on haustral folds. Those erroneous curvatures will reduce the performance of polyp detection. This paper presents an analysis of the effect of interpolation on curvature estimation for thin structures and its application to computer-aided detection of small polyps in CTC. Methods: The authors demonstrated that a simple technique, image interpolation, can improve the accuracy of curvature estimation for thin structures and thus significantly improve the sensitivity of small polyp detection in CTC. Results: Our experiments showed that the merits of interpolating included more accurate curvature values for simulated data and isolation of polyps near folds for clinical data. After testing on a large clinical data set, it was observed that linear, quadratic B-spline, and cubic B-spline interpolations all significantly improved the sensitivity of small polyp detection. Conclusions: Image interpolation can improve the accuracy of curvature estimation for thin structures and thus improve the computer-aided detection of small polyps in CTC. PMID:21859029
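A sketch of the interpolate-then-differentiate idea, assuming cubic B-spline upsampling with scipy and using eigenvalues of the smoothed intensity Hessian as a common curvature proxy; the zoom factor, smoothing scale, and volume are illustrative:

```python
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(volume, zoom=2, sigma=1.5):
    """Upsample a CT subvolume, then compute Hessian eigenvalues per voxel."""
    up = ndimage.zoom(volume, zoom, order=3)     # cubic B-spline interpolation
    smooth = ndimage.gaussian_filter(up, sigma)
    hess = np.empty(smooth.shape + (3, 3))
    for i, gi in enumerate(np.gradient(smooth)):
        for j, gij in enumerate(np.gradient(gi)):
            hess[..., i, j] = gij
    return np.linalg.eigvalsh(hess)              # sorted eigenvalues per voxel

vol = np.random.rand(16, 16, 16)                 # stand-in subvolume
print(hessian_eigenvalues(vol).shape)            # (32, 32, 32, 3)
```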
NASA Astrophysics Data System (ADS)
Huesca Martinez, M.; Garcia, M.; Roth, K. L.; Casas, A.; Ustin, S.
2015-12-01
There is a well-established need within the remote sensing community for improved estimation of canopy structure and understanding of its influence on the retrieval of leaf biochemical properties. The aim of this project was to evaluate the estimation of structural properties directly from hyperspectral data, with the broader goal that these might be used to constrain retrievals of canopy chemistry. We used NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) to discriminate different canopy structural types, defined in terms of biomass, canopy height and vegetation complexity, and compared them to estimates of these properties measured by LiDAR data. We tested a large number of optical metrics, including single narrow band reflectance and 1st derivative, sub-pixel cover fractions, narrow-band indices, spectral absorption features, and Principal Component Analysis components. Canopy structural types were identified and classified from different forest types by integrating structural traits measured by optical metrics using the Random Forest (RF) classifier. The classification accuracy was above 70% in most of the vegetation scenarios. The best overall accuracy was achieved for hardwood forest (>80% accuracy) and the lowest accuracy was found in mixed forest (~70% accuracy). Furthermore, similarly high accuracy was found when the RF classifier was applied to a spatially independent dataset, showing significant portability for the method used. Results show that all spectral regions played a role in canopy structure assessment, thus the whole spectrum is required. Furthermore, optical metrics derived from AVIRIS proved to be a powerful technique for structural attribute mapping. This research illustrates the potential for using optical properties to distinguish several canopy structural types in different forest types, and these may be used to constrain quantitative measurements of absorbing properties in future research.
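A minimal sklearn sketch of the Random Forest step, with simulated optical metrics standing in for the AVIRIS-derived features and four hypothetical structural classes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))        # stand-in optical metrics per sample
y = rng.integers(0, 4, size=600)      # four hypothetical structural types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X_tr, y_tr)
print(rf.score(X_te, y_te))                        # overall accuracy
print(np.argsort(rf.feature_importances_)[-5:])    # most informative metrics
```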
Improved method for predicting protein fold patterns with ensemble classifiers.
Chen, W; Liu, X; Huang, Y; Jiang, Y; Zou, Q; Lin, C
2012-01-27
Protein folding is recognized as a critical problem in the field of biophysics in the 21st century. Predicting protein-folding patterns is challenging due to the complex structure of proteins. In an attempt to solve this problem, we employed ensemble classifiers to improve prediction accuracy. In our experiments, 188-dimensional features were extracted based on the composition and physical-chemical property of proteins and 20-dimensional features were selected using a coupled position-specific scoring matrix. Compared with traditional prediction methods, these methods were superior in terms of prediction accuracy. The 188-dimensional feature-based method achieved 71.2% accuracy in five cross-validations. The accuracy rose to 77% when we used a 20-dimensional feature vector. These methods were used on recent data, with 54.2% accuracy. Source codes and dataset, together with web server and software tools for prediction, are available at: http://datamining.xmu.edu.cn/main/~cwc/ProteinPredict.html.
Ogorzalek, Tadeusz L; Hura, Greg L; Belsom, Adam; Burnett, Kathryn H; Kryshtafovych, Andriy; Tainer, John A; Rappsilber, Juri; Tsutakawa, Susan E; Fidelis, Krzysztof
2018-03-01
Experimental data offers empowering constraints for structure prediction. These constraints can be used to filter equivalently scored models or more powerfully within optimization functions toward prediction. In CASP12, Small Angle X-ray Scattering (SAXS) and Cross-Linking Mass Spectrometry (CLMS) data, measured on an exemplary set of novel fold targets, were provided to the CASP community of protein structure predictors. As solution-based techniques, SAXS and CLMS can efficiently measure states of the full-length sequence in its native solution conformation and assembly. However, this experimental data did not substantially improve prediction accuracy judged by fits to crystallographic models. One issue, beyond intrinsic limitations of the algorithms, was a disconnect between crystal structures and solution-based measurements. Our analyses show that many targets had substantial percentages of disordered regions (up to 40%) or were multimeric or both. Thus, solution measurements of flexibility and assembly support variations that may confound prediction algorithms trained on crystallographic data and expecting globular fully-folded monomeric proteins. Here, we consider the CLMS and SAXS data collected, the information in these solution measurements, and the challenges in incorporating them into computational prediction. As improvement opportunities were only partly realized in CASP12, we provide guidance on how data from the full-length biological unit and the solution state can better aid prediction of the folded monomer or subunit. We furthermore describe strategic integrations of solution measurements with computational prediction programs with the aim of substantially improving foundational knowledge and the accuracy of computational algorithms for biologically-relevant structure predictions for proteins in solution. © 2018 Wiley Periodicals, Inc.
Land-cover classification in a moist tropical region of Brazil with Landsat TM imagery.
Li, Guiying; Lu, Dengsheng; Moran, Emilio; Hetrick, Scott
2011-01-01
This research aims to improve land-cover classification accuracy in a moist tropical region in Brazil by examining the use of different remote sensing-derived variables and classification algorithms. Different scenarios based on Landsat Thematic Mapper (TM) spectral data and derived vegetation indices and textural images, and different classification algorithms - maximum likelihood classification (MLC), artificial neural network (ANN), classification tree analysis (CTA), and object-based classification (OBC), were explored. The results indicated that a combination of vegetation indices as extra bands into Landsat TM multispectral bands did not improve the overall classification performance, but the combination of textural images was valuable for improving vegetation classification accuracy. In particular, the combination of both vegetation indices and textural images into TM multispectral bands improved overall classification accuracy by 5.6% and kappa coefficient by 6.25%. Comparison of the different classification algorithms indicated that CTA and ANN have poor classification performance in this research, but OBC improved primary forest and pasture classification accuracies. This research indicates that use of textural images or use of OBC are especially valuable for improving the vegetation classes such as upland and liana forest classes having complex stand structures and having relatively large patch sizes.
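To illustrate how an index band and a texture band might be derived and stacked with the spectral bands (scikit-image >= 0.19 API; the arrays are random stand-ins, and in practice the GLCM would be computed per moving window rather than per scene):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

red = np.random.rand(64, 64).astype(np.float32)   # stand-in for TM band 3
nir = np.random.rand(64, 64).astype(np.float32)   # stand-in for TM band 4

ndvi = (nir - red) / (nir + red + 1e-9)           # vegetation index band

q = np.uint8(255 * (nir - nir.min()) / np.ptp(nir))        # quantize for GLCM
glcm = graycomatrix(q, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
print(graycoprops(glcm, 'contrast')[0, 0])        # one textural measure

stack = np.dstack([red, nir, ndvi])               # bands + index for a classifier
print(stack.shape)
```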
Improving coding accuracy in an academic practice.
Nguyen, Dana; O'Mara, Heather; Powell, Robert
2017-01-01
Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study populations: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small group case review, and large group discussion. Main outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects that were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the two intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M)=26.4%, SD=10%) to accuracy rates after all educational interventions were complete (M=26.8%, SD=12%); t(24)=-0.127, P=.90. Didactic teaching and small group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.
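The pre/post comparison reduces to a paired t test; a minimal scipy sketch with hypothetical accuracy rates:

```python
from scipy import stats

pre  = [0.25, 0.31, 0.22, 0.28, 0.30, 0.19, 0.27]  # hypothetical baseline rates
post = [0.26, 0.30, 0.24, 0.27, 0.31, 0.20, 0.28]  # hypothetical post-intervention
t, p = stats.ttest_rel(pre, post)                  # paired t test
print(f"t = {t:.3f}, p = {p:.3f}")
```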
Analysis of energy-based algorithms for RNA secondary structure prediction.
Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H
2012-02-01
RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.
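A sketch of the accuracy measure and the bootstrap percentile interval used in such benchmarks; the per-RNA F-measures below are simulated stand-ins, not benchmark results:

```python
import numpy as np

def f_measure(tp, fp, fn):
    """Harmonic mean of sensitivity and positive predictive value."""
    sens = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return 2 * sens * ppv / (sens + ppv)

rng = np.random.default_rng(0)
per_rna_f = rng.beta(5, 2, size=2000)          # stand-in per-RNA F-measures

boot = [np.mean(rng.choice(per_rna_f, per_rna_f.size, replace=True))
        for _ in range(10000)]                 # bootstrap means
lo, hi = np.percentile(boot, [2.5, 97.5])      # percentile interval
print(round(per_rna_f.mean(), 3), round(lo, 3), round(hi, 3))
```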
Integrated transient thermal-structural finite element analysis
NASA Technical Reports Server (NTRS)
Thornton, E. A.; Dechaumphai, P.; Wieting, A. R.; Tamma, K. K.
1981-01-01
An integrated thermal-structural finite element approach for efficient coupling of transient thermal and structural analysis is presented. Integrated thermal-structural rod and one-dimensional axisymmetric elements considering conduction and convection are developed and used in transient thermal-structural applications. The improved accuracy of the integrated approach is illustrated by comparisons with exact transient heat-conduction elasticity solutions and with conventional thermal and structural finite element analyses.
Context matters: the structure of task goals affects accuracy in multiple-target visual search.
Clark, Kait; Cain, Matthew S; Adcock, R Alison; Mitroff, Stephen R
2014-05-01
Career visual searchers such as radiologists and airport security screeners strive to conduct accurate visual searches, but despite extensive training, errors still occur. A key difference between searches in radiology and airport security is the structure of the search task: Radiologists typically scan a certain number of medical images (fixed objective), and airport security screeners typically search X-rays for a specified time period (fixed duration). Might these structural differences affect accuracy? We compared performance on a search task administered either under constraints that approximated radiology or airport security. Some displays contained more than one target because the presence of multiple targets is an established source of errors for career searchers, and accuracy for additional targets tends to be especially sensitive to contextual conditions. Results indicate that participants searching within the fixed objective framework produced more multiple-target search errors; thus, adopting a fixed duration framework could improve accuracy for career searchers. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-11
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.
Protein structure refinement using a quantum mechanics-based chemical shielding predictor.
Bratholm, Lars A; Jensen, Jan H
2017-03-01
The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and CB chemical shifts (ProCS15; PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force field geometry optimized X-ray structures as starting points: simulated annealing of the starting structure, and constant temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1-0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid specific offset is included, the ProCS15 predicted chemical shifts have RMSD values relative to experiment that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD relative to the initial structures by an average of 2.0 Å, with >2.0 Å differences for six proteins. In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural change may be due to force field deficiencies. The overall accuracy of the empirical methods is slightly improved by annealing the CHARMM structure with ProCS15, which may suggest that the minor structural changes introduced by ProCS15-based annealing improve the accuracy of the protein structures. Having established that QM-based chemical shift prediction can deliver the same accuracy as empirical shift predictors, we hope this can help increase the accuracy of related approaches, such as QM/MM or linear-scaling methods, and aid the interpretation of protein structural dynamics from QM-derived chemical shifts.
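A toy Metropolis simulated-annealing loop of the kind described, where the "energy" is a stand-in misfit between predicted and observed chemical shifts; the cooling schedule, step size, and data are all illustrative rather than the ProCS15 probabilistic model:

```python
import numpy as np

def anneal(x0, energy, propose, steps=5000, t0=1.0, t1=0.01, seed=0):
    """Metropolis simulated annealing with a geometric cooling schedule."""
    rng = np.random.default_rng(seed)
    x, e = x0, energy(x0)
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)        # current temperature
        xn = propose(x, rng)
        en = energy(xn)
        if en < e or rng.random() < np.exp((e - en) / t):
            x, e = xn, en                        # accept the move
    return x, e

obs = np.array([54.2, 118.9, 176.1])             # stand-in observed shifts (ppm)
energy = lambda x: float(np.sum((x - obs) ** 2)) # toy chi-square misfit
propose = lambda x, rng: x + rng.normal(0, 0.1, x.shape)
print(anneal(obs + 5.0, energy, propose))
```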
The origin of consistent protein structure refinement from structural averaging.
Park, Hahnbeom; DiMaio, Frank; Baker, David
2015-06-02
Recent studies have shown that explicit solvent molecular dynamics (MD) simulation followed by structural averaging can consistently improve protein structure models. We find that improvement upon averaging is not limited to explicit water MD simulation, as consistent improvements are also observed for more efficient implicit solvent MD or Monte Carlo minimization simulations. To determine the origin of these improvements, we examine the changes in model accuracy brought about by averaging at the individual residue level. We find that the improvement in model quality from averaging results from the superposition of two effects: a dampening of deviations from the correct structure in the least well modeled regions, and a reinforcement of consistent movements towards the correct structure in better modeled regions. These observations are consistent with an energy landscape model in which the magnitude of the energy gradient toward the native structure decreases with increasing distance from the native state. Copyright © 2015 Elsevier Ltd. All rights reserved.
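Structural averaging itself is a one-liner once the snapshots are superposed on a common frame; a minimal sketch with a toy trajectory:

```python
import numpy as np

def average_model(ensemble):
    """Average pre-superposed snapshots (n_models, n_atoms, 3) into one model."""
    return np.asarray(ensemble, float).mean(axis=0)

snapshots = np.random.default_rng(0).normal(size=(100, 50, 3))  # toy trajectory
print(average_model(snapshots).shape)                           # (50, 3)
```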
Luo, Xiongbiao; Jayarathne, Uditha L; McLeod, A Jonathan; Mori, Kensaku
2014-01-01
Endoscopic navigation generally integrates different modalities of sensory information in order to continuously locate an endoscope relative to suspicious tissues in the body during interventions. Current electromagnetic tracking techniques for endoscopic navigation have limited accuracy due to tissue deformation and magnetic field distortion. To avoid these limitations and improve the endoscopic localization accuracy, this paper proposes a new endoscopic navigation framework that uses an optical mouse sensor to measure the endoscope movements along its viewing direction. We then enhance the differential evolution algorithm by modifying its mutation operation. Based on the enhanced differential evolution method, these movement measurements and image structural patches in endoscopic videos are fused to accurately determine the endoscope position. An evaluation on a dynamic phantom demonstrated that our method provides a more accurate navigation framework. Compared to state-of-the-art methods, it improved the navigation accuracy from 2.4 to 1.6 mm and reduced the processing time from 2.8 to 0.9 seconds.
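For reference, a compact generation of classic DE/rand/1/bin; the paper's contribution is a modified mutation operator, which is not reproduced here, and the fitness function below is a toy stand-in:

```python
import numpy as np

def de_step(pop, fitness, rng, f=0.8, cr=0.9):
    """One generation of classic DE/rand/1/bin."""
    n, d = pop.shape
    new = pop.copy()
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        a, b, c = pop[rng.choice(idx, 3, replace=False)]
        mutant = a + f * (b - c)                 # rand/1 mutation
        cross = rng.random(d) < cr               # binomial crossover mask
        cross[rng.integers(d)] = True            # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) < fitness(pop[i]):     # greedy selection
            new[i] = trial
    return new

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))         # toy objective
pop = rng.uniform(-5, 5, size=(20, 4))
for _ in range(100):
    pop = de_step(pop, sphere, rng)
print(min(sphere(x) for x in pop))
```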
Damage detection of structures with detrended fluctuation and detrended cross-correlation analyses
NASA Astrophysics Data System (ADS)
Lin, Tzu-Kang; Fajri, Haikal
2017-03-01
Recently, fractal analysis has shown its potential for damage detection and assessment in fields such as biomedical and mechanical engineering. Owing to its practicality in interpreting irregular, complex, and disordered phenomena, a structural health monitoring (SHM) system based on detrended fluctuation analysis (DFA) and detrended cross-correlation analysis (DCCA) is proposed. First, damage conditions can be swiftly detected by evaluating ambient vibration signals measured from a structure through DFA. Damage locations can then be determined by analyzing the cross-correlation of signals from different floors by applying DCCA. A damage index is also proposed based on multi-scale DCCA curves to improve the damage location accuracy. To verify the performance of the proposed SHM system, a four-story numerical model was used to simulate various damage conditions with different noise levels. Furthermore, an experimental verification was conducted on a seven-story benchmark structure to assess the potential damage. The results revealed that the DFA method could detect the damage conditions satisfactorily, and damage locations could be identified through the DCCA method with an accuracy of 75%. Moreover, damage locations could be correctly assessed by the damage index method with an improved accuracy of 87.5%. The proposed SHM system shows promise for practical implementation.
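DFA has a short, standard recipe: integrate the centred signal, detrend it in windows, and track the fluctuation against window size. A minimal sketch on a simulated record (the signal and scales are placeholders):

```python
import numpy as np

def dfa(x, scales):
    """Fluctuation function F(n) of detrended fluctuation analysis."""
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        t = np.arange(n)
        f2 = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
              for seg in y[:m * n].reshape(m, n)]  # linear detrend per window
        F.append(np.sqrt(np.mean(f2)))
    return np.array(F)

sig = np.random.default_rng(0).normal(size=4096)   # stand-in vibration record
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(sig, scales)), 1)[0]
print(alpha)                                       # ~0.5 for white noise
```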
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed to improve the calibration accuracy. The approach is based on a number of fixed concentric circles manufactured on a calibration target. The concentric circles are employed to determine the true projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals after the application of the RAC method. Therefore, the hybrid of the pinhole model and the MLPNN is used to represent the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach can achieve a highly accurate model of the structured light vision sensor.
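One way to realize the residual-identification step is a small MLP regressor mapping image coordinates to the leftover calibration error; the sinusoidal "residual field" below is purely synthetic, and the network size is an arbitrary choice:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
uv = rng.uniform(0, 1024, size=(400, 2))               # pinhole-model image points
residual = 0.5 * np.sin(uv / 200.0) + rng.normal(0, 0.05, uv.shape)  # toy residuals

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
mlp.fit(uv, residual)                                  # learn the residual field
corrected = uv + mlp.predict(uv)                       # pinhole + learned correction
print(np.abs(residual - mlp.predict(uv)).mean())       # remaining error
```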
Two-step FEM-based Liver-CT registration: improving internal and external accuracy
NASA Astrophysics Data System (ADS)
Oyarzun Laura, Cristina; Drechsler, Klaus; Wesarg, Stefan
2014-03-01
To know the exact location of the internal structures of the organs, especially the vasculature, is of great importance for clinicians. This information allows them to know which structures/vessels will be affected by a certain therapy and therefore to better treat the patients. However, the use of internal structures for registration is often disregarded, especially in physically based registration methods. In this paper we propose an algorithm that uses finite element methods to carry out a registration of liver volumes that is accurate not only at the boundaries of the organ but also in the interior. To this end, a graph matching algorithm is used to find correspondences between the vessel trees of the two livers to be registered. In addition, an adaptive volumetric mesh is generated that contains nodes at the locations where correspondences were found. The displacements derived from those correspondences are the input for the initial deformation of the model. The first deformation brings the internal structures to their final deformed positions and the surfaces close to them. Finally, thin plate splines are used to refine the solution at the boundaries of the organ, achieving an improvement in accuracy of 71%. The algorithm has been evaluated on clinical CT images of the abdomen.
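The final thin-plate-spline step might look like the following scipy sketch (scipy >= 1.7), with random points standing in for the matched vessel landmarks and liver surface vertices:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator   # scipy >= 1.7

rng = np.random.default_rng(0)
ctrl = rng.uniform(0, 100, size=(30, 3))        # matched vessel landmarks (mm)
disp = rng.normal(0, 2, size=(30, 3))           # displacements at the landmarks

tps = RBFInterpolator(ctrl, disp, kernel='thin_plate_spline')
surface = rng.uniform(0, 100, size=(500, 3))    # liver boundary vertices
refined = surface + tps(surface)                # propagate the refinement
print(refined.shape)
```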
NMRDSP: an accurate prediction of protein shape strings from NMR chemical shifts and sequence data.
Mao, Wusong; Cong, Peisheng; Wang, Zhiheng; Lu, Longjian; Zhu, Zhongliang; Li, Tonghua
2013-01-01
Shape string is structural sequence and is an extremely important structure representation of protein backbone conformations. Nuclear magnetic resonance chemical shifts give a strong correlation with the local protein structure, and are exploited to predict protein structures in conjunction with computational approaches. Here we demonstrate a novel approach, NMRDSP, which can accurately predict the protein shape string based on nuclear magnetic resonance chemical shifts and structural profiles obtained from sequence data. The NMRDSP uses six chemical shifts (HA, H, N, CA, CB and C) and eight elements of structure profiles as features, a non-redundant set (1,003 entries) as the training set, and a conditional random field as a classification algorithm. For an independent testing set (203 entries), we achieved an accuracy of 75.8% for S8 (the eight states accuracy) and 87.8% for S3 (the three states accuracy). This is higher than only using chemical shifts or sequence data, and confirms that the chemical shift and the structure profile are significant features for shape string prediction and their combination prominently improves the accuracy of the predictor. We have constructed the NMRDSP web server and believe it could be employed to provide a solid platform to predict other protein structures and functions. The NMRDSP web server is freely available at http://cal.tongji.edu.cn/NMRDSP/index.jsp.
The design and improvement of radial tire molding machine
NASA Astrophysics Data System (ADS)
Wang, Wenhao; Zhang, Tao
2018-04-01
This paper presents the structural configuration of a high-accuracy semi-steel radial tire molding machine. Combining the high-precision requirements of the tire with optimization of the original structure and parameters, a technologically improved and innovative design of the opening-and-closing rotary shaping drum mechanism is developed. The design moves beyond the conventional push-pull movable shaping drum structure; compared with a drum of the same specification, the shaping drum can contract to a smaller size, which is conducive to forming the tire and reducing tire deformation.
Kurgan, Lukasz; Cios, Krzysztof; Chen, Ke
2008-05-01
Protein structure prediction methods provide accurate results when a homologous protein is predicted, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share twilight-zone pairwise identity can form similar folds and thus determining structural similarity without the sequence similarity would be desirable for the structure prediction. The folding type of a protein or its domain is defined as the structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for the datasets in which sequence identity of any pair of sequences belongs to the twilight-zone. We propose SCPRED method that improves prediction accuracy for sequences that share twilight-zone pairwise similarity with sequences used for the prediction. SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on extensive design that considers over 2300 index-, composition- and physicochemical properties-based features along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods that are based on support vector machine, logistic regression, and ensemble of classifiers predictors. The SCPRED can accurately find similar structures for sequences that share low identity with sequence used for the prediction. The high predictive accuracy achieved by SCPRED is attributed to the design of the features, which are capable of separating the structural classes in spite of their low dimensionality. We also demonstrate that the SCPRED's predictions can be successfully used as a post-processing filter to improve performance of modern fold classification methods.
Localization of multiple defects using the compact phased array (CPA) method
NASA Astrophysics Data System (ADS)
Senyurek, Volkan Y.; Baghalian, Amin; Tashakori, Shervin; McDaniel, Dwayne; Tansel, Ibrahim N.
2018-01-01
Array systems of transducers have found numerous applications in detection and localization of defects in structural health monitoring (SHM) of plate-like structures. Different types of array configurations and analysis algorithms have been used to improve the process of localization of defects. For accurate and reliable monitoring of large structures by array systems, a high number of actuator and sensor elements are often required. In this study, a compact phased array system consisting of only three piezoelectric elements is used in conjunction with an updated total focusing method (TFM) for localization of single and multiple defects in an aluminum plate. The accuracy of the localization process was greatly improved by including wave propagation information in TFM. Results indicated that the proposed CPA approach can locate single and multiple defects with high accuracy while decreasing the processing costs and the number of required transducers. This method can be utilized in critical applications such as aerospace structures where the use of a large number of transducers is not desirable.
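The updated TFM in the abstract augments a delay-and-sum imaging rule with wave-propagation information; the basic delay-and-sum kernel it builds on can be sketched as follows, with invented geometry, wave speed, and A-scan data.

```python
# Total focusing method (TFM) sketch: for every image pixel, each transmit-
# receive signal is sampled at the time of flight transmitter -> pixel ->
# receiver, and all contributions are summed. Values below are placeholders.
import numpy as np

c = 5400.0                       # assumed group velocity in aluminum, m/s
fs = 10e6                        # sampling rate, Hz
elements = np.array([[0.00, 0.0], [0.05, 0.0], [0.10, 0.0]])  # 3 PZT positions (m)
n_el = len(elements)
signals = np.random.randn(n_el, n_el, 4000)   # stand-in A-scans [tx, rx, sample]

xs = np.linspace(0.0, 0.3, 150)
ys = np.linspace(0.0, 0.3, 150)
image = np.zeros((len(ys), len(xs)))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        d = np.linalg.norm(elements - np.array([x, y]), axis=1)  # element-pixel distances
        for tx in range(n_el):
            for rx in range(n_el):
                k = int((d[tx] + d[rx]) / c * fs)   # time-of-flight sample index
                if k < signals.shape[2]:
                    image[iy, ix] += signals[tx, rx, k]
print("peak pixel:", np.unravel_index(np.abs(image).argmax(), image.shape))
```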
Kim, Mooeung; Chung, Hoeil
2013-03-07
The use of selectivity-enhanced Raman spectra of lube base oil (LBO) samples, achieved by spectral collection under frozen conditions at low temperatures, was effective for improving the accuracy of determining the kinematic viscosity at 40 °C (KV@40). Collecting Raman spectra from samples cooled to around -160 °C provided the most accurate measurement of KV@40. The components of the LBO samples were mainly long-chain hydrocarbons with molecular structures that deformed when frozen, and the different structural deformabilities of the components enhanced the spectral selectivity among the samples. To study the structural variation of the components as the sample temperature changes from cryogenic to ambient conditions, n-heptadecane and pristane (2,6,10,14-tetramethylpentadecane) were selected as representative components of the LBO samples, and their temperature-induced spectral features as well as the corresponding spectral loadings were investigated. A two-dimensional (2D) correlation analysis was also employed to explain the origin of the improved accuracy. The asynchronous 2D correlation pattern was simplest at the optimal temperature, indicating the occurrence of distinct and selective spectral variations, which enabled the variation of KV@40 among the LBO samples to be more accurately assessed.
On Motion Planning and Control of Multi-Link Lightweight Robotic Manipulators
NASA Technical Reports Server (NTRS)
Cetinkunt, Sabri
1987-01-01
A general gross and fine motion planning and control strategy is needed for lightweight robotic manipulator applications such as painting, welding, material handling, surface finishing, and spacecraft servicing. The control problem for lightweight manipulators is to perform fast, accurate, and robust motions despite payload variations, structural flexibility, and other environmental disturbances. The performance of computed torque and decoupled joint control methods based on rigid manipulator models is determined and simulated for the counterpart flexible manipulators. A counterpart flexible manipulator is defined as a manipulator which has structural flexibility in addition to the same inertial, geometric, and actuation properties as a given rigid manipulator. An adaptive model following control (AMFC) algorithm is developed to improve performance in speed, accuracy, and robustness. It is found that the AMFC improves the speed performance by a factor of two over conventional non-adaptive control methods for given accuracy requirements, while proving to be more robust with respect to payload variations. Yet there are clear limitations on the performance of AMFC alone as well, imposed by the arm flexibility. In the search to further improve speed performance while providing a desired accuracy and robustness, a combined control strategy is developed. Furthermore, the problem of switching from one control structure to another during the motion and implementation aspects of combined control are discussed.
Cognitive accuracy and intelligent executive function in the brain and in business.
Bailey, Charles E
2007-11-01
This article reviews research on cognition, language, organizational culture, brain, behavior, and evolution to posit the value of operating with a stable reference point based on cognitive accuracy and a rational bias. Drawing on rational-emotive behavioral science, social neuroscience, and cognitive organizational science on the one hand and a general model of brain and frontal lobe executive function on the other, I suggest implications for organizational success. Cognitive thought processes depend on specific brain structures functioning as effectively as possible under conditions of cognitive accuracy. However, typical cognitive processes in hierarchical business structures promote the adoption and application of subjective organizational beliefs and, thus, cognitive inaccuracies. Applying informed frontal lobe executive functioning to cognition, emotion, and organizational behavior helps minimize the negative effects of indiscriminate application of personal and cultural belief systems to business. Doing so enhances cognitive accuracy and improves communication and cooperation. Organizations operating with cognitive accuracy will tend to respond more nimbly to market pressures and achieve an overall higher level of performance and employee satisfaction.
Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D
2016-07-15
The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water and in fact supersede most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc.
Predicting the accuracy of ligand overlay methods with Random Forest models.
Nandigam, Ravi K; Evans, David A; Erickson, Jon A; Kim, Sangtae; Sutherland, Jeffrey J
2008-12-01
The accuracy of binding mode prediction using standard molecular overlay methods (ROCS, FlexS, Phase, and FieldCompare) is studied. Previous work has shown that simple decision tree modeling can be used to improve accuracy by selection of the best overlay template. This concept is extended to the use of Random Forest (RF) modeling for template and algorithm selection. An extensive data set of 815 ligand-bound X-ray structures representing 5 gene families was used to generate ca. 70,000 overlays using the four programs. RF models, trained using standard measures of ligand and protein similarity and Lipinski-related descriptors, are used for automatically selecting the reference ligand and overlay method maximizing the probability of reproducing the overlay deduced from X-ray structures (i.e., using RMSD ≤ 2 Å as the criterion for success). RF model scores are highly predictive of overlay accuracy, and their use in template and method selection produces correct overlays in 57% of cases for 349 overlay ligands not used for training the RF models. The inclusion in the models of protein sequence similarity enables the use of templates bound to related protein structures, yielding useful results even for proteins having no available X-ray structures.
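A hedged sketch of the selection step: train a Random Forest on descriptors of (template, method) candidates to predict whether an overlay reproduces the crystallographic pose, then pick the candidate with the highest predicted success probability. The descriptors and labels below are invented stand-ins, not the paper's.

```python
# RF-based template/method selection: the classifier predicts overlay success
# (RMSD <= 2 Å) from similarity descriptors; the best-scoring candidate wins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 5))     # e.g. ligand similarity, sequence identity, MW, ...
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # stand-in success labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

candidates = rng.random((8, 5))   # 8 candidate template/method combinations
best = rf.predict_proba(candidates)[:, 1].argmax()
print("choose candidate", best)
```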
Spatial image modulation to improve performance of computed tomography imaging spectrometer
NASA Technical Reports Server (NTRS)
Bearman, Gregory H. (Inventor); Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor)
2010-01-01
Computed tomography imaging spectrometers ("CTIS"s) having patterns for imposing spatial structure are provided. The pattern may be imposed either directly on the object scene being imaged or at the field stop aperture. The use of the pattern improves the accuracy of the captured spatial and spectral information.
Lee, Du-Hyeong
Implant guide systems can be classified by their supporting structure as tooth-, mucosa-, or bone-supported. Mucosa-supported guides for fully edentulous arches show lower accuracy in implant placement because of errors in image registration and guide positioning. This article introduces the application of a novel microscrew system for computer-aided implant surgery. This technique can markedly improve the accuracy of computer-guided implant surgery in fully edentulous arches by eliminating errors from image fusion and guide positioning.
Improving bridge load rating accuracy.
DOT National Transportation Integrated Search
2013-06-01
Nearly one-quarter of Alabama's bridges are deemed structurally deficient or functionally obsolete. An additional seven percent of Alabama's bridges were posted bridges in 2010 (Federal Highway Administration, 2011). Accurate bridge load rati...
2017-01-01
The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and Cβ chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum-chemistry-based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force field geometry optimized X-ray structures as starting points: simulated annealing of the starting structure, and constant-temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1–0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino-acid-specific offset is included, the ProCS15-predicted chemical shifts have RMSD values relative to experiments that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD relative to the initial structures by an average of 2.0 Å, with >2.0 Å difference for six proteins. In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural change may be due to force field deficiencies. The overall accuracy of the empirical methods is slightly improved by annealing the CHARMM structure with ProCS15, which may suggest that the minor structural changes introduced by ProCS15-based annealing improve the accuracy of the protein structures. Having established that QM-based chemical shift prediction can deliver the same accuracy as empirical shift predictors, we hope this can help increase the accuracy of related approaches such as QM/MM or linear scaling approaches, or the interpretation of protein structural dynamics from QM-derived chemical shifts. PMID:28451325
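A toy version of the refinement loop described above, assuming a Metropolis acceptance rule with a simulated-annealing schedule: structure parameters are perturbed and moves are scored by disagreement between predicted and experimental shifts. The "predictor" below is a stand-in function, not ProCS15, and the probabilistic model is reduced to a least-squares pseudo-energy.

```python
# Chemical-shift-driven Metropolis annealing sketch: accept a perturbed state
# with probability exp(-dE/T), where E penalizes shift disagreement.
import numpy as np

rng = np.random.default_rng(1)
shifts_exp = rng.normal(120.0, 4.0, 50)       # fake experimental 15N shifts

def predict_shifts(x):
    # placeholder for a ProCS15-like prediction from structural parameters x
    return 120.0 + 3.0 * np.sin(x)

def energy(x):
    return np.sum((predict_shifts(x) - shifts_exp) ** 2)

x = rng.normal(0.0, 1.0, 50)
T = 50.0
for step in range(5000):
    trial = x + rng.normal(0.0, 0.05, x.shape)
    dE = energy(trial) - energy(x)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        x = trial
    T *= 0.999                                 # annealing schedule
print("final pseudo-energy:", energy(x))
```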
NASA Astrophysics Data System (ADS)
Yuan, Shenfang; Bao, Qiao; Qiu, Lei; Zhong, Yongteng
2015-10-01
The growing use of composite materials on aircraft structures has attracted much attention for impact monitoring as a kind of structural health monitoring (SHM) method. Multiple signal classification (MUSIC)-based monitoring technology is a promising method because of its directional scanning ability and easy arrangement of the sensor array. However, for applications on real complex structures, some challenges still exist. The impact-induced elastic waves usually exhibit wide-band behavior, giving rise to difficulty in obtaining the phase velocity directly. In addition, composite structures usually have obvious anisotropy, and the complex structural style of real aircraft further intensifies it, which greatly reduces the localization precision of the MUSIC-based method. To improve the MUSIC-based impact monitoring method, this paper first analyzes and demonstrates the influence of the measurement precision of the phase velocity on the localization results of the MUSIC impact localization method. In order to improve the accuracy of the phase velocity measurement, a single frequency component extraction method is presented. Additionally, a single frequency component-based re-estimated MUSIC (SFCBR-MUSIC) algorithm is proposed to reduce the localization error caused by the anisotropy of the complex composite structure. The proposed method is verified on a real composite aircraft wing box, which has T-stiffeners and screw holes. Three typical categories of 41 impacts are monitored. Experimental results show that the SFCBR-MUSIC algorithm can localize impacts on complex composite structures with obviously improved accuracy.
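For orientation, a narrowband MUSIC direction scan at one extracted frequency component can be sketched as below for a linear array in an isotropic medium; the constant velocity assumed here is exactly what the re-estimation step above replaces for anisotropic composites. Array geometry, frequency, and velocity are invented.

```python
# MUSIC sketch: eigendecompose the sample covariance, take the noise subspace,
# and scan steering vectors; the pseudospectrum peaks at the source direction.
import numpy as np

c = 1500.0                     # assumed phase velocity at the extracted frequency, m/s
f = 50e3
sensors = np.arange(7) * 0.01  # 7-element linear array, 1 cm pitch (m)

def steering(theta):
    tau = sensors * np.sin(theta) / c
    return np.exp(-2j * np.pi * f * tau)

rng = np.random.default_rng(2)
# Simulate one impact source at 30 degrees plus sensor noise.
snapshots = np.outer(steering(np.deg2rad(30)), rng.standard_normal(200)) \
    + 0.1 * (rng.standard_normal((7, 200)) + 1j * rng.standard_normal((7, 200)))
R = snapshots @ snapshots.conj().T / 200
w, V = np.linalg.eigh(R)
En = V[:, :-1]                 # noise subspace (one source assumed)

angles = np.deg2rad(np.linspace(-90, 90, 361))
p = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in angles]
print("estimated direction:", np.rad2deg(angles[int(np.argmax(p))]), "deg")
```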
Johnston, Jessica C.; Iuliucci, Robbie J.; Facelli, Julio C.; Fitzgerald, George; Mueller, Karl T.
2009-01-01
In order to predict accurately the chemical shift of NMR-active nuclei in solid phase systems, magnetic shielding calculations must be capable of considering the complete lattice structure. Here we assess the accuracy of the density functional theory gauge-including projector augmented wave method, which uses pseudopotentials to approximate the nodal structure of the core electrons, to determine the magnetic properties of crystals by predicting the full chemical-shift tensors of all 13C nuclides in 14 organic single crystals from which experimental tensors have previously been reported. Plane-wave methods use periodic boundary conditions to incorporate the lattice structure, providing a substantial improvement for modeling the chemical shifts in hydrogen-bonded systems. Principal tensor components can now be predicted to an accuracy that approaches the typical experimental uncertainty. Moreover, methods that include the full solid-phase structure enable geometry optimizations to be performed on the input structures prior to calculation of the shielding. Improvement after optimization is noted here even when neutron diffraction data are used for determining the initial structures. After geometry optimization, the isotropic shift can be predicted to within 1 ppm. PMID:19831448
A temperature compensation methodology for piezoelectric based sensor devices
NASA Astrophysics Data System (ADS)
Wang, Dong F.; Lou, Xueqiao; Bao, Aijian; Yang, Xu; Zhao, Ji
2017-08-01
A temperature compensation methodology that matches a negative temperature coefficient thermistor to the temperature characteristics of a piezoelectric material is proposed to improve the measurement accuracy of piezoelectric-sensing-based devices. A disk-shaped piezoelectric structure is used to characterize the piezoelectric element and to verify the effectiveness of the proposed compensation method. The measured output voltage shows a nearly linear relationship with respect to the applied pressure when the proposed temperature compensation method is introduced over a temperature range of 25-65 °C. As a result, the maximum measurement accuracy is improved by 40%, and the higher the temperature, the more effective the method. The effective temperature range of the proposed method is theoretically analyzed by introducing the constant coefficient of the thermistor (B), the resistance at the initial temperature (R0), and the paralleled resistance (Rx). The proposed methodology can not only eliminate the influence of the temperature-dependent characteristics of the piezoelectric material on the sensing accuracy but also decrease the power consumption of piezoelectric sensing devices through the simplified sensing structure.
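The compensation idea can be illustrated numerically with the standard NTC thermistor law. B, R0, and Rx follow the abstract's notation; the numeric values and the parallel-load arrangement are assumptions for illustration, not the paper's design.

```python
# NTC thermistor in parallel with a fixed resistor Rx: the combined load seen
# by the piezoelectric disk varies with temperature, counteracting the
# temperature dependence of the piezoelectric sensitivity.
import numpy as np

B, R0, T0 = 3950.0, 10e3, 298.15   # thermistor constant, resistance at 25 °C
Rx = 8.2e3                         # paralleled resistance (design parameter)

def r_thermistor(T_kelvin):
    # standard NTC law: R(T) = R0 * exp(B * (1/T - 1/T0))
    return R0 * np.exp(B * (1.0 / T_kelvin - 1.0 / T0))

def r_parallel(T_kelvin):
    rt = r_thermistor(T_kelvin)
    return rt * Rx / (rt + Rx)

for T_c in (25, 45, 65):
    T = T_c + 273.15
    print(f"{T_c} °C: thermistor {r_thermistor(T):8.0f} ohm, "
          f"parallel load {r_parallel(T):8.0f} ohm")
```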
Reduced Fragment Diversity for Alpha and Alpha-Beta Protein Structure Prediction using Rosetta.
Abbass, Jad; Nebel, Jean-Christophe
2017-01-01
Protein structure prediction is considered a main challenge in computational biology. The biennial international competition, Critical Assessment of protein Structure Prediction (CASP), has shown in its eleventh experiment that free modelling target predictions are still beyond reliable accuracy; therefore, much effort should be made to improve ab initio methods. Arguably, Rosetta is considered the most competitive method when it comes to targets with no homologues. Relying on fragments of length 9 and 3 from known structures, Rosetta creates putative structures by assembling candidate fragments. Generally, the structure with the lowest energy score, also known as the first model, is chosen to be the "predicted one". A thorough study has been conducted on the role and diversity of 3-mers involved in Rosetta's model "refinement" phase. Usage of the standard number of 3-mers - i.e. 200 - has been shown to degrade alpha and alpha-beta protein conformations initially achieved by assembling 9-mers. Therefore, a new prediction pipeline is proposed for Rosetta in which the "refinement" phase is customised according to a target's structural class prediction. Over 8% improvement in terms of first model structure accuracy is reported for the alpha and alpha-beta classes when decreasing the number of 3-mers. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.
Ensemble-based prediction of RNA secondary structures.
Aghaeepour, Nima; Hoos, Holger H
2013-04-24
Accurate structure prediction methods play an important role in the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.
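A minimal sketch of the ensemble idea, not AveRNA's actual combination rule: each component predictor contributes a set of base pairs, and a pair is kept if its weighted vote exceeds a threshold, which directly trades false positives against false negatives. Predictor outputs and weights are made up.

```python
# Weighted-vote ensemble over base-pair predictions from several methods.
preds = [
    {(1, 20), (2, 19), (3, 18)},   # method A
    {(1, 20), (2, 19), (5, 15)},   # method B
    {(1, 20), (3, 18), (5, 15)},   # method C
]
weights = [0.5, 0.3, 0.2]

def ensemble(preds, weights, t=0.4):
    votes = {}
    for bp_set, w in zip(preds, weights):
        for bp in bp_set:
            votes[bp] = votes.get(bp, 0.0) + w
    return {bp for bp, v in votes.items() if v >= t}

# Lowering t admits more pairs (fewer false negatives, more false positives).
print(sorted(ensemble(preds, weights, t=0.4)))
```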
NASA Astrophysics Data System (ADS)
Lee, Joohwi; Kim, Sun Hyung; Styner, Martin
2016-03-01
The delineation of rodent brain structures is challenging due to the many low-contrast cortical and subcortical structures that closely interface with each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple structures at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model to a subject image, followed by a label fusion process. Our results show that the proposed method achieves greater accuracy than comparable segmentation methods, including a widely used ANTs registration tool.
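For context, the simplest form of the label fusion step used after multi-atlas registration is a per-voxel majority vote, sketched below with tiny stand-in label maps; real pipelines typically weight votes by local image similarity.

```python
# Majority-vote label fusion: each registered atlas proposes a label per voxel
# and the most frequent label wins.
import numpy as np

atlas_labels = np.stack([   # three atlas label maps warped into subject space
    np.array([[0, 0, 1, 1], [0, 1, 1, 2], [0, 1, 2, 2], [0, 0, 2, 2]]),
    np.array([[0, 0, 1, 1], [0, 1, 2, 2], [0, 1, 2, 2], [0, 0, 2, 2]]),
    np.array([[0, 1, 1, 1], [0, 1, 1, 2], [0, 1, 2, 2], [0, 0, 0, 2]]),
])

def majority_vote(stack):
    n_labels = stack.max() + 1
    counts = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return counts.argmax(axis=0)

print(majority_vote(atlas_labels))
```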
NASA Astrophysics Data System (ADS)
Lange, Thomas; Wörz, Stefan; Rohr, Karl; Schlag, Peter M.
2009-02-01
The qualitative and quantitative comparison of pre- and postoperative image data is an important possibility to validate surgical procedures, in particular if computer-assisted planning and/or navigation is performed. Due to deformations after surgery, partially caused by the removal of tissue, a non-rigid registration scheme is a prerequisite for a precise comparison. Interactive landmark-based schemes are a suitable approach if high accuracy and reliability are difficult to achieve by automatic registration approaches. Incorporating a priori knowledge about the anatomical structures to be registered may help to reduce interaction time and improve accuracy. For pre- and postoperative CT data of oncological liver resections, the intrahepatic vessels are suitable anatomical structures. In addition to using branching landmarks for registration, we here introduce quasi-landmarks at vessel segments with high localization precision perpendicular to the vessels and low precision along them. A comparison of interpolating thin-plate splines (TPS), interpolating Gaussian elastic body splines (GEBS), and approximating GEBS on landmarks at vessel branchings, as well as approximating GEBS on the introduced vessel segment landmarks, is performed. It turns out that the segment landmarks provide registration accuracies as good as branching landmarks and can improve accuracy when combined with branching landmarks. For a low number of landmarks, segment landmarks are even superior.
Breen, Andrew J; Moody, Michael P; Ceguerra, Anna V; Gault, Baptiste; Araullo-Peters, Vicente J; Ringer, Simon P
2015-12-01
The following manuscript presents a novel approach for creating lattice based models of Sb-doped Si directly from atom probe reconstructions for the purposes of improving information on dopant positioning and directly informing quantum mechanics based materials modeling approaches. Sophisticated crystallographic analysis techniques are used to detect latent crystal structure within the atom probe reconstructions with unprecedented accuracy. A distortion correction algorithm is then developed to precisely calibrate the detected crystal structure to the theoretically known diamond cubic lattice. The reconstructed atoms are then positioned on their most likely lattice positions. Simulations are then used to determine the accuracy of such an approach and show that improvements to short-range order measurements are possible for noise levels and detector efficiencies comparable with experimentally collected atom probe data. Copyright © 2015 Elsevier B.V. All rights reserved.
Design of measuring system for wire diameter based on sub-pixel edge detection algorithm
NASA Astrophysics Data System (ADS)
Chen, Yudong; Zhou, Wang
2016-09-01
The light projection method is often used in wire-diameter measuring systems because of its relatively simple structure and low cost, but its measuring accuracy is limited by the pixel size of the CCD. Using a CCD with a smaller pixel size can improve the measuring accuracy but increases the cost and manufacturing difficulty. In this paper, after a comparative analysis of a variety of sub-pixel edge detection algorithms, a polynomial fitting method is applied for data processing in the wire-diameter measuring system to improve the measuring accuracy and enhance noise immunity. In the system design, a light projection method with an orthogonal structure is used for the optical detection part, which effectively reduces the error caused by line jitter during measurement. For the electrical part, an ARM Cortex-M4 microprocessor is used as the core of the circuit module; it not only drives the dual-channel linear CCD but also completes the sampling, processing, and storage of the CCD video signal. In addition, the ARM microprocessor can run the whole wire-diameter measuring system at high speed without any additional chips. The experimental results show that the sub-pixel edge detection algorithm based on polynomial fitting can compensate for the limited pixel size and significantly improve the precision of the wire-diameter measuring system without increasing the hardware complexity of the entire system.
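A minimal sketch of sub-pixel edge location by polynomial fitting, the approach adopted above: fit a parabola to the gradient magnitude around its integer-pixel peak and take the parabola's vertex as the edge position. The edge profile below is synthetic.

```python
# Sub-pixel edge detection: parabola fit to the gradient around its peak.
import numpy as np

x = np.arange(100)
profile = 1.0 / (1.0 + np.exp(-(x - 40.3) / 1.5))   # edge truly at 40.3 px
grad = np.gradient(profile)

k = int(np.argmax(grad))                             # integer-pixel peak
a, b, c = np.polyfit(x[k - 2:k + 3], grad[k - 2:k + 3], 2)
edge_subpix = -b / (2.0 * a)                         # vertex of the parabola
print(f"integer estimate {k}, sub-pixel estimate {edge_subpix:.3f}")
```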
Xu, Dong; Zhang, Jian; Roy, Ambrish; Zhang, Yang
2011-01-01
I-TASSER is an automated pipeline for protein tertiary structure prediction using multiple threading alignments and iterative structure assembly simulations. In CASP9 experiments, two new algorithms, QUARK and FG-MD, were added to the I-TASSER pipeline for improving the structural modeling accuracy. QUARK is a de novo structure prediction algorithm used for structure modeling of proteins that lack detectable template structures. For distantly homologous targets, QUARK models are found useful as a reference structure for selecting good threading alignments and guiding the I-TASSER structure assembly simulations. FG-MD is an atomic-level structural refinement program that uses structural fragments collected from the PDB structures to guide molecular dynamics simulation and improve the local structure of predicted model, including hydrogen-bonding networks, torsion angles and steric clashes. Despite considerable progress in both the template-based and template-free structure modeling, significant improvements on protein target classification, domain parsing, model selection, and ab initio folding of beta-proteins are still needed to further improve the I-TASSER pipeline. PMID:22069036
Improving transmembrane protein consensus topology prediction using inter-helical interaction.
Wang, Han; Zhang, Chao; Shi, Xiaohu; Zhang, Li; Zhou, You
2012-11-01
Alpha helix transmembrane proteins (αTMPs) represent roughly 30% of all open reading frames (ORFs) in a typical genome and are involved in many critical biological processes. Due to their special physicochemical properties, it is hard to crystallize them and obtain high-resolution structures experimentally; thus, sequence-based topology prediction is highly desirable for the study of transmembrane proteins (TMPs), both in structure prediction and function prediction. Various model-based topology prediction methods have been developed, but the accuracy of those individual predictors remains poor due to the limitations of the methods or the features they use. Thus, consensus topology prediction methods become practical for high-accuracy applications by combining the advances of the individual predictors. Here, based on the observation that inter-helical interactions are commonly found within transmembrane helices (TMHs) and strongly indicate their existence, we present a novel consensus topology prediction method for αTMPs, CNTOP, which incorporates four leading individual topology predictors and further improves the prediction accuracy by using the predicted inter-helical interactions. The method achieved 87% prediction accuracy on a benchmark dataset and 78% accuracy on a non-redundant dataset composed of polytopic αTMPs. Our method achieves higher topology accuracy than any other individual or consensus predictor, and at the same time the TMHs are more accurately predicted in their length and locations, with both the false positives (FPs) and the false negatives (FNs) decreased dramatically. CNTOP is available at: http://ccst.jlu.edu.cn/JCSB/cntop/CNTOP.html. Copyright © 2012 Elsevier B.V. All rights reserved.
Powell, Daniel K; Lin, Eaton; Silberzweig, James E; Kagetsu, Nolan J
2014-03-01
To retrospectively compare resident adherence to checklist-style structured reporting for maxillofacial computed tomography (CT) from the emergency department (when required vs. suggested between two programs). To compare radiology resident reporting accuracy before and after introduction of the structured report and assess its ability to decrease the rate of undetected pathology. We introduced a reporting checklist for maxillofacial CT into our dictation software without specific training, requiring it at one program and suggesting it at another. We quantified usage among residents and compared reporting accuracy, before and after counting and categorizing faculty addenda. There was no significant change in resident accuracy in the first few months, with residents acting as their own controls (directly comparing performance with and without the checklist). Adherence to the checklist at program A (where it originated and was required) was 85% of reports compared to 9% of reports at program B (where it was suggested). When using program B as a secondary control, there was no significant difference in resident accuracy with or without using the checklist (comparing different residents using the checklist to those not using the checklist). Our results suggest that there is no automatic value of checklists for improving radiology resident reporting accuracy. They also suggest the importance of focused training, checklist flexibility, and a period of adjustment to a new reporting style. Mandatory checklists were readily adopted by residents but not when simply suggested. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Retrieval of Urban Boundary Layer Structures from Doppler Lidar Data. Part I: Accuracy Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Quanxin; Lin, Ching Long; Calhoun, Ron
2008-01-01
Two coherent Doppler lidars from the US Army Research Laboratory (ARL) and Arizona State University (ASU) were deployed in the Joint Urban 2003 atmospheric dispersion field experiment (JU2003) held in Oklahoma City. The dual lidar data are used to evaluate the accuracy of the four-dimensional variational data assimilation (4DVAR) method and identify the coherent flow structures in the urban boundary layer. The objectives of the study are three-fold. The first objective is to examine the effect of eddy viscosity models on the quality of retrieved velocity data. The second objective is to determine the fidelity of single-lidar 4DVAR and evaluate the difference between single- and dual-lidar retrievals. The third objective is to correlate the retrieved flow structures with the ground building data. It is found that the approach of treating eddy viscosity as part of control variables yields better results than the approach of prescribing viscosity. The ARL single-lidar 4DVAR is able to retrieve radial velocity fields with an accuracy of 98% in the along-beam direction and 80-90% in the cross-beam direction. For the dual-lidar 4DVAR, the accuracy of retrieved radial velocity in the ARL cross-beam direction improves to 90-94%. By using the dual-lidar retrieved data as a reference, the single-lidar 4DVAR is able to recover fluctuating velocity fields with 70-80% accuracy in the along-beam direction and 60-70% accuracy in the cross-beam direction. Large-scale convective roll structures are found in the vicinity of downtown airpark and parks. Vortical structures are identified near the business district. Strong updrafts and downdrafts are also found above a cluster of restaurants.
Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho
2016-01-01
This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure installed with targets are taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines the correlation and the least-square image matching so that the sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously with multiple frequencies. The results by the present method showed good agreement with the measurement by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional defection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366
Numerical modeling and model updating for smart laminated structures with viscoelastic damping
NASA Astrophysics Data System (ADS)
Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan
2018-07-01
This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.
Docking and Virtual Screening Strategies for GPCR Drug Discovery.
Beuming, Thijs; Lenselink, Bart; Pala, Daniele; McRobb, Fiona; Repasky, Matt; Sherman, Woody
2015-01-01
Progress in structure determination of G protein-coupled receptors (GPCRs) has made it possible to apply structure-based drug design (SBDD) methods to this pharmaceutically important target class. The quality of the GPCR structures available for SBDD projects falls on a spectrum ranging from high-resolution crystal structures (<2 Å), where all water molecules in the binding pocket are resolved, to lower resolution (>3 Å), where some protein residues are not resolved, and finally to homology models that are built using distantly related templates. Each GPCR project involves a distinct set of opportunities and challenges and requires different approaches to model the interaction between the receptor and the ligands. In this review we discuss docking and virtual screening against GPCRs and highlight several refinement and post-processing steps that can be used to improve the accuracy of these calculations. Several examples are discussed that illustrate specific steps that can be taken to improve docking and virtual screening accuracy. While GPCRs are a unique target class, many of the methods and strategies outlined in this review are general and therefore applicable to other protein families.
Karp, Jerome M; Eryilmaz, Ertan; Cowburn, David
2015-01-01
There has been a longstanding interest in being able to accurately predict NMR chemical shifts from structural data. Recent studies have focused on using molecular dynamics (MD) simulation data as input for improved prediction. Here we examine the accuracy of chemical shift prediction for intein systems, which have regions of intrinsic disorder. We find that using MD simulation data as input for chemical shift prediction does not consistently improve prediction accuracy over use of a static X-ray crystal structure. This appears to result from the complex conformational ensemble of the disordered protein segments. We show that using accelerated molecular dynamics (aMD) simulations improves chemical shift prediction, suggesting that methods which better sample the conformational ensemble like aMD are more appropriate tools for use in chemical shift prediction for proteins with disordered regions. Moreover, our study suggests that data accurately reflecting protein dynamics must be used as input for chemical shift prediction in order to correctly predict chemical shifts in systems with disorder.
Axisymmetric inlet minimum weight design method
NASA Technical Reports Server (NTRS)
Nadell, Shari-Beth
1995-01-01
An analytical method for determining the minimum weight design of an axisymmetric supersonic inlet has been developed. The goal of this method development project was to improve the ability to predict the weight of high-speed inlets in conceptual and preliminary design. The initial model was developed using information that was available from inlet conceptual design tools (e.g., the inlet internal and external geometries and pressure distributions). Stiffened shell construction was assumed. Mass properties were computed by analyzing a parametric cubic curve representation of the inlet geometry. Design loads and stresses were developed at analysis stations along the length of the inlet. The equivalent minimum structural thicknesses for both shell and frame structures required to support the maximum loads produced by various load conditions were then determined. Preliminary results indicated that inlet hammershock pressures produced the critical design load condition for a significant portion of the inlet. By improving the accuracy of inlet weight predictions, the method will improve the fidelity of propulsion and vehicle design studies and increase the accuracy of weight versus cost studies.
Wognum, S; Bondar, L; Zolnay, A G; Chai, X; Hulshof, M C C M; Hoogeman, M S; Bel, A
2013-02-01
Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty in accounting for the difference in flexibility between the bladder wall and tumor and the lack of visible anatomical landmarks for validation. The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm has been previously extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal parameter values for the flexibility and bladder weight parameters were determined for the weighted S-TPS-RPM. The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.
Accuracy Analysis of a Box-wing Theoretical SRP Model
NASA Astrophysics Data System (ADS)
Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui
2016-07-01
For the BeiDou navigation satellite system (BDS), a high-accuracy solar radiation pressure (SRP) model is necessary for high-precision applications, especially with the establishment of the global BDS in the future, and the accuracy of the BDS broadcast ephemeris needs to be improved. We therefore established a box-wing theoretical SRP model with a fine structural representation, adding a conical shadow factor for the Earth and Moon. We verified this SRP model with the GPS Block IIF satellites, using data from the PRN 1, 24, 25, and 27 satellites. The results show that the physical SRP model has higher accuracy for precise orbit determination (POD) and forecasting for GPS IIF satellites than the Bern empirical model: the 3D RMS of the orbit is about 20 centimeters. The POD accuracy for both models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day, and 7-day orbit predictions; the longer the prediction arc length, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day, and 7-day arc lengths are 0.4 m, 2.0 m, and 10.0 m, respectively, versus 0.9 m, 5.5 m, and 30 m with the Bern empirical model. We applied this approach to the BDS and derived an SRP model for the BeiDou satellites, which we then tested and verified with one month of BeiDou data; initial results show the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that obtained with our empirical force model, which only estimates forces in the along-track and cross-track directions and a y-bias, but the orbit overlap and SLR observation evaluations show some improvement: the remaining empirical force is reduced significantly for the present BeiDou constellation.
NASA Astrophysics Data System (ADS)
Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo
2017-03-01
Contrast-enhanced mammography has been used to demonstrate functional information about a breast tumor by injecting contrast agents. However, the conventional technique with a single exposure degrades the efficiency of tumor detection due to structure overlap. Dual-energy techniques with energy-integrating detectors (EIDs) also cause an increase in radiation dose and an inaccuracy of material decomposition due to the limitations of EIDs. On the other hand, spectral mammography with photon-counting detectors (PCDs) is able to resolve the issues induced by the conventional technique and EIDs using their energy-discrimination capabilities. In this study, contrast-enhanced spectral mammography based on a PCD was implemented by using a polychromatic dual-energy model, and the proposed technique was compared with the dual-energy technique with an EID in terms of quantitative accuracy and radiation dose. The results showed that the proposed technique improved the quantitative accuracy as well as reduced the radiation dose compared to the dual-energy technique with an EID. The quantitative accuracy of the contrast-enhanced spectral mammography based on a PCD slightly improved as a function of radiation dose. Therefore, contrast-enhanced spectral mammography based on a PCD is able to provide useful information for detecting breast tumors and improving diagnostic accuracy.
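The decomposition step can be illustrated with a deliberately simplified two-bin model: if each photon-counting bin is treated as monoenergetic, log-attenuation per bin is a weighted sum of basis-material thicknesses, giving a 2x2 linear system per pixel. The paper's polychromatic model replaces this with calibrated nonlinear fits; the attenuation coefficients below are illustrative numbers only.

```python
# Linearized two-bin material decomposition for (iodine, breast tissue).
import numpy as np

mu = np.array([[12.0, 0.50],    # low-energy bin:  mu_iodine, mu_tissue (1/cm)
               [ 4.0, 0.30]])   # high-energy bin: mu_iodine, mu_tissue (1/cm)

t_true = np.array([0.05, 4.0])  # cm of iodine, cm of tissue
log_att = mu @ t_true           # simulated noiseless log-attenuation per bin

t_est = np.linalg.solve(mu, log_att)
print("estimated thicknesses (cm):", t_est)
```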
CORAL: aligning conserved core regions across domain families.
Fong, Jessica H; Marchler-Bauer, Aron
2009-08-01
Homologous protein families share highly conserved sequence and structure regions that are frequent targets for comparative analysis of related proteins and families. Many protein families, such as the curated domain families in the Conserved Domain Database (CDD), exhibit similar structural cores. To improve accuracy in aligning such protein families, we propose a profile-profile method CORAL that aligns individual core regions as gap-free units. CORAL computes optimal local alignment of two profiles with heuristics to preserve continuity within core regions. We benchmarked its performance on curated domains in CDD, which have pre-defined core regions, against COMPASS, HHalign and PSI-BLAST, using structure superpositions and comprehensive curator-optimized alignments as standards of truth. CORAL improves alignment accuracy on core regions over general profile methods, returning a balanced score of 0.57 for over 80% of all domain families in CDD, compared with the highest balanced score of 0.45 from other methods. Further, CORAL provides E-values to aid in detecting homologous protein families and, by respecting block boundaries, produces alignments with improved 'readability' that facilitate manual refinement. CORAL will be included in future versions of the NCBI Cn3D/CDTree software, which can be downloaded at http://www.ncbi.nlm.nih.gov/Structure/cdtree/cdtree.shtml. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Dushyanth, N. D.; Suma, M. N.; Latte, Mrityanjaya V.
2016-03-01
Damage in a structure can incur significant maintenance costs and serious safety problems; hence, detecting damage at an early stage is of prime importance. The main contribution of this investigation is a generic, optimal methodology to improve the accuracy of locating a flaw in a structure. The novel approach involves a two-step process. The first step aims at extracting damage-sensitive features from the received signal; these extracted features, often termed the damage index or damage indices, serve as an indicator of whether damage is present. In particular, a multilevel SVM (support vector machine) plays a vital role in distinguishing faulty from healthy structures. Once a structure is identified as damaged, in the subsequent step the position of the damage is identified using the Hilbert-Huang transform. The proposed algorithm has been evaluated in both simulation and experimental tests on a 6061 aluminum plate with dimensions 300 mm × 300 mm × 5 mm, which yield a considerable improvement in the accuracy of estimating the position of the flaw.
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied to improve image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very well suited to processing images. Using a deep convolutional neural network is better for image retrieval than directly extracting visual image features. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting and improves the accuracy of image retrieval.
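A minimal sketch of the two ingredients named above, assuming PyTorch: a small convolutional net with PReLU activations, trained with an L1 penalty on the weights added to the loss. The architecture, embedding size, and hyperparameters are placeholders, not the paper's network.

```python
# CNN with PReLU activations and an L1 weight penalty (one training step).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.PReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.PReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),                       # 64-d embedding used for retrieval
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_lambda = 1e-5

x = torch.randn(8, 3, 64, 64)                # stand-in image batch
target = torch.randn(8, 64)                  # stand-in embedding targets
emb = model(x)
loss = nn.functional.mse_loss(emb, target)
loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())
loss.backward()
opt.step()
print(float(loss))
```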
CT image segmentation methods for bone used in medical additive manufacturing.
van Eijnatten, Maureen; van Dijk, Roelof; Dobbe, Johannes; Streekstra, Geert; Koivisto, Juha; Wolff, Jan
2018-01-01
The accuracy of additive manufactured medical constructs is limited by errors introduced during image segmentation. The aim of this study was to review the existing literature on different image segmentation methods used in medical additive manufacturing. Thirty-two publications that reported on the accuracy of bone segmentation based on computed tomography images were identified using PubMed, ScienceDirect, Scopus, and Google Scholar. The advantages and disadvantages of the different segmentation methods used in these studies were evaluated and reported accuracies were compared. The spread between the reported accuracies was large (0.04 mm - 1.9 mm). Global thresholding was the most commonly used segmentation method with accuracies under 0.6 mm. The disadvantage of this method is the extensive manual post-processing required. Advanced thresholding methods could improve the accuracy to under 0.38 mm. However, such methods are currently not included in commercial software packages. Statistical shape model methods resulted in accuracies from 0.25 mm to 1.9 mm but are only suitable for anatomical structures with moderate anatomical variations. Thresholding remains the most widely used segmentation method in medical additive manufacturing. To improve the accuracy and reduce the costs of patient-specific additive manufactured constructs, more advanced segmentation methods are required. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
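Global thresholding, the most common method in the review above, reduces to a single comparison per voxel; the sketch below uses a synthetic volume, and the HU threshold is a typical choice for cortical bone rather than one prescribed by the review.

```python
# Global thresholding of a CT volume: voxels above a HU threshold are bone.
import numpy as np

ct = np.random.normal(0, 300, (64, 64, 64))   # stand-in HU volume
ct[20:40, 20:40, 20:40] += 1200               # embedded "bone" block

bone_mask = ct > 400                          # HU threshold (assumed value)
print("bone voxels:", int(bone_mask.sum()))
```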
Improving sub-grid scale accuracy of boundary features in regional finite-difference models
Panday, Sorab; Langevin, Christian D.
2012-01-01
As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.
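A one-dimensional toy of the right-hand-side (Picard) variant of the correction described above: a boundary feature sits a fraction alpha of the way between two nodes, so its conductance term is shifted from the nodal head to the interpolated "ghost" head via a lagged RHS correction. The geometry, conductance, and boundary values are invented, and the real formulation handles full 3D finite-difference stencils.

```python
# 1D ghost-node correction: a river attaches between nodes 5 and 6.
import numpy as np

n = 11
K, dx = 1.0, 1.0
alpha = 0.4                      # river sits 0.4 of the way from node 5 to 6
C_riv, h_riv = 0.5, 2.0          # river conductance and stage

# Standard FD matrix for steady 1D flow, Dirichlet ends h=0 and h=1.
A = np.zeros((n, n)); b = np.zeros(n)
for i in range(1, n - 1):
    A[i, i - 1:i + 2] = K / dx**2 * np.array([-1.0, 2.0, -1.0])
A[0, 0] = A[-1, -1] = 1.0; b[-1] = 1.0

# Without GNC, the river term uses the head at node 5 directly.
A[5, 5] += C_riv; b[5] += C_riv * h_riv

h = np.zeros(n)
for _ in range(50):              # Picard iterations with lagged GNC on the RHS
    h_ghost = h[5] + alpha * (h[6] - h[5])   # interpolated ghost-node head
    b_corr = b.copy()
    b_corr[5] += C_riv * (h[5] - h_ghost)    # shift river term to the ghost head
    h = np.linalg.solve(A, b_corr)
print(h)
```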
NASA Technical Reports Server (NTRS)
Berger, P. E.; Thornton, E. A.
1976-01-01
The APAS program, a multistation structural synthesis procedure developed to evaluate material, geometry, and configuration against the various design criteria usually considered for the primary structure of transport aircraft, is described and evaluated. Recommendations to improve the accuracy and extend the capabilities of the APAS program are given. Flow diagrams are included.
NASA Astrophysics Data System (ADS)
Su, H.; Yan, X. H.
2017-12-01
Subsurface thermal structure of the global ocean is a key factor that reflects the impact of global climate variability and change. Accurately determining and describing the global subsurface and deeper ocean thermal structure from satellite measurements is becoming ever more important for understanding ocean interior anomalies and dynamic processes during the recent global warming and hiatus. It is essential but challenging to determine the extent to which such surface remote sensing observations can be used to develop information about the global ocean interior. This study proposed a Support Vector Regression (SVR) method to estimate the Subsurface Temperature Anomaly (STA) in the global ocean. The SVR model can estimate the global STA in the upper 1000 m well from a suite of satellite remote sensing observations of sea surface parameters (including Sea Surface Height Anomaly (SSHA), Sea Surface Temperature Anomaly (SSTA), Sea Surface Salinity Anomaly (SSSA) and Sea Surface Wind Anomaly (SSWA)), with in situ Argo data for training and testing at different depth levels. Here, we employed the mean squared error (MSE) and the coefficient of determination (R2) to assess SVR performance on the STA estimation. The results from the SVR model were validated for accuracy and reliability using the worldwide Argo STA data. The average MSE and R2 of the 15 levels are 0.0090 / 0.0086 / 0.0087 and 0.443 / 0.457 / 0.485 for 2-attribute (SSHA, SSTA) / 3-attribute (SSHA, SSTA, SSSA) / 4-attribute (SSHA, SSTA, SSSA, SSWA) SVR, respectively. The estimation accuracy was improved by including SSSA and SSWA as SVR inputs (MSE decreased by 0.4% / 0.3% and R2 increased by 1.4% / 4.2% on average). However, the estimation accuracy gradually decreased with increasing depth below 500 m. The results showed that SSSA and SSWA, in addition to SSTA and SSHA, are useful parameters that can help estimate the subsurface thermal structure and improve the STA estimation accuracy. In the future, additional useful sea surface parameters from satellite remote sensing could be identified as input attributes to further improve the STA estimation accuracy from machine learning. This study provides a helpful technique for studying, from satellite observations at the global scale, thermal variability in the ocean interior, which has played an important role in the recent global warming and hiatus.
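A minimal sketch of the estimation step at one depth level, assuming scikit-learn's SVR (the abstract does not name an implementation); the synthetic arrays stand in for the satellite surface attributes and the Argo-derived STA labels.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
# 4-attribute input: SSHA, SSTA, SSSA, SSWA (synthetic stand-ins)
X = rng.normal(size=(2000, 4))
y = 0.6 * X[:, 1] + 0.3 * X[:, 0] + 0.1 * rng.normal(size=2000)  # STA at one depth

X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]
model = SVR(kernel="rbf", C=1.0, epsilon=0.01).fit(X_train, y_train)
pred = model.predict(X_test)
print("MSE:", mean_squared_error(y_test, pred), "R2:", r2_score(y_test, pred))
```

One such model would be trained per depth level, with MSE and R2 averaged across levels as in the abstract.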
Improving the accuracy and usability of Iowa falling weight deflectometer data : [summary].
DOT National Transportation Integrated Search
2013-05-01
Highway agencies periodically evaluate the structural condition of roads as part of their routine maintenance and rehabilitation activities. The falling-weight deflectometer (FWD) test measures road surface deflections resulting from an applied impul...
Multisensor Parallel Largest Ellipsoid Distributed Data Fusion with Unknown Cross-Covariances
Liu, Baoyu; Zhan, Xingqun; Zhu, Zheng H.
2017-01-01
As the largest ellipsoid (LE) data fusion algorithm can only be applied to a two-sensor system, in this contribution a parallel fusion structure is proposed to introduce the LE algorithm into a multisensor system with unknown cross-covariances, and three parallel fusion structures based on different estimate pairing methods are presented and analyzed. In order to assess the influence of fusion structure on fusion performance, two fusion performance assessment parameters are defined: Fusion Distance and Fusion Index. Moreover, the formula for calculating the upper bounds of the actual fused error covariances of the presented multisensor LE fusers is also provided. As demonstrated with simulation examples, the Fusion Index indicates the fuser's actual fused accuracy, its sensitivity to the sensor order, and its robustness to the accuracy of newly added sensors. Compared to the LE fuser with a sequential structure, the LE fusers with the proposed parallel structures not only significantly improve these properties, but also perform better in consistency and computational efficiency. The presented multisensor LE fusers generally have better accuracy than the covariance intersection (CI) fusion algorithm and are consistent when the local estimates are weakly correlated. PMID:28661442
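A minimal sketch of covariance intersection (CI), the baseline fusion rule the abstract compares against, not the LE algorithm itself; the two local estimates are synthetic, and the scalar weight is found by a simple trace-minimizing grid search.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates with unknown cross-covariance via CI."""
    best = None
    for w in np.linspace(1e-3, 1 - 1e-3, n_grid):
        P_inv = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(P_inv)
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
            best = (x, P)
    return best

x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, -0.1]), np.diag([4.0, 1.0])
x_fused, P_fused = covariance_intersection(x1, P1, x2, P2)
print(x_fused, np.trace(P_fused))
```

CI guarantees consistency for any unknown cross-covariance, which is the property the LE fusers are benchmarked against.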
Short-term wind speed prediction based on the wavelet transformation and Adaboost neural network
NASA Astrophysics Data System (ADS)
Hai, Zhou; Xiang, Zhu; Haijian, Shao; Ji, Wu
2018-03-01
The operation of the power grid is inevitably affected by the increasing scale of wind farms due to the inherent randomness and uncertainty of wind, so accurate wind speed forecasting is critical for stable grid operation. Typically, traditional forecasting methods do not take into account the frequency characteristics of wind speed and therefore cannot reflect the nature of wind speed signal changes, resulting in low generalization ability of the model structure. An AdaBoost neural network combined with multi-resolution, multi-scale wavelet decomposition of the wind speed is proposed to design the model structure and improve forecasting accuracy and generalization ability. An experimental evaluation using data from a real wind farm in Jiangsu province demonstrates that the proposed strategy improves the robustness and accuracy of the forecasts.
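A minimal sketch of the decompose-then-boost idea, assuming PyWavelets and scikit-learn; the abstract boosts neural networks, whereas this sketch uses scikit-learn's default tree-based weak learner, and the synthetic series, wavelet choice, and lag count are all illustrative.

```python
import numpy as np
import pywt
from sklearn.ensemble import AdaBoostRegressor

def band_signals(series, wavelet="db4", level=3):
    """Split a series into per-band reconstructions that sum to the original."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(keep, wavelet)[: len(series)])
    return bands

def lagged(x, n_lags=6):
    # Feature matrix of lagged values and the corresponding one-step target
    X = np.column_stack([x[i : len(x) - n_lags + i] for i in range(n_lags)])
    return X, x[n_lags:]

rng = np.random.default_rng(1)
wind = np.sin(np.linspace(0, 40, 1200)) + 0.3 * rng.normal(size=1200)

# One boosted model per frequency band; the final forecast is the sum over bands.
forecast = np.zeros(200)
for band in band_signals(wind):
    X, y = lagged(band)
    model = AdaBoostRegressor(n_estimators=50).fit(X[:-200], y[:-200])
    forecast += model.predict(X[-200:])
print("last forecasts:", forecast[-3:])
```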
Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions
NASA Technical Reports Server (NTRS)
Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.
2011-01-01
A surrogate model methodology is described for predicting in real time the residual strength of flight structures with discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. A residual strength test of a metallic, integrally-stiffened panel is simulated to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data would, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.
Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions
NASA Technical Reports Server (NTRS)
Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.
2011-01-01
A surrogate model methodology is described for predicting, during flight, the residual strength of aircraft structures that sustain discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. Two ductile fracture simulations are presented to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data does, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high fidelity fracture simulation framework provide useful tools for adaptive flight technology.
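A minimal sketch of the surrogate step common to the two versions of this work above, assuming scikit-learn's MLPRegressor as the artificial neural network; the damage parameters (here crack location and length) and the residual-strength targets are synthetic stand-ins for the finite-element training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Design of experiment over damage parameters: [crack x-location, crack length]
X = rng.uniform([0.0, 5.0], [1.0, 50.0], size=(300, 2))
# Synthetic residual strength: decreases with crack length (stand-in for FE results)
y = 100.0 - 1.2 * X[:, 1] + 5.0 * np.sin(6.0 * X[:, 0]) + rng.normal(0, 1, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X_tr, y_tr)
print("real-time prediction:", surrogate.predict([[0.5, 25.0]]))
print("held-out R2:", surrogate.score(X_te, y_te))
```

Once trained offline, a prediction is a single forward pass, which is what makes the surrogate usable in real time.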
Davey, James A; Chica, Roberto A
2015-04-01
Computational protein design (CPD) predictions are highly dependent on the structure of the input template used. However, it is unclear how small differences in template geometry translate to large differences in stability prediction accuracy. Herein, we explored how structural changes to the input template affect the outcome of stability predictions by CPD. To do this, we prepared alternate templates by Rotamer Optimization followed by energy Minimization (ROM) and used them to recapitulate the stability of 84 protein G domain β1 mutant sequences. In the ROM process, side-chain rotamers for wild-type (WT) or mutant sequences are optimized on crystal or nuclear magnetic resonance (NMR) structures prior to template minimization, resulting in alternate structures termed ROM templates. We show that use of ROM templates prepared from sequences known to be stable results predominantly in improved prediction accuracy compared to using the minimized crystal or NMR structures. Conversely, ROM templates prepared from sequences that are less stable than the WT reduce prediction accuracy by increasing the number of false positives. These observed changes in prediction outcomes are attributed to differences in side-chain contacts made by rotamers in ROM templates. Finally, we show that ROM templates prepared from sequences that are unfolded or that adopt a nonnative fold result in the selective enrichment of sequences that are also unfolded or that adopt a nonnative fold, respectively. Our results demonstrate the existence of a rotamer bias caused by the input template that can be harnessed to skew predictions toward sequences displaying desired characteristics. © 2014 The Protein Society.
Improving inflow forecasting into hydropower reservoirs through a complementary modelling framework
NASA Astrophysics Data System (ADS)
Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.
2014-10-01
Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and the benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead time is considered within the day-ahead (Elspot) market of the Nordic exchange market. We present here a new approach for issuing hourly reservoir inflow forecasts that aims to improve on existing forecasting models that are in place operationally, without needing to modify the pre-existing approach, but instead formulating an additive or complementary model that is independent and captures the structure the existing model may be missing. Besides improving the forecast skill of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that contain suitable information for reducing uncertainty in decision-making processes in hydropower system operation. The procedure presented comprises an error model added on top of an unalterable constant-parameter conceptual model, the models being demonstrated with reference to the 207 km2 Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead times up to 17 h. Season-based evaluations indicated that the improvement in inflow forecasts varies across seasons; inflow forecasts in autumn and spring are less successful, with the 95% prediction interval bracketing less than 95% of the observations for lead times beyond 17 h.
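A minimal sketch of one simple complementary error model, assuming an AR(1) structure on the operational model's residuals; the abstract derives its error-model structure from residual attributes, so the AR(1) form and the synthetic series here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
obs = 50 + 10 * np.sin(np.linspace(0, 20, 500))            # observed inflow
base = obs + 3.0 + 0.1 * rng.normal(0, 1, 500).cumsum()    # biased base forecast

resid = obs - base
# Fit AR(1): resid[t] = c + phi * resid[t-1], by least squares
A = np.column_stack([np.ones(499), resid[:-1]])
c, phi = np.linalg.lstsq(A, resid[1:], rcond=None)[0]

# Complementary correction propagated over the 24 h lead time
last, corr = resid[-1], []
for _ in range(24):
    last = c + phi * last
    corr.append(last)
print("hour-1 and hour-24 corrections:", corr[0], corr[-1])
```

The corrected forecast is the unaltered base model output plus this additive term, which is the sense in which the framework is complementary.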
High-Accuracy Ring Laser Gyroscopes: Earth Rotation Rate and Relativistic Effects
NASA Astrophysics Data System (ADS)
Beverini, N.; Di Virgilio, A.; Belfi, J.; Ortolan, A.; Schreiber, K. U.; Gebauer, A.; Klügel, T.
2016-06-01
The Gross Ring G is a square ring laser gyroscope, built as a monolithic Zerodur structure with 4 m length on all sides. It has demonstrated that a large ring laser provides a sensitivity high enough to measure the rotational rate of the Earth with a high precision of ΔΩ_E < 10^-8. It is possible to show that further improvement in accuracy could allow the observation of the metric frame dragging produced by the Earth's rotating mass (Lense-Thirring effect), as predicted by General Relativity. Furthermore, it can provide a local measurement of the Earth's rotational rate with a sensitivity near to that provided by the international IERS system. The GINGER project intends to take this level of sensitivity further and to improve the accuracy and long-term stability. A monolithic structure similar to the G ring laser is not available for GINGER; therefore, the preliminary goal is to demonstrate the feasibility of a larger gyroscope structure in which the mechanical stability is obtained through active control of the geometry. A prototype moderate-size gyroscope (GP-2) has been set up in Pisa to test this active control of the ring geometry, while a second structure (GINGERino) has been installed inside the Gran Sasso underground laboratory to investigate the properties of a deep underground laboratory in view of the installation of a future GINGER apparatus. Preliminary data from these two latter instruments are presented.
Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models
NASA Astrophysics Data System (ADS)
Zang, Tianwu
Predicting the three-dimensional structure of proteins has been a major interest in modern computational biology. While many successful methods can generate models with 3-5 Å root-mean-square deviation (RMSD) from the solution structure, progress in refining these models is quite slow. It is therefore urgently necessary to develop effective methods that bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, the refinement test of two CASP10 targets using the PCST-EBM method indicates that EBM may bring the initial model to even higher quality levels. Furthermore, a multi-round refinement protocol of PCST-SBM improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results confirm the crucial role of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.
Pairwise graphical models for structural health monitoring with dense sensor arrays
NASA Astrophysics Data System (ADS)
Mohammadi Ghazi, Reza; Chen, Justin G.; Büyüköztürk, Oral
2017-09-01
Through advances in sensor technology and development of camera-based measurement techniques, it has become affordable to obtain high spatial resolution data from structures. Although measured datasets become more informative as the number of sensors increases, the spatial dependencies between sensor data increase at the same time. Therefore, appropriate data analysis techniques are needed to handle the inference problem in the presence of these dependencies. In this paper, we propose a novel approach that uses graphical models (GM) to account for the spatial dependencies between sensor measurements in dense sensor networks or arrays, improving damage localization accuracy in structural health monitoring (SHM) applications. Because there are always unobserved damaged states in this application, the available information is insufficient for learning the GMs. To overcome this challenge, we propose an approximated model that uses the mutual information between sensor measurements to learn the GMs. The study is backed by experimental validation of the method on two test structures. The first is a three-story two-bay steel model structure instrumented with MEMS accelerometers. The second experimental setup consists of a plate structure and a video camera to measure the displacement field of the plate. Our results show that accounting for the spatial dependencies with the proposed algorithm can significantly improve damage localization accuracy.
Shamim, Mohammad Tabrez Anwar; Anwaruddin, Mohammad; Nagarajaram, H A
2007-12-15
Fold recognition is a key step in the protein structure discovery process, especially when traditional sequence comparison methods fail to yield convincing structural homologies. Although many methods have been developed for protein fold recognition, their accuracies remain low. This can be attributed to insufficient exploitation of fold discriminatory features. We have developed a new method for protein fold recognition using structural information of amino acid residues and amino acid residue pairs. Since protein fold recognition can be treated as a protein fold classification problem, we have developed a Support Vector Machine (SVM) based classifier approach that uses secondary structural state and solvent accessibility state frequencies of amino acids and amino acid pairs as feature vectors. Among the individual properties examined, secondary structural state frequencies of amino acids gave an overall accuracy of 65.2% for fold discrimination, which is better than the accuracy of any method reported so far in the literature. Combining secondary structural state frequencies with solvent accessibility state frequencies of amino acids and amino acid pairs further improved the fold discrimination accuracy to more than 70%, which is approximately 8% higher than the best available method. In this study we have also tested, for the first time, an all-together multi-class method known as the Crammer and Singer method for protein fold classification. Our studies reveal that the three multi-class classification methods, namely one-versus-all, one-versus-one, and the Crammer and Singer method, yield similar predictions. Dataset and stand-alone program are available upon request.
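A minimal sketch of the classifier setup, assuming scikit-learn's SVC; the 40-dimensional feature vectors stand in for the secondary-structural-state and solvent-accessibility-state frequencies, and the one-versus-one decision shape mirrors one of the three multi-class schemes compared.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_fold_classes = 27                        # e.g., a set of SCOP fold classes
X = rng.normal(size=(1350, 40))            # stand-in state-frequency features
y = rng.integers(0, n_fold_classes, 1350)
X += 0.8 * np.eye(n_fold_classes, 40)[y]   # give each class a separable signature

clf = SVC(kernel="rbf", C=10.0, decision_function_shape="ovo")
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```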
SA36. Atypical Memory Structure Related to Recollective Ability
Greenland-White, Sarah; Niendam, Tara
2017-01-01
Abstract Background: People with schizophrenia have impaired recognition memory and disproportionate recollection rather than familiarity deficits. This pattern also occurs in individuals with early psychosis (EP) and those at clinical high risk (CHR; Ragland et al., 2016). Additionally, these groups show atypical relationships between different memory processes, with patients demonstrating a stronger reliance on familiarity to support recognition accuracy. However, it is unclear whether these group differences represent a compensatory “trade-off” in memory strategies, whereby patients adopt an overreliance on familiarity to compensate for impaired recollection. We examined data from the Relational and Item-Specific memory task (RiSE) in healthy control (HC), EP and CHR participants, and contrasted subgroups with and without prominent recollection impairments. Interrelations between these memory processes (accuracy, recollection, and familiarity) were examined with Structural Equation Modeling (SEM). Methods: A total of 181 individuals (57 HC, 101 EP, and 21 CHR) completed the RiSE. Measures of recognition accuracy, familiarity, and recollection were computed. We divided the patient group into those with poor recollection (overall d’ recognition accuracy < 1.5, n = 52) and those with good recollection (overall d’ recollection accuracy ≥ 1.5, n = 70). SEM was used to investigate the pattern of memory relationships between HC and patient groups as well as between patients with good versus poor recollection. Results: Recollection and familiarity were negatively correlated in the HC group (r = −.467, P < .01) and in the patient group, though more weakly (r = −.288, P < .05). Improved recollection was correlated with overall improvement in recognition accuracy for both groups (HC r = .771, P < .01; patients r = .753, P < .01). Improved familiarity was associated with higher recognition accuracy in the patient group only (r = .361, P < .01). Moreover, patients with poor recollection showed a stronger association (Fisher’s Z = 2.58, P < .01) between familiarity performance and recognition accuracy (r = .718, P < .01) than patients with good recollection performance (r = .396, P < .01). Conclusion: Results suggest that patients may be overrelying on more intact familiarity processes to support recognition accuracy. This potential compensatory strategy is particularly marked in those patients with the worst recollection abilities. The finding that recognition accuracy remains impaired in both patient subgroups, however, reveals that this compensatory familiarity-based strategy is not fully successful. Further work is needed to understand how patients can be remediated for their consistently impaired recollection processes.
Overcoming complexities: Damage detection using dictionary learning framework
NASA Astrophysics Data System (ADS)
Alguri, K. Supreet; Melville, Joseph; Deemer, Chris; Harley, Joel B.
2018-04-01
For in situ damage detection, guided wave structural health monitoring systems have been widely researched due to their ability to evaluate large areas and to detect many types of damage. These systems often evaluate structural health by recording initial baseline measurements from a pristine (i.e., undamaged) test structure and then comparing later measurements with that baseline. Yet it is not always feasible to have a pristine baseline. As an alternative, substituting the baseline with data from a surrogate (nearly identical and pristine) structure is a logical option. While effective in some circumstances, surrogate data is often still a poor substitute for pristine baseline measurements due to minor differences between the structures. To overcome this challenge, we present a dictionary learning framework to adapt surrogate baseline data to better represent an undamaged test structure. We compare the performance of our framework with two other surrogate-based damage detection strategies: (1) using raw surrogate data for comparison and (2) using sparse wavenumber analysis, a precursor to our framework for improving the surrogate data. We apply our framework to guided wave data from two 108 mm by 108 mm aluminum plates. With 20 measurements, we show that our dictionary learning framework achieves a 98% accuracy, raw surrogate data achieves a 92% accuracy, and sparse wavenumber analysis achieves a 57% accuracy.
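A minimal sketch of the general idea, assuming scikit-learn's dictionary-learning tools; learning atoms from surrogate baseline signals and flagging test signals by sparse-reconstruction error is a generic formulation, not the authors' exact framework.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 128)
# Surrogate baseline guided-wave snapshots (synthetic tone bursts)
surrogate = np.array([np.sin(2 * np.pi * (30 + rng.normal(0, 1)) * t)
                      * np.exp(-40 * (t - 0.3) ** 2) for _ in range(60)])

dico = DictionaryLearning(n_components=10, transform_algorithm="omp",
                          transform_n_nonzero_coefs=3, random_state=0).fit(surrogate)

def detection_score(signal):
    """Reconstruction error under the surrogate-trained dictionary."""
    code = dico.transform(signal[None, :])
    recon = code @ dico.components_
    return float(np.linalg.norm(signal - recon))

healthy = surrogate[0]
damaged = healthy + 0.5 * np.sin(2 * np.pi * 80 * t) * np.exp(-60 * (t - 0.6) ** 2)
print("healthy:", detection_score(healthy), "damaged:", detection_score(damaged))
```

Signals the dictionary cannot represent sparsely, such as scattering introduced by damage, produce larger residuals.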
Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City.
Yang, Wan; Olson, Donald R; Shaman, Jeffrey
2016-11-01
The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast.
Bayesian model aggregation for ensemble-based estimates of protein pKa values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gosink, Luke J.; Hogan, Emilie A.; Pulsipher, Trenton C.
2014-03-01
This paper investigates an ensemble-based technique called Bayesian Model Averaging (BMA) to improve the performance of protein amino acid pKa predictions. Structure-based pKa calculations play an important role in the mechanistic interpretation of protein structure and are also used to determine a wide range of protein properties. A diverse set of methods currently exist for pKa prediction, ranging from empirical statistical models to ab initio quantum mechanical approaches. However, each of these methods is based on a set of assumptions that have inherent bias and sensitivities that can affect a model's accuracy and generalizability for pKa prediction in complicated biomolecular systems. We use BMA to combine eleven diverse prediction methods that each estimate pKa values of amino acids in staphylococcal nuclease. These methods are based on work conducted for the pKa Cooperative, and the pKa measurements are based on experimental work conducted by the García-Moreno lab. Our study demonstrates that the aggregated estimate obtained from BMA outperforms all individual prediction methods in our cross-validation study, with improvements of 40-70% over other method classes. This work illustrates a new possible mechanism for improving the accuracy of pKa prediction and lays the foundation for future work on aggregate models that balance computational cost with prediction accuracy.
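A minimal sketch of the aggregation idea, assuming Gaussian likelihoods on a calibration split to weight each method; the component predictors here are random stand-ins, and a full BMA treatment (e.g., weight estimation via EM) would be more involved.

```python
import numpy as np

rng = np.random.default_rng(6)
true_pka = rng.uniform(2, 8, size=30)        # measured pKa values (stand-ins)
# Predictions from four hypothetical methods, each with its own bias and noise
preds = np.array([true_pka + rng.normal(b, s, 30)
                  for b, s in [(0.1, 0.3), (-0.4, 0.5), (0.0, 0.8), (0.3, 0.4)]])

# Weight each method by its Gaussian log-likelihood on a calibration split
calib, test = slice(0, 15), slice(15, 30)
resid = preds[:, calib] - true_pka[calib]
sigma = resid.std(axis=1) + 1e-9
loglik = -0.5 * np.sum((resid / sigma[:, None]) ** 2, axis=1) - 15 * np.log(sigma)
w = np.exp(loglik - loglik.max())
w /= w.sum()

bma = w @ preds[:, test]                     # aggregated estimate on held-out sites
rmse = lambda p: float(np.sqrt(np.mean((p - true_pka[test]) ** 2)))
print("per-method RMSE:", [round(rmse(p), 2) for p in preds[:, test]])
print("BMA RMSE:", round(rmse(bma), 2))
```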
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
Recent advances in electronic structure theory and the availability of high speed vector processors have substantially increased the accuracy of ab initio potential energy surfaces. The recently developed atomic natural orbital approach for basis set contraction has reduced both the basis set incompleteness and superposition errors in molecular calculations. Furthermore, full CI calculations can often be used to calibrate a CASSCF/MRCI approach that quantitatively accounts for the valence correlation energy. These computational advances also provide a vehicle for systematically improving the calculations and for estimating the residual error in the calculations. Calculations on selected diatomic and triatomic systems will be used to illustrate the accuracy that currently can be achieved for molecular systems. In particular, the F + H2 yields HF + H potential energy hypersurface is used to illustrate the impact of these computational advances on the calculation of potential energy surfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wognum, S.; Chai, X.; Hulshof, M. C. C. M.
2013-02-15
Purpose: Future developments in image guided adaptive radiotherapy (IGART) for bladder cancer require accurate deformable image registration techniques for the precise assessment of tumor and bladder motion and deformation that occur as a result of large bladder volume changes during the course of radiotherapy treatment. The aim was to employ an extended version of a point-based deformable registration algorithm that allows control over tissue-specific flexibility, in combination with the authors' unique patient dataset, in order to overcome two major challenges of bladder cancer registration, i.e., the difficulty of accounting for the difference in flexibility between the bladder wall and tumor, and the lack of visible anatomical landmarks for validation. Methods: The registration algorithm used in the current study is an extension of the symmetric-thin plate splines-robust point matching (S-TPS-RPM) algorithm, a symmetric feature-based registration method. The S-TPS-RPM algorithm had previously been extended to allow control over the degree of flexibility of different structures via a weight parameter. The extended weighted S-TPS-RPM algorithm was tested and validated on CT data (planning- and four to five repeat-CTs) of five urinary bladder cancer patients who received lipiodol injections before radiotherapy. The performance of the weighted S-TPS-RPM method, applied to bladder and tumor structures simultaneously, was compared with a previous version of the S-TPS-RPM algorithm applied to the bladder wall structure alone and with a simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. Performance was assessed in terms of anatomical and geometric accuracy. The anatomical accuracy was calculated as the residual distance error (RDE) of the lipiodol markers, and the geometric accuracy was determined by the surface distance, surface coverage, and inverse consistency errors. Optimal values of the flexibility and bladder weight parameters were determined for the weighted S-TPS-RPM. Results: The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy compared to S-TPS-RPM registration of the bladder alone and halved the range of the anatomical errors compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of the bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. Conclusions: The weighted S-TPS-RPM registration algorithm with the additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of the bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.
Asymmetric bagging and feature selection for activities prediction of drug molecules.
Li, Guo-Zheng; Meng, Hao-Hua; Lu, Wen-Cong; Yang, Jack Y; Yang, Mary Qu
2008-05-28
Activities of drug molecules can be predicted by QSAR (quantitative structure-activity relationship) models, which overcome the high cost and long cycle of the traditional experimental method. Because the number of drug molecules with positive activity is much smaller than the number of negatives, it is important to predict molecular activities with this unbalanced situation in mind. Here, asymmetric bagging and feature selection are introduced into the problem, and asymmetric bagging of support vector machines (asBagging) is proposed for predicting drug activities to treat the unbalanced problem. At the same time, the features extracted from the structures of drug molecules affect the prediction accuracy of QSAR models. Therefore, a novel algorithm named PRIFEAB is proposed, which applies an embedded feature selection method to remove redundant and irrelevant features for asBagging. Numerical experimental results on a data set of molecular activities show that asBagging improves the AUC and sensitivity values of molecular activities, and that PRIFEAB with feature selection further helps to improve the prediction ability. Asymmetric bagging can help to improve the prediction accuracy of drug molecule activities, which can be further improved by performing feature selection to select relevant features from the drug molecule data sets.
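A minimal sketch of asymmetric bagging with SVM base learners, assuming scikit-learn; each bag keeps every positive example and bootstraps an equal-size sample of negatives, which is the core idea, though the authors' asBagging and PRIFEAB details go beyond this.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1000) > 2.2).astype(int)  # rare positives

pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
models = []
for _ in range(15):                  # one SVM per asymmetric bootstrap bag
    bag = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=True)])
    models.append(SVC(probability=True).fit(X[bag], y[bag]))

def predict_proba(X_new):
    """Average the positive-class probabilities over the ensemble."""
    return np.mean([m.predict_proba(X_new)[:, 1] for m in models], axis=0)

print("mean predicted P(active):", predict_proba(X[:5]).round(2))
```

Because every bag sees all positives, the ensemble avoids the bias toward the majority class that plain bagging would inherit.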
Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1997-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant, based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional full-matrix techniques. Formulations are given for both open-loop and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed, while accuracy improved by up to 5 decimal places.
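A minimal sketch of the open-loop case under the stated modal-coordinate structure: with diagonal modal mass, damping, and stiffness, the frequency response at each point costs O(n) rather than a full matrix factorization. The mode data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 703                                    # number of structural modes
omega_n = np.sort(rng.uniform(1, 500, n))  # natural frequencies (rad/s)
zeta = 0.01 * np.ones(n)                   # modal damping ratios
phi_in, phi_out = rng.normal(size=n), rng.normal(size=n)  # modal input/output rows

def frf(omega):
    """Open-loop FRF as a sum over modes: O(n) per frequency point."""
    denom = omega_n**2 - omega**2 + 2j * zeta * omega_n * omega
    return np.sum(phi_out * phi_in / denom)

freqs = np.linspace(0.1, 600, 2000)
H = np.array([frf(w) for w in freqs])      # no full-matrix factorization needed
print("peak |H|:", np.abs(H).max())
```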
Extracting the Textual and Temporal Structure of Supercomputing Logs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, S; Singh, I; Chandra, A
2009-05-26
Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.
2018-05-01
In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. The traditional time interval measurement technique has the disadvantages of low measurement accuracy, complicated circuit structure, and large error; high-precision time interval data cannot be obtained with these traditional methods. In order to obtain higher quality remote sensing cloud images based on time interval measurement, a higher accuracy time interval measurement method is proposed. The method is based on charging a capacitor while simultaneously sampling the change in capacitor voltage. First, an approximate model of the capacitor voltage curve during the pulse time of flight is fitted from the sampled data. Then, the whole charging time is obtained from the fitted function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
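A minimal sketch of the fitting step, assuming an RC charging model V(t) = V0(1 - exp(-(t - t0)/tau)) and SciPy's curve_fit; the model form, component values, and sampling rate are illustrative, since the abstract does not give the exact charging law.

```python
import numpy as np
from scipy.optimize import curve_fit

def v_charge(t, v0, tau, t0):
    """Capacitor charging from time t0: V = V0 * (1 - exp(-(t - t0)/tau))."""
    return v0 * (1.0 - np.exp(-np.clip(t - t0, 0.0, None) / tau))

rng = np.random.default_rng(9)
true_t0, tau, v0 = 12.3e-9, 50e-9, 3.3        # charging starts at 12.3 ns
t = np.arange(0, 300e-9, 1e-9)                # 1 GS/s A/D samples
v = v_charge(t, v0, tau, true_t0) + rng.normal(0, 2e-3, t.size)

(p_v0, p_tau, p_t0), _ = curve_fit(v_charge, t, v, p0=[3.0, 40e-9, 10e-9])
print(f"recovered start time: {p_t0*1e9:.3f} ns (true {true_t0*1e9:.3f} ns)")
```

Fitting the whole sampled curve is what lets the recovered interval resolve far below the raw sampling period.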
Assessing clinical reasoning (ASCLIRE): Instrument development and validation.
Kunina-Habenicht, Olga; Hautz, Wolf E; Knigge, Michel; Spies, Claudia; Ahlers, Olaf
2015-12-01
Clinical reasoning is an essential competency in medical education. This study aimed at developing and validating a test to assess diagnostic accuracy, collected information, and diagnostic decision time in clinical reasoning. A norm-referenced computer-based test for the assessment of clinical reasoning (ASCLIRE) was developed, integrating the entire clinical decision process. In a cross-sectional study, participants were asked to choose as many diagnostic measures as they deemed necessary to diagnose the underlying disease of six different cases with acute or sub-acute dyspnea and provide a diagnosis. 283 students and 20 content experts participated. In addition to diagnostic accuracy, the respective decision time and the number of relevant diagnostic measures used were documented as distinct performance indicators. The empirical structure of the test was investigated using a structural equation modeling approach. Experts showed higher accuracy rates and lower decision times than students. In a cross-sectional comparison, the diagnostic accuracy of students improved with the year of study. Wrong diagnoses provided by our sample were comparable to wrong diagnoses in practice. We found an excellent fit for a model with three latent factors (diagnostic accuracy, decision time, and choice of relevant diagnostic information), with diagnostic accuracy showing no significant correlation with decision time. ASCLIRE considers decision time an important performance indicator alongside diagnostic accuracy and provides evidence that clinical reasoning is a complex ability comprising diagnostic accuracy, decision time, and choice of relevant diagnostic information as three partly correlated but still distinct aspects.
Chen, Xiang; He, Si-Min; Bu, Dongbo; Zhang, Fa; Wang, Zhiyong; Chen, Runsheng; Gao, Wen
2008-09-15
RNA secondary structures with pseudoknots are often predicted by minimizing free energy, which is proved to be NP-hard. For kinetic reasons, the real RNA secondary structure often has a local rather than a global minimum free energy. This implies that we may improve the performance of RNA secondary structure prediction by taking kinetics into account and minimizing free energy in a local area. We propose a novel algorithm named FlexStem to predict RNA secondary structures with pseudoknots. While still based on the MFE criterion, FlexStem adopts comprehensive energy models that allow complex pseudoknots. Unlike classical thermodynamic methods, our approach aims to simulate the RNA folding process by successive addition of maximal stems, reducing the search space while maintaining or even improving the prediction accuracy. This reduced space is constructed by our maximal stem strategy and a stem-adding rule induced from elaborate statistical experiments on real RNA secondary structures. The strategy and the rule also reflect the folding characteristics of RNA from a new angle and help compensate for the deficiency of relying merely on MFE in RNA structure prediction. We validate FlexStem by applying it to tRNAs, 5S rRNAs and a large number of pseudoknotted structures and compare it with well-known algorithms such as RNAfold, PKNOTS, PknotsRG, HotKnots and ILM according to their overall sensitivities and specificities, as well as positive and negative controls on pseudoknots. The results show that FlexStem significantly increases the prediction accuracy through its local search strategy. Software is available at http://pfind.ict.ac.cn/FlexStem/. Supplementary data are available at Bioinformatics online.
Equivalent plate modeling for conceptual design of aircraft wing structures
NASA Technical Reports Server (NTRS)
Giles, Gary L.
1995-01-01
This paper describes an analysis method that generates conceptual-level design data for aircraft wing structures. A key requirement is that this data must be produced in a timely manner so that it can be used effectively by multidisciplinary synthesis codes for performing systems studies. Such a capability is being developed by enhancing an equivalent plate structural analysis computer code to provide a more comprehensive, robust and user-friendly analysis tool. The paper focuses on recent enhancements to the Equivalent Laminated Plate Solution (ELAPS) analysis code that significantly expand the modeling capability and improve the accuracy of results. Modeling additions include use of out-of-plane plate segments for representing winglets and advanced wing concepts such as C-wings, along with a new capability for modeling the internal rib and spar structure. The accuracy of calculated results is improved by including transverse shear effects in the formulation and by using multiple sets of assumed displacement functions in the analysis. Typical results are presented to demonstrate these new features. Example configurations include a C-wing transport aircraft, a representative fighter wing and a blended-wing-body transport. These applications are intended to demonstrate and quantify the benefits of using equivalent plate modeling of wing structures during conceptual design.
Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network
NASA Astrophysics Data System (ADS)
Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao
2018-03-01
Aiming at the disadvantages of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm, and, combined with the geometry correction principle for UAV remote sensing images, the algorithm and solution steps of AGA-RBF are presented in order to realize geometry correction for UAV remote sensing. The correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network with AGA and the LMS algorithm, respectively. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, fast execution and strong generalization ability.
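A minimal sketch of the RBF-network mapping at the heart of such a correction, fitting a transform from distorted to reference ground-control coordinates; the Gaussian widths, center selection, and closed-form least-squares output weights stand in for the AGA/LMS optimization the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(10)
# Ground-control points: distorted image coords -> reference map coords
src = rng.uniform(0, 1, size=(100, 2))
dst = src + 0.05 * np.sin(3 * src[:, ::-1]) + rng.normal(0, 0.002, (100, 2))

centers = src[rng.choice(100, 20, replace=False)]   # RBF centers (illustrative)
width = 0.3

def design(points):
    """Gaussian RBF design matrix plus a bias column."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.column_stack([np.exp(-d2 / (2 * width**2)), np.ones(len(points))])

W, *_ = np.linalg.lstsq(design(src), dst, rcond=None)  # output-layer weights

test = rng.uniform(0, 1, size=(5, 2))
corrected = design(test) @ W
print("corrected coords:\n", corrected)
```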
Segmentation of brain structures in presence of a space-occupying lesion.
Pollo, Claudio; Cuadra, Meritxell Bach; Cuisenaire, Olivier; Villemure, Jean-Guy; Thiran, Jean-Philippe
2005-02-15
Brain deformations induced by space-occupying lesions may result in unpredictable position and shape of functionally important brain structures. The aim of this study is to propose a method for segmentation of brain structures by deformation of a segmented brain atlas in the presence of a space-occupying lesion. Our approach is based on an a priori model of lesion growth (MLG) that assumes radial expansion from a seeding point and involves three steps: first, an affine registration bringing the atlas and the patient into global correspondence; then, the seeding of a synthetic tumor into the brain atlas, providing a template for the lesion; finally, the deformation of the seeded atlas, combining a method derived from optical flow principles and a model of lesion growth. The method was applied to two meningiomas inducing a pure displacement of the underlying brain structures, and segmentation accuracy of ventricles and basal ganglia was assessed. Results show that the segmented structures were consistent with the patient's anatomy and that the deformation accuracy of surrounding brain structures was highly dependent on the accurate placement of the tumor seeding point. Further improvements of the method will optimize the segmentation accuracy. Visualization of brain structures provides useful information for therapeutic consideration of space-occupying lesions, including surgical, radiosurgical, and radiotherapeutic planning, in order to increase treatment efficiency and prevent neurological damage.
Comparison of modal identification techniques using a hybrid-data approach
NASA Technical Reports Server (NTRS)
Pappa, Richard S.
1986-01-01
Modal identification of seemingly simple structures, such as a generic truss, is often surprisingly difficult in practice due to high modal density, nonlinearities, and other nonideal factors. Under these circumstances, different data analysis techniques can generate substantially different results. The initial application of a new hybrid-data method for studying the performance characteristics of various identification techniques with such data is summarized. This approach offers new pieces of information for the system identification researcher. First, it allows actual experimental data to be used in the studies, while maintaining the traditional advantage of using simulated data. That is, the identification technique under study is forced to cope with the complexities of real data, yet its performance can be measured unquestionably for the artificial modes because their true parameters are known. Second, the accuracy achieved for the true structural modes in the data can be estimated from the accuracy achieved for the artificial modes if the results show similar characteristics. This similarity occurred in the study, for example, for a weak structural mode near 56 Hz. It may even be possible, eventually, to use the error information from the artificial modes to improve the identification accuracy for the structural modes.
Huang, Huabing; Gong, Peng; Cheng, Xiao; Clinton, Nick; Li, Zengyuan
2009-01-01
Forest structural parameters, such as tree height and crown width, are indispensable for evaluating forest biomass or forest volume. LiDAR is a revolutionary technology for measurement of forest structural parameters; however, the accuracy of crown width extraction is not satisfactory when using a low density LiDAR, especially in high canopy cover forest. We used high resolution aerial imagery with a low density LiDAR system to overcome this shortcoming. Morphological filtering was used to generate a DEM (Digital Elevation Model) and a CHM (Canopy Height Model) from the LiDAR data. The LiDAR camera image is matched to the aerial image with an automated keypoint search algorithm. As a result, a high registration accuracy of 0.5 pixels was obtained. A local maximum filter, watershed segmentation, and object-oriented image segmentation are used to obtain tree height and crown width. Results indicate that the camera data collected by the integrated LiDAR system play an important role in registration with aerial imagery. The synthesis with aerial imagery increases the accuracy of forest structural parameter extraction compared to using the low density LiDAR data alone. PMID:22573971
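A minimal sketch of treetop detection on a canopy height model with a local maximum filter, assuming SciPy; the window size, height cutoff, and synthetic CHM are illustrative, and the full pipeline in the paper adds watershed and object-oriented segmentation for crown width.

```python
import numpy as np
from scipy.ndimage import maximum_filter, gaussian_filter

rng = np.random.default_rng(11)
# Synthetic canopy height model (CHM): smooth blobs stand in for tree crowns
chm = gaussian_filter(rng.uniform(0, 1, (200, 200)), sigma=6) * 30

local_max = maximum_filter(chm, size=11) == chm     # 11x11 moving window
treetops = local_max & (chm > 2.0)                  # ignore ground/shrubs < 2 m

rows, cols = np.nonzero(treetops)
heights = chm[rows, cols]
print(f"detected {len(rows)} treetops; tallest = {heights.max():.1f} m")
```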
Improved accuracy for finite element structural analysis via a new integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which place simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Application of preconditioned alternating direction method of multipliers in depth from focal stack
NASA Astrophysics Data System (ADS)
Javidnia, Hossein; Corcoran, Peter
2018-03-01
Postcapture refocusing effect in smartphone cameras is achievable using focal stacks. However, the accuracy of this effect is totally dependent on the combination of the depth layers in the stack. The accuracy of the extended depth of field effect in this application can be improved significantly by computing an accurate depth map, which has been an open issue for decades. To tackle this issue, a framework is proposed based on a preconditioned alternating direction method of multipliers for depth from the focal stack and synthetic defocus application. In addition to its ability to provide high structural accuracy, the optimization function of the proposed framework can, in fact, converge faster and better than state-of-the-art methods. The qualitative evaluation has been done on 21 sets of focal stacks and the optimization function has been compared against five other methods. Later, 10 light field image sets have been transformed into focal stacks for quantitative evaluation purposes. Preliminary results indicate that the proposed framework has a better performance in terms of structural accuracy and optimization in comparison to the current state-of-the-art methods.
Sun, Mingzhai; Huang, Jiaqing; Bunyak, Filiz; Gumpper, Kristyn; De, Gejing; Sermersheim, Matthew; Liu, George; Lin, Pei-Hui; Palaniappan, Kannappan; Ma, Jianjie
2014-01-01
One key factor that limits resolution of single-molecule superresolution microscopy relates to the localization accuracy of the activated emitters, which is usually degraded by two factors. One originates from the background noise due to out-of-focus signals, sample auto-fluorescence, and camera acquisition noise; the other is due to the low photon count of emitters in a single frame. With fast acquisition rates, the activated emitters can last multiple frames before they transiently switch off or permanently bleach. Effectively incorporating the temporal information of these emitters is critical to improve the spatial resolution. However, the majority of existing reconstruction algorithms locate the emitters frame by frame, discarding or underusing the temporal information. Here we present a new image reconstruction algorithm based on tracklets, short trajectories of the same objects. We improve the localization accuracy by associating the same emitters from multiple frames to form tracklets and by aggregating signals to enhance the signal-to-noise ratio. We also introduce a weighted mean-shift algorithm (WMS) to automatically detect the number of modes (emitters) in overlapping regions of tracklets so that not only well-separated single emitters but also individual emitters within multi-emitter groups can be identified and tracked. In combination with a maximum likelihood estimator method (MLE), we are able to resolve low to medium densities of overlapping emitters with improved localization accuracy. We evaluate the performance of our method with both synthetic and experimental data, and show that the tracklet-based reconstruction is superior in localization accuracy, particularly for weak signals embedded in a strong background. Using this method, for the first time, we resolve the transverse tubule structure of the mammalian skeletal muscle. PMID:24921337
NASA Astrophysics Data System (ADS)
Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.
2015-08-01
Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and the benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead time is considered within the day-ahead (Elspot) market of the Nordic exchange market. A complementary modelling framework presents an approach for improving real-time forecasting without needing to modify the pre-existing forecasting model, but instead formulating an independent additive or complementary model that captures the structure the existing operational model may be missing. We present here the application of this principle for issuing improved hourly inflow forecasts into hydropower reservoirs over extended lead times, with the parameter estimation procedure reformulated to deal with bias, persistence and heteroscedasticity. The procedure presented comprises an error model added on top of an unalterable constant-parameter conceptual model. This procedure is applied in the 207 km2 Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Besides improving the forecast skill of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that contain suitable information for reducing uncertainty in the decision-making processes in hydropower systems operation. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead times up to 17 h. Evaluation of the percentage of observations bracketed in the forecasted 95% confidence interval indicated that the degree of success in containing 95% of the observations varies across seasons and hydrologic years.
Cryo-EM image alignment based on nonuniform fast Fourier transform.
Yang, Zhengfan; Penczek, Pawel A
2008-08-01
In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depend on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image-to-volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses that of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis.
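For context, the exact (but slow) transform that a gridding-based NUFFT approximates can be written as a direct sum; alignment schemes use such nonuniform evaluation to resample an image's spectrum along rotated or polar coordinates. This direct evaluation is a conceptual sketch only, not the authors' gridding implementation.

```python
import numpy as np

def direct_nudft2(image, kx, ky):
    """Evaluate the 2-D Fourier transform of `image` at arbitrary fractional
    frequencies (kx, ky) by direct summation: the accuracy reference that
    gridding-based nonuniform FFTs approach at far lower cost."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    out = np.empty(len(kx), dtype=complex)
    for i, (u, v) in enumerate(zip(kx, ky)):
        out[i] = np.sum(image * np.exp(-2j * np.pi * (u * x / nx + v * y / ny)))
    return out
```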
Dunn, Nicholas J H; Noid, W G
2016-05-28
This work investigates the promise of a "bottom-up" extended ensemble framework for developing coarse-grained (CG) models that provide predictive accuracy and transferability for describing both structural and thermodynamic properties. We employ a force-matching variational principle to determine system-independent, i.e., transferable, interaction potentials that optimally model the interactions in five distinct heptane-toluene mixtures. Similarly, we employ a self-consistent pressure-matching approach to determine a system-specific pressure correction for each mixture. The resulting CG potentials accurately reproduce the site-site radial distribution functions, the volume fluctuations, and the pressure equations of state that are determined by all-atom (AA) models for the five mixtures. Furthermore, we demonstrate that these CG potentials provide similar accuracy for additional heptane-toluene mixtures that were not included in their parameterization. Surprisingly, the extended ensemble approach improves not only the transferability but also the accuracy of the calculated potentials. Additionally, we observe that the required pressure corrections strongly correlate with the intermolecular cohesion of the system-specific CG potentials. Moreover, this cohesion correlates with the relative "structure" within the corresponding mapped AA ensemble. Finally, the appendix demonstrates that the self-consistent pressure-matching approach corresponds to minimizing an appropriate relative entropy.
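In basis-expanded form, force matching reduces to a linear least-squares problem. The toy below fits a one-dimensional pair force on a hat-function basis to sampled reference forces; the real MS-CG machinery projects full 3N-dimensional atomistic force vectors through the coarse-graining map, so treat this as a schematic assumption rather than the authors' method.

```python
import numpy as np

def spline_basis(r, knots):
    """Hat-function (linear spline) basis evaluated at pair distances r;
    knots are assumed uniformly spaced."""
    h = knots[1] - knots[0]
    return np.clip(1 - np.abs(r[:, None] - knots[None, :]) / h, 0, None)

def force_match(r_samples, f_reference, knots):
    """Coefficients c minimizing ||B c - f_AA||^2, the discrete analogue of
    the force-matching variational principle for a pairwise CG force."""
    B = spline_basis(r_samples, knots)
    c, *_ = np.linalg.lstsq(B, f_reference, rcond=None)
    return c
```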
Burnos, Piotr; Rys, Dawid
2017-09-07
Weigh-in-Motion systems are tools to protect road pavements from the adverse effects of vehicle overloading. However, the effectiveness of these systems could be significantly increased by improving weighing accuracy, which is currently insufficient for direct enforcement against overloaded vehicles. Field tests show that the accuracy of Weigh-in-Motion axle load sensors installed in flexible (asphalt) pavements depends on pavement temperature and vehicle speed. Although this is a known phenomenon, it has not been explained yet; the aim of our study is to fill this gap in the knowledge. The explanation presented in the paper is based on pavement/sensor mechanics and the application of multilayer elastic half-space theory. We show that differences in the distribution of vertical and horizontal stresses in the pavement structure are the cause of vehicle weight measurement errors. These studies are important for Weigh-in-Motion systems intended for direct enforcement and will help to improve weighing accuracy.
High-accuracy 3D measurement system based on multi-view and structured light
NASA Astrophysics Data System (ADS)
Li, Mingyue; Weng, Dongdong; Li, Yufeng; Zhang, Longbin; Zhou, Haiyun
2013-12-01
3D surface reconstruction is one of the most important topics in Spatial Augmented Reality (SAR), and structured light offers a simple and rapid way to reconstruct objects. In order to improve the precision of 3D reconstruction, we present a high-accuracy multi-view 3D measurement system based on Gray-code and phase-shift patterns, using a camera and a light projector that casts structured light patterns onto the objects. In this system, a single camera takes photographs from the left and right sides of the object. In addition, we use VisualSFM to recover the relationships between perspectives, so explicit camera calibration can be omitted and camera placement is no longer restricted. We also set appropriate exposure times to make the scenes covered by Gray-code patterns more recognizable. Together, these measures make the reconstruction more precise. We ran experiments on different kinds of objects, and a large number of experimental results verify the feasibility and high accuracy of the system.
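The Gray-code half of a Gray-code-plus-phase-shift system assigns every projector column a bit pattern across the image sequence; decoding the thresholded captures is a short recurrence (the binary MSB equals the Gray MSB, then XOR down the bit-planes). A sketch, with thresholding assumed done upstream:

```python
import numpy as np

def gray_to_binary(gray_bits):
    """Decode thresholded Gray-code captures into projector column indices.

    gray_bits : (n_patterns, H, W) boolean array, MSB first, one bit-plane
                per projected pattern.
    """
    binary = np.zeros_like(gray_bits)
    binary[0] = gray_bits[0]
    for i in range(1, len(gray_bits)):
        binary[i] = np.logical_xor(binary[i - 1], gray_bits[i])
    weights = 2 ** np.arange(len(gray_bits))[::-1]          # MSB-first weights
    return np.tensordot(weights, binary.astype(np.int64), axes=1)  # (H, W)
```

The phase-shift patterns then refine the coarse decoded code to sub-pixel correspondence within each stripe.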
Navier-Stokes simulations of slender axisymmetric shapes in supersonic, turbulent flow
NASA Astrophysics Data System (ADS)
Moran, Kenneth J.; Beran, Philip S.
1994-07-01
Computational fluid dynamics is used to study flows about slender, axisymmetric bodies at very high speeds. Numerical experiments are conducted to simulate a broad range of flight conditions: Mach number is varied from 1.5 to 8 and Reynolds number from 1 × 10^6/m to 10^8/m. The primary objective is to develop and validate a computational methodology for the accurate simulation of a wide variety of flow structures. Accurate results are obtained for detached bow shocks, recompression shocks, corner-point expansions, base-flow recirculations, and turbulent boundary layers. Accuracy is assessed through comparison with theory and experimental data; computed surface pressure, shock structure, base-flow structure, and velocity profiles are within measurement accuracy throughout the range of conditions tested. The methodology is general in its applicability and practical in its performance. To achieve high accuracy, modifications to previously reported techniques are implemented in the scheme. These modifications improve computed results in the vicinity of symmetry lines and in the base-flow region, including the turbulent wake.
Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio
2008-01-01
Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.
A structural SVM approach for reference parsing.
Zhang, Xiaoli; Zou, Jie; Le, Daniel X; Thoma, George R
2011-06-09
Automated extraction of bibliographic data, such as article titles, author names, abstracts, and references, is essential to the affordable creation of large citation databases. References, typically appearing at the end of journal articles, can also provide valuable information for extracting other bibliographic data. Therefore, parsing individual references to extract author, title, journal, year, etc. is sometimes a necessary preprocessing step in building citation-indexing systems. The regular structure of references enables us to treat reference parsing as a sequence learning problem and to study the structural Support Vector Machine (structural SVM), a recently developed structured learning algorithm, for parsing references. In this study, we implemented structural SVM and used two types of contextual features to compare structural SVM with conventional SVM. Both methods achieve above 98% token classification accuracy and above 95% overall chunk-level accuracy for reference parsing. We also compared SVM and structural SVM to Conditional Random Fields (CRF). The experimental results show that structural SVM and CRF achieve similar accuracies at the token and chunk levels. When only basic observation features are used for each token, structural SVM achieves higher performance than SVM since it utilizes the contextual label features. However, when the contextual observation features from neighboring tokens are combined, SVM performance improves greatly and approaches that of structural SVM once second-order contextual observation features are added. The comparison of these two methods with CRF using the same set of binary features shows that both structural SVM and CRF perform better than SVM, indicating their stronger sequence learning ability in reference parsing.
ERIC Educational Resources Information Center
Green, Debbie; Rosenfeld, Barry; Belfi, Brian
2013-01-01
The current study evaluated the accuracy of the Structured Interview of Reported Symptoms, Second Edition (SIRS-2) in a criterion-group study using a sample of forensic psychiatric patients and a community simulation sample, comparing it to the original SIRS and to results published in the SIRS-2 manual. The SIRS-2 yielded an impressive…
ERIC Educational Resources Information Center
Enders, Craig K.; Peugh, James L.
2004-01-01
Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…
Ahmad, Meraj; Sinha, Anubhav; Ghosh, Sreya; Kumar, Vikrant; Davila, Sonia; Yajnik, Chittaranjan S; Chandak, Giriraj R
2017-07-27
Imputation is a computational method based on the principle of haplotype sharing that allows enrichment of genome-wide association study datasets. It depends on the haplotype structure of the population and the density of the genotype data. The 1000 Genomes Project led to the generation of imputation reference panels that have been used globally. However, recent studies have shown that population-specific panels provide better enrichment of genome-wide variants. We compared imputation accuracy using the 1000 Genomes phase 3 reference panel and a panel generated from genome-wide data on 407 individuals from Western India (WIP). The concordance of imputed variants was cross-checked with next-generation re-sequencing data on a subset of genomic regions. Further, using genome-wide data from 1880 individuals, we demonstrate that WIP works better than the 1000 Genomes phase 3 panel and, when merged with it, significantly improves imputation accuracy throughout the minor allele frequency range. We also show that imputation using only the South Asian component of the 1000 Genomes phase 3 panel works as well as the merged panel, making it a computationally less intensive job. Thus, our study stresses that imputation accuracy using the 1000 Genomes phase 3 panel can be further improved by including population-specific reference panels from South Asia.
NASA Astrophysics Data System (ADS)
Eaton, M.; Pearson, M.; Lee, W.; Pullin, R.
2015-07-01
The ability to accurately locate damage in any given structure is a highly desirable attribute for an effective structural health monitoring system and could help to reduce operating costs and improve safety. This becomes a far greater challenge in complex geometries and materials, such as modern composite airframes, and the poor translation of promising laboratory-based SHM demonstrators to industrial environments forms a barrier to commercial uptake of the technology. The acoustic emission (AE) technique is a passive NDT method that detects the elastic stress waves released by the growth of damage. It offers very sensitive damage detection, using a sparse array of sensors to detect and globally locate damage within a structure. However, its application to complex structures commonly yields poor accuracy due to anisotropic wave propagation and the interruption of wave propagation by structural features such as holes and thickness changes. This work adopts an empirical mapping technique for AE location, known as Delta T Mapping, which uses experimental training data to account for such structural complexities. The technique is applied to a complex-geometry composite aerospace structure undergoing certification testing. The component consists of a carbon fibre composite tube with varying wall thickness and multiple holes, loaded in bending. The damage location was validated using X-ray CT scanning, and the Delta T Mapping technique was shown to improve location accuracy when compared with commercial algorithms. The onset and progression of damage were monitored throughout the test and used to inform future design iterations.
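Conceptually, Delta T Mapping stores the measured arrival-time differences between sensor pairs at every training-grid point and locates a new event where the stored differences best match the measured ones. The sketch below uses a plain nearest-grid-point search; the published technique interpolates the training maps and intersects minimum-difference contours, so the details here are simplifying assumptions.

```python
import numpy as np

def delta_t_locate(training_maps, measured_dt):
    """Locate an acoustic emission source on a training grid.

    training_maps : (n_pairs, H, W) arrival-time differences for each sensor
                    pair, measured at every grid point during training.
    measured_dt   : (n_pairs,) time differences for the event to locate.
    """
    mismatch = np.sum((training_maps - measured_dt[:, None, None]) ** 2, axis=0)
    return np.unravel_index(np.argmin(mismatch), mismatch.shape)
```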
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopal, A; Xu, H; Chen, S
Purpose: To compare the contour propagation accuracy of two deformable image registration (DIR) algorithms in the RayStation treatment planning system: the “Hybrid” algorithm, based on image intensities and anatomical information, and the “Biomechanical” algorithm, based on linear anatomical elasticity and finite element modeling. Methods: Both DIR algorithms were used for CT-to-CT deformation for 20 lung radiation therapy patients who underwent treatment plan revisions. Deformation accuracy was evaluated using landmark tracking to measure the target registration error (TRE) and inverse consistency error (ICE). The deformed contours were also evaluated against physician-drawn contours using Dice similarity coefficients (DSC). Contour propagation was qualitatively assessed using a visual quality score (VQS) assigned by physicians and a refinement quality score (RQS). Results: Both algorithms produced comparable contour agreement (DSC > 0.9 for lungs, > 0.85 for heart, > 0.8 for liver) and similar qualitative assessments (VQS < 0.35, RQS > 0.75 for lungs). When anatomical structures were used to control the deformation, the DSC improved more significantly for the biomechanical DIR than for the hybrid DIR, while the VQS and RQS improved only for the controlling structures. However, while the inclusion of controlling structures improved the TRE for the hybrid DIR, it increased the TRE for the biomechanical DIR. Conclusion: The hybrid DIR was found to perform slightly better than the biomechanical DIR based on lower TRE, while the DSC, VQS, and RQS studies yielded comparable results for both. The use of controlling structures showed considerable improvement in the hybrid DIR results and is recommended for clinical use in contour propagation.
NASA Astrophysics Data System (ADS)
Parks, Helen Frances
This dissertation presents two projects related to the structured integration of large-scale mechanical systems. Structured integration uses the considerable differential-geometric structure inherent in mechanical motion to inform the design of numerical integration schemes. This process improves the qualitative properties of simulations and becomes especially valuable as a measure of accuracy over long-time simulations, in which traditional Gronwall accuracy estimates lose their meaning. Often, structured integration schemes replicate continuous symmetries and their associated conservation laws at the discrete level. Such is the case for variational integrators, which discretely replicate the process of deriving equations of motion from variational principles. This results in the conservation of momenta associated with symmetries in the discrete system and conservation of a symplectic form when applicable. In the case of Lagrange-Dirac systems, variational integrators preserve a discrete analogue of the Dirac structure preserved in the continuous flow. In the first project of this thesis, we extend Dirac variational integrators to accommodate interconnected systems. We hope this work will find use in the field of control, where a controlled system can be thought of as a "plant" system joined to its controller, and in the treatment of very large systems, where modular modeling may prove easier than monolithically modeling the entire system. The second project of the thesis considers a different approach to large systems. Given a detailed model of the full system, can we reduce it to a more computationally efficient model without losing essential geometric structures? Asked without reference to structure, this is the essential question of the field of model reduction. The answer there has been a resounding yes, with Proper Orthogonal Decomposition (POD) with snapshots emerging as one of the most successful methods. Our project builds on previous work to extend POD to structured settings. In particular, we consider systems evolving on Lie groups and make use of canonical coordinates in the reduction process. We see considerable improvement in the accuracy of the reduced model over the usual structure-agnostic POD approach.
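The structure-agnostic POD baseline the dissertation compares against is a thin SVD of a snapshot matrix; the structured variants then carry out this reduction in canonical coordinates on the Lie group. A minimal sketch of the baseline step:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper orthogonal decomposition of a snapshot matrix.

    snapshots : (n_dof, n_snapshots) state vectors sampled along a trajectory.
    Returns the r leading modes and the energy fraction each captures.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    return U[:, :r], energy[:r]

# The reduced dynamics then evolve q_r with x ~ U_r @ q_r; structured variants
# instead build U_r from snapshots expressed in canonical (Lie-algebra) coordinates.
```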
Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.
2015-01-01
Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208
Incorporating conditional random fields and active learning to improve sentiment identification.
Zhang, Kunpeng; Xie, Yusheng; Yang, Yi; Sun, Aaron; Liu, Hengchang; Choudhary, Alok
2014-10-01
Many machine learning, statistical, and computational linguistic methods have been developed to identify the sentiment of sentences in documents, yielding promising results. However, most state-of-the-art methods focus on individual sentences and ignore the impact of context on the meaning of a sentence. In this paper, we propose a method based on conditional random fields to incorporate sentence structure and context information, in addition to syntactic information, for improving sentiment identification. We also investigate how human interaction affects the accuracy of sentiment labeling using limited training data, and propose and evaluate two different active learning strategies for labeling sentiment data. Our experiments with the proposed approach demonstrate a 5%-15% improvement in accuracy on Amazon customer reviews compared to existing supervised learning and rule-based methods.
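A hedged sketch of the core idea, sentence-level sentiment as sequence labeling with contextual features, using the sklearn-crfsuite package (not necessarily the authors' toolchain). The lexicons, feature names, and toy data are invented placeholders:

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

# Illustrative cue lexicons; a real system would use far richer features.
POSITIVE_LEXICON = {"good", "great", "excellent", "love"}
NEGATIVE_LEXICON = {"bad", "poor", "terrible", "hate"}

def sentence_features(doc, i):
    """Features for sentence i plus context from its neighbours,
    which is precisely what a per-sentence classifier ignores."""
    sent = doc[i]
    feats = {
        "pos_words": sum(w in POSITIVE_LEXICON for w in sent),
        "neg_words": sum(w in NEGATIVE_LEXICON for w in sent),
        "negation": any(w in ("not", "never", "no") for w in sent),
    }
    if i > 0:
        feats["prev_neg_words"] = sum(w in NEGATIVE_LEXICON for w in doc[i - 1])
    if i + 1 < len(doc):
        feats["next_pos_words"] = sum(w in POSITIVE_LEXICON for w in doc[i + 1])
    return feats

# Toy corpus: one review with two sentences and their sentiment tags.
docs = [[["i", "love", "this", "phone"], ["but", "the", "battery", "is", "bad"]]]
labels = [["pos", "neg"]]

X = [[sentence_features(d, i) for i in range(len(d))] for d in docs]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))
```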
Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset
Lipps, David; Devineni, Sree
2016-01-01
MiRNAs are short non-coding RNAs of about 22 nucleotides that play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict whether RNA transcripts contain miRNAs. Although very successful, these predictors have started to face multiple challenges in recent years. Many were optimized using datasets of hundreds of miRNA samples, far smaller than the number of known miRNAs; consequently, their prediction accuracy on large datasets is unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity, strategies that can bring serious limitations in applications. Moreover, to meet continuously rising expectations of these computational tools, improving prediction accuracy has become extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on a newly designed large dataset improved by 7%, to 93%. The meta-predictor also proved to be less dependent on the dataset and to have a more refined balance between sensitivity and specificity. This study is important in two ways: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors; second, it provides the community with a new miRNA predictor with significantly improved accuracy for identifying novel miRNAs and the complete set of miRNAs. Source code is available at: https://github.com/xueLab/mirMeta PMID:28002428
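The meta-prediction step can be sketched as: nonlinearly transform the base predictors' scores, then train a small neural network on the transformed features. The specific transformations and the synthetic data below are assumptions for illustration, not mirMeta's published choices.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def meta_features(scores):
    """Nonlinear transformations of the base predictors' outputs:
    raw scores, log-odds, and squares (illustrative choices)."""
    p = np.clip(scores, 1e-6, 1 - 1e-6)
    return np.hstack([p, np.log(p / (1 - p)), p**2])

# Synthetic stand-in for five predictors scoring 500 candidate hairpins.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
scores = np.clip(0.3 * y[:, None] + 0.7 * rng.random((500, 5)), 0, 1)

meta = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
meta.fit(meta_features(scores), y)
mirna_probability = meta.predict_proba(meta_features(scores))[:, 1]
```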
PubChem3D: Conformer generation
2011-01-01
Background: PubChem, an open archive for the biological activities of small molecules, provides search and analysis tools to assist users in locating desired information. Many of these tools focus on the notion of chemical structure similarity at some level. PubChem3D enables similarity of chemical structure 3-D conformers to augment the existing similarity of 2-D chemical structure graphs. It is also desirable to relate theoretical 3-D descriptions of chemical structures to experimental biological activity. As such, it is important to be assured that the theoretical conformer models can reproduce experimentally determined bioactive conformations. In the present study, we investigated the effects of three primary conformer generation parameters (the fragment sampling rate, the energy window size, and the force field variant) upon the accuracy of theoretical conformer models, and determined optimal settings for PubChem3D conformer model generation and conformer sampling. Results: Using the software package OMEGA from OpenEye Scientific Software, Inc., theoretical 3-D conformer models were generated for 25,972 small-molecule ligands whose 3-D structures were experimentally determined. Different values of the primary conformer generation parameters were systematically tested to find optimal settings. Employing a greater fragment sampling rate than the default did not improve the accuracy of the theoretical conformer model ensembles. An ever-increasing energy window did increase the overall average accuracy, with rapid convergence observed at 10 kcal/mol and 15 kcal/mol for model building and torsion search, respectively; however, subsequent study showed that an energy threshold of 25 kcal/mol for torsion search resulted in slightly improved results for larger and more flexible structures. Exclusion of coulomb terms from the 94s variant of the Merck molecular force field (MMFF94s) in the torsion search stage gave more accurate conformer models at lower energy windows. Overall average accuracy of reproduction of bioactive conformations was remarkably linear with respect to both non-hydrogen atom count ("size") and effective rotor count ("flexibility"). Using these as independent variables, a regression equation was developed to predict the RMSD accuracy of a theoretical ensemble to reproduce bioactive conformations. The equation was modified to give a minimum RMSD conformer sampling value to help ensure that 90% of the sampled theoretical models should contain at least one conformer within the RMSD sampling value of a "bioactive" conformation. Conclusion: Optimal parameters for conformer generation using OMEGA were explored and determined. An equation was developed that provides an RMSD sampling value based on the relative accuracy of reproducing bioactive conformations. The optimal conformer generation parameters and RMSD sampling values determined are used by the PubChem3D project to generate theoretical conformer models. PMID:21272340
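The reported regression, predicting attainable RMSD from atom count and rotor count, amounts to an ordinary least-squares fit; the synthetic numbers below merely stand in for the real conformer statistics, and the coefficients are not PubChem3D's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
size = rng.integers(10, 50, 300)         # non-hydrogen atom count
flexibility = rng.integers(0, 15, 300)   # effective rotor count
# Stand-in for the measured best RMSD to the bioactive conformation:
rmsd = 0.02 * size + 0.08 * flexibility + rng.normal(0, 0.1, 300)

model = LinearRegression().fit(np.column_stack([size, flexibility]), rmsd)
sampling_rmsd = model.predict([[30, 8]])  # e.g. a 30-atom, 8-rotor molecule
```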
Combining Physicochemical and Evolutionary Information for Protein Contact Prediction
Schneider, Michael; Brock, Oliver
2014-01-01
We introduce a novel contact prediction method that achieves high prediction accuracy by combining evolutionary and physicochemical information about native contacts. We obtain evolutionary information from multiple-sequence alignments and physicochemical information from predicted ab initio protein structures. These structures represent low-energy states in an energy landscape and thus capture the physicochemical information encoded in the energy function. Such low-energy structures are likely to contain native contacts, even if their overall fold is not native. To differentiate native from non-native contacts in those structures, we develop a graph-based representation of the structural context of contacts. We then use this representation to train a support vector machine classifier to identify the most likely native contacts in otherwise non-native structures. The resulting contact predictions are highly accurate. By combining two sources of information, evolutionary and physicochemical, we maintain prediction accuracy even when only a few sequence homologs are present. We show that the predicted contacts help to improve ab initio structure prediction. A web service is available at http://compbio.robotics.tu-berlin.de/epc-map/. PMID:25338092
NASA Astrophysics Data System (ADS)
Chesley, J. T.; Leier, A. L.; White, S.; Torres, R.
2017-06-01
Recently developed data collection techniques allow for improved characterization of sedimentary outcrops. Here, we outline a workflow that utilizes unmanned aerial vehicles (UAV) and structure-from-motion (SfM) photogrammetry to produce sub-meter-scale outcrop reconstructions in 3-D. SfM photogrammetry uses multiple overlapping images and an image-based terrain extraction algorithm to reconstruct the location of individual points from the photographs in 3-D space. The results of this technique can be used to construct point clouds, orthomosaics, and digital surface models that can be imported into GIS and related software for further study. The accuracy of the reconstructed outcrops, with respect to an absolute framework, is improved with geotagged images or independently gathered ground control points, and the internal accuracy of 3-D reconstructions is sufficient for sub-meter scale measurements. We demonstrate this approach with a case study from central Utah, USA, where UAV-SfM data can help delineate complex features within Jurassic fluvial sandstones.
Lu, Donghuan; Popuri, Karteek; Ding, Gavin Weiguang; Balachandar, Rakesh; Beg, Mirza Faisal
2018-04-09
Alzheimer's Disease (AD) is a progressive neurodegenerative disease for which biomarkers based on pathophysiology may be able to provide objective measures for disease diagnosis and staging. Neuroimaging scans acquired with MRI and metabolism images obtained by FDG-PET provide in-vivo measurements of structure and function (glucose metabolism) in a living brain. It is hypothesized that combining multiple image modalities that provide complementary information could help improve early diagnosis of AD. In this paper, we propose a novel deep-learning-based framework to discriminate individuals with AD utilizing a multimodal and multiscale deep neural network. Our method delivers 82.4% accuracy in identifying individuals with mild cognitive impairment (MCI) who will convert to AD 3 years prior to conversion (86.4% combined accuracy for conversion within 1-3 years), 94.23% sensitivity in classifying individuals with a clinical diagnosis of probable AD, and 86.3% specificity in classifying non-demented controls, improving upon results in the published literature.
Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City
2016-01-01
The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast. PMID:27855155
Schlegel, Claudia; Bonvin, Raphael; Rethans, Jan Joost; van der Vleuten, Cees
2014-10-14
Introduction: High-stakes objective structured clinical examinations (OSCEs) with standardized patients (SPs) should offer the same conditions to all candidates throughout the exam. SP performance should therefore be as close to the original role script as possible during all encounters. In this study, we examined the impact of video in SP training on SPs' role accuracy, investigating how the use of different types of video during SP training improves the accuracy of SP portrayal. Methods: In a randomized post-test control-group design, three groups of 12 SPs each, trained with different types of video, were compared with one control group of 12 SPs trained without video. The three intervention groups used role-modeling video, performance-feedback video, or a combination of both. Each SP in each group had four student encounters. Two blinded faculty members rated the 192 video-recorded encounters, using a case-specific rating instrument to assess SPs' role accuracy. Results: SPs trained by video showed significantly (p < 0.001) better role accuracy than SPs trained without video over the four sequential portrayals. There was no difference between the three types of video training. Discussion: Use of video during SP training enhances the accuracy of SP portrayal compared with no video, regardless of the type of video intervention used.
A novel finite volume discretization method for advection-diffusion systems on stretched meshes
NASA Astrophysics Data System (ADS)
Merrick, D. G.; Malan, A. G.; van Rooyen, J. A.
2018-06-01
This work is concerned with spatial advection and diffusion discretization technology within the field of Computational Fluid Dynamics (CFD). In this context, a novel method is proposed, dubbed the Enhanced Taylor Advection-Diffusion (ETAD) scheme. The model equation employed for design of the scheme is the scalar advection-diffusion equation, the industrial application being incompressible laminar and turbulent flow. Developed to be implementable in finite volume codes, ETAD places specific emphasis on improving accuracy on stretched structured and unstructured meshes while considering both advection and diffusion aspects in a holistic manner. A vertex-centered structured and unstructured finite volume scheme is used, and only data available on either side of the volume face is employed. This includes the addition of a so-called mesh stretching metric. Additionally, non-linear blending with the existing NVSF scheme was performed in the interest of robustness and stability, particularly on equispaced meshes. The developed scheme is assessed in terms of accuracy, both analytically and numerically, via comparison with upwind methods including the popular QUICK and CUI techniques. Numerical tests involved the 1-D scalar advection-diffusion equation, a 2-D lid-driven cavity, and a turbulent flow case. Significant improvements in accuracy were achieved, with L2 error reductions of up to 75%.
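For context, the baseline that higher-order schemes such as QUICK, CUI, or ETAD improve on is first-order upwind advection with central diffusion. A one-dimensional periodic finite-volume sketch (the time step must satisfy the usual CFL and diffusion limits):

```python
import numpy as np

def advect_diffuse_fv(phi, u, D, dx, dt, n_steps):
    """March the 1-D scalar advection-diffusion equation on a periodic mesh
    with first-order upwind advection and central diffusion."""
    for _ in range(n_steps):
        if u >= 0:
            adv = u * (phi - np.roll(phi, 1)) / dx        # upwind from the left
        else:
            adv = u * (np.roll(phi, -1) - phi) / dx       # upwind from the right
        dif = D * (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
        phi = phi + dt * (dif - adv)
    return phi
```

Higher-order schemes replace the upwind face value with a multi-point reconstruction, which is where corrections for mesh stretching, such as ETAD's metric, would enter.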
NASA Technical Reports Server (NTRS)
Fetterman, Timothy L.; Noor, Ahmed K.
1987-01-01
Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
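For the eigenvalue part, the classical sensitivity formula for the generalized symmetric problem K x = lambda M x with mass-normalized modes is dlambda/dp = phi^T (dK/dp - lambda dM/dp) phi; the reduction techniques then approximate this with far fewer degrees of freedom. A direct full-order sketch:

```python
import numpy as np
from scipy.linalg import eigh

def eigenvalue_sensitivity(K, M, dK, dM, mode=0):
    """Derivative of the `mode`-th eigenvalue with respect to a design
    parameter p, given dK = dK/dp and dM = dM/dp."""
    lam, phi = eigh(K, M)          # eigh returns M-orthonormal eigenvectors
    v = phi[:, mode]
    return v @ (dK - lam[mode] * dM) @ v
```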
Kushibar, Kaisar; Valverde, Sergi; González-Villà, Sandra; Bernal, Jose; Cabezas, Mariano; Oliver, Arnau; Lladó, Xavier
2018-06-15
Sub-cortical brain structure segmentation in Magnetic Resonance Images (MRI) has long attracted the interest of the research community, as morphological changes in these structures are related to different neurodegenerative disorders. However, manual segmentation of these structures can be tedious and prone to variability, highlighting the need for robust automated segmentation methods. In this paper, we present a novel convolutional neural network based approach for accurate segmentation of the sub-cortical brain structures that combines both convolutional and prior spatial features to improve segmentation accuracy. To further increase accuracy, we propose to train the network using a restricted sample selection that forces the network to learn the most difficult parts of the structures. We evaluate the accuracy of the proposed method on the public MICCAI 2012 challenge and IBSR 18 datasets, comparing it with different traditional and deep learning state-of-the-art methods. On the MICCAI 2012 dataset, our method shows excellent performance, comparable to the best participant strategy in the challenge, while performing significantly better than state-of-the-art techniques such as FreeSurfer and FIRST. On the IBSR 18 dataset, our method also exhibits a significant increase in performance with respect to FreeSurfer and FIRST, and comparable or better results than other recent deep learning approaches. Moreover, our experiments show that both the addition of the spatial priors and the restricted sampling strategy have a significant effect on the accuracy of the proposed method. To encourage reproducibility and the use of the proposed method, a public version of our approach is available for the neuroimaging community to download.
Improve the prediction of RNA-binding residues using structural neighbours.
Li, Quan; Cao, Zanxia; Liu, Haiyan
2010-03-01
The interactions of RNA-binding proteins (RBPs) with RNA play key roles in managing some of the cell's basic functions. The identification and prediction of RNA-binding sites are important for understanding the RNA-binding mechanism. Computational approaches are being developed to predict RNA-binding residues based on sequence- or structure-derived features; to achieve higher prediction accuracy, improvements on current prediction methods are necessary. We identified that the structural neighbours of RNA-binding and non-RNA-binding residues have different amino acid compositions. Combining this structure-derived feature with evolutionary (PSSM) and other structural information (secondary structure and solvent accessibility) significantly improves the predictions over existing methods. Using a multiple linear regression approach and 6-fold cross-validation, our best model achieves an overall correct rate of 87.8% and an MCC of 0.47, with a specificity of 93.4%, and correctly predicts 52.4% of the RNA-binding residues for a dataset containing 107 non-homologous RNA-binding proteins. Compared with existing methods, including the amino acid compositions of structural neighbours leads to a clear improvement. A web server was developed for predicting RNA-binding residues in a protein sequence (or structure), which is available at http://mcgill.3322.org/RNA/.
Geometric structure of anatase TiO2(101)
NASA Astrophysics Data System (ADS)
Treacy, Jon P. W.; Hussain, Hadeel; Torrelles, Xavier; Grinter, David C.; Cabailh, Gregory; Bikondoa, Oier; Nicklin, Christopher; Selcuk, Sencer; Selloni, Annabella; Lindsay, Robert; Thornton, Geoff
2017-02-01
Surface x-ray diffraction has been used to determine the quantitative structure of the (101) termination of anatase TiO2. The atomic displacements from the bulk-terminated structure are significantly different from those previously calculated with density functional theory (DFT) methods, with discrepancies for the Ti displacements in the [10-1] direction of up to 0.3 Å. DFT calculations carried out as part of the current paper provide much better agreement through improved accuracy and thicker slab models.
Erdodi, Laszlo A; Tyson, Bradley T; Shahein, Ayman G; Lichtenstein, Jonathan D; Abeare, Christopher A; Pelletier, Chantalle L; Zuccato, Brandon G; Kucharski, Brittany; Roth, Robert M
2017-05-01
The Recognition Memory Test (RMT) and Word Choice Test (WCT) are structurally similar but psychometrically different. Previous research demonstrated that adding a time-to-completion cutoff improved the classification accuracy of the RMT; however, the contribution of WCT time cutoffs to the detection of invalid responding has not been investigated. The present study was designed to evaluate the classification accuracy of time-to-completion on the WCT compared to the accuracy score and the RMT. Both tests were administered to 202 adults (M age = 45.3 years, SD = 16.8; 54.5% female) clinically referred for neuropsychological assessment, in counterbalanced order as part of a larger battery of cognitive tests. Participants obtained lower and more variable scores on the RMT (M = 44.1, SD = 7.6) than on the WCT (M = 46.9, SD = 5.7). Similarly, they took longer to complete the recognition trial on the RMT (M = 157.2 s, SD = 71.8) than the WCT (M = 137.2 s, SD = 75.7). The optimal cutoff on the RMT (≤43) produced .60 sensitivity at .87 specificity. The optimal cutoff on the WCT (≤47) produced .57 sensitivity at .87 specificity. Time cutoffs produced comparable classification accuracies for both the RMT (≥192 s; .48 sensitivity at .88 specificity) and the WCT (≥171 s; .49 sensitivity at .91 specificity). They also identified an additional 6-10% of the invalid profiles missed by accuracy score cutoffs, while maintaining good specificity (.93-.95). Functional equivalence was reached at accuracy scores ≤43 (RMT) and ≤47 (WCT) or times-to-completion ≥192 s (RMT) and ≥171 s (WCT). Time-to-completion cutoffs are valuable additions to both tests: they can function as independent validity indicators or enhance the sensitivity of accuracy scores without requiring additional measures or extending standard administration time.
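Evaluating a candidate time-to-completion cutoff of this kind is a simple thresholding exercise; the sketch below computes sensitivity and specificity for any cutoff, with all variable names assumed:

```python
import numpy as np

def cutoff_accuracy(times, is_invalid, cutoff_seconds):
    """Sensitivity/specificity of flagging completion times >= cutoff
    as invalid responding."""
    flagged = np.asarray(times) >= cutoff_seconds
    invalid = np.asarray(is_invalid, dtype=bool)
    sensitivity = flagged[invalid].mean()       # invalid profiles caught
    specificity = (~flagged[~invalid]).mean()   # valid profiles passed
    return sensitivity, specificity
```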
Object-Based Dense Matching Method for Maintaining Structure Characteristics of Linear Buildings
Yan, Yiming; Qiu, Mingjie; Zhao, Chunhui; Wang, Liguo
2018-01-01
In this paper, we propose a novel object-based dense matching method designed for high-precision disparity maps of building objects in urban areas, which can maintain accurate object structure characteristics. The proposed framework comprises three stages. First, an improved edge line extraction method is proposed so that edge segments fit closely to building outlines. Second, a fusion method is proposed for the outlines under the constraint of straight lines, which maintains the structural attributes of buildings with parallel or vertical edges; this is very useful for the dense matching step. Finally, we propose an edge constraint and outline compensation (ECAOC) dense matching method to maintain building structural characteristics in the disparity map. In the proposed method, the improved edge lines are used to optimize the matching search scope and matching template window, and the high-precision building outlines are used to compensate the shape features of building objects. Our method greatly increases the matching accuracy of building objects in urban areas, especially at building edges. In outline extraction experiments, our fusion method demonstrates superiority and robustness on panchromatic images from different satellites at different resolutions. In dense matching experiments, our ECAOC method shows great advantages in matching accuracy for building objects in urban areas compared with three other methods. PMID:29596393
Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui
2017-08-24
In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and training samples. Building extraction plays an important role in urban construction and planning, but several negative effects reduce extraction accuracy, such as resolution limits, poor correction, and terrain influence. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using a digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but they have shortcomings in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical imagery. A better strategy for setting the SSAE network structure is given, and an approach to setting the number and proportion of training samples for better training of the SSAE is presented. The optical data and DSM were combined as input to the optimized SSAE, and after training on the optimized samples, the network extracts buildings with high accuracy and good robustness.
Ilovitsh, Tali; Meiri, Amihai; Ebeling, Carl G.; Menon, Rajesh; Gerton, Jordan M.; Jorgensen, Erik M.; Zalevsky, Zeev
2013-01-01
Localization of a single fluorescent particle with sub-diffraction-limit accuracy is a key capability in localization microscopy. Existing methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve localization accuracies for single emitters that can be an order of magnitude below the conventional resolving capability of optical microscopy. However, these techniques require a sparse distribution of simultaneously activated fluorophores in the field of view, resulting in longer acquisition times for the construction of the full image. In this paper we present the use of a nonlinear image decomposition algorithm termed K-factor, which reduces an image into a nonlinear set of contrast-ordered decompositions whose joint product reassembles the original image. The K-factor technique, when applied to raw data prior to localization, can improve the localization accuracy of standard existing methods and also enables the localization of overlapping particles, allowing increased fluorophore activation density and thereby increased data collection speed. Numerical simulations of fluorescence data with random probe positions, especially at high densities of activated fluorophores, demonstrate an improvement of up to 85% in localization precision compared to single-emitter fitting techniques. Applying the proposed concept to experimental data of cellular structures yielded a 37% improvement in resolution for the same super-resolution image acquisition time, and a 42% decrease in the collection time of super-resolution data with the same resolution. PMID:24466491
Scheid, Anika; Nebel, Markus E
2012-07-09
Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. In this work, we consider the SCFG-based approach and analyze how the quality of generated sample sets and the corresponding prediction accuracy change when different degrees of disturbance are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG-based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to strongly react to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst case, without sacrificing much of the accuracy of the results. Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms.
NASA Astrophysics Data System (ADS)
Liu, Chao; Yang, Guigeng; Zhang, Yiqun
2015-01-01
The electrostatically controlled deployable membrane reflector (ECDMR) is a promising scheme for constructing large-size, high-precision space deployable reflector antennas. This paper presents a novel design method for large-size, small-F/D ECDMRs that accounts for the coupled structural-electrostatic problem. First, the fully coupled structural-electrostatic system is described by a three-field formulation, in which the structure and the passive electric field are modeled by the finite element method, and the deformation of the electrostatic domain is predicted by a finite element formulation of a fictitious elastic structure. A residual formulation of the structural-electrostatic finite element model is established and solved by the Newton-Raphson method, and the coupled structural-electrostatic analysis procedure is summarized. Then, with the aid of this coupled analysis procedure, an integrated optimization method for membrane shape accuracy and stress uniformity is proposed, divided into inner and outer iterative loops. An initial state of relatively high shape accuracy and uniform stress distribution is achieved by applying uniform prestress to the membrane design shape and optimizing the voltages, where the optimal voltages are computed by a sensitivity analysis. The shape accuracy is further improved by iterative prestress modification using the reposition balance method. Finally, the results of the uncoupled and coupled methods are compared, and the proposed optimization method is applied to the design of an ECDMR. The results validate the effectiveness of the proposed method.
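As a minimal illustration of the solution strategy, the sketch below applies Newton-Raphson with a finite-difference Jacobian to an invented two-equation stand-in for the coupled structural-electrostatic residual (the actual three-field finite element residual is far larger; the function and coefficients here are hypothetical):

```python
import numpy as np

def newton_solve(residual, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Solve residual(x) = 0 with Newton-Raphson and a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):            # numerical Jacobian, column by column
            xp = x.copy(); xp[j] += h
            J[:, j] = (residual(xp) - r) / h
        x = x - np.linalg.solve(J, r)
    return x

# Toy coupled system: a 'membrane' displacement u and an 'electrostatic'
# potential v, with a displacement-dependent electrostatic force (invented).
def coupled_residual(x):
    u, v = x
    return np.array([2.0 * u - (1.0 + 0.3 * v**2),   # structural equilibrium
                     v - (2.0 - 0.5 * u)])           # field equation

print(newton_solve(coupled_residual, x0=[0.0, 0.0]))
```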
Deeley, MA; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, EF; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Dawant, BM
2013-01-01
Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration-driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumors in the brain. We tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: STAPLE and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers' segmentations from our previous study to edit. We found that, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy, and improved efficiency with at least a 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building, such as in developing delineation standards, and that both automated methods and perhaps even less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy. PMID:23685866
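For reference, the Dice similarity coefficient used here is 2|A∩B|/(|A|+|B|) over two binary masks; a minimal numpy version (the mask shapes are synthetic, for illustration only):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool); b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Synthetic stand-ins for an automatic contour and its edited version.
auto   = np.zeros((64, 64), bool); auto[20:40, 20:40] = True
edited = np.zeros((64, 64), bool); edited[22:40, 20:42] = True
print(round(dice(auto, edited), 3))
```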
NASA Astrophysics Data System (ADS)
Huang, Xiaokun; Zhang, You; Wang, Jing
2017-03-01
Four-dimensional (4D) cone-beam computed tomography (CBCT) enables motion tracking of anatomical structures and removes artifacts introduced by motion. However, the imaging time/dose of 4D-CBCT is substantially longer/higher than that of traditional 3D-CBCT. We previously developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm to reconstruct high-quality 4D-CBCT from a limited number of projections, reducing the imaging time/dose. However, the accuracy of SMEIR is limited in reconstructing low-contrast regions with fine structure details. In this study, we incorporate biomechanical modeling into the SMEIR algorithm (SMEIR-Bio) to improve the reconstruction accuracy in low-contrast regions with fine details. The efficacy of SMEIR-Bio is evaluated using 11 lung patient cases and compared to that of the original SMEIR algorithm. Qualitative and quantitative comparisons showed that SMEIR-Bio greatly enhances the accuracy of the reconstructed 4D-CBCT volume in low-contrast regions, which can potentially benefit multiple clinical applications including treatment outcome analysis.
A deep learning method for early screening of lung cancer
NASA Astrophysics Data System (ADS)
Zhang, Kunpeng; Jiang, Huiqin; Ma, Ling; Gao, Jianbo; Yang, Xiaopeng
2018-04-01
Lung cancer is the leading cause of cancer-related deaths among men. In this paper, we propose a pulmonary nodule detection method for early screening of lung cancer based on an improved AlexNet model. First, to maintain the same image quality as the existing B/S-architecture PACS system, we parse the DICOM files and convert the original CT images into JPEG format. Second, in view of the large size and complex background of chest CT images, we design the convolutional neural network on the basis of the AlexNet model and a sparse convolution structure. Finally, we train our models with DIGITS, a training tool provided by NVIDIA. The main contribution of this paper is to apply a convolutional neural network to the early screening of lung cancer and to improve the screening accuracy by combining the AlexNet model with the sparse convolution structure. We conducted a series of experiments on chest CT images using the proposed method; the resulting sensitivity and specificity indicate that the method can effectively improve the accuracy of early screening of lung cancer and has clinical significance.
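The DICOM-to-JPEG step described above can be sketched in a few lines; the version below assumes the pydicom and Pillow packages, and the lung window center/width values are illustrative choices, not taken from the paper:

```python
import numpy as np
import pydicom                      # assumed available; reads DICOM files
from PIL import Image

def dicom_to_jpeg(dcm_path, jpg_path, center=-600, width=1500):
    """Convert one CT slice to 8-bit JPEG using a lung window (center/width in HU)."""
    ds = pydicom.dcmread(dcm_path)
    # Convert raw pixel values to Hounsfield units via the DICOM rescale tags.
    hu = ds.pixel_array * float(getattr(ds, "RescaleSlope", 1)) \
         + float(getattr(ds, "RescaleIntercept", 0))
    lo, hi = center - width / 2, center + width / 2
    img = np.clip((hu - lo) / (hi - lo), 0, 1) * 255   # window, then scale to 0..255
    Image.fromarray(img.astype(np.uint8)).save(jpg_path, quality=95)

# dicom_to_jpeg("slice001.dcm", "slice001.jpg")   # hypothetical file names
```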
Distributed Long-Gauge Optical Fiber Sensors Based Self-Sensing FRP Bar for Concrete Structure
Tang, Yongsheng; Wu, Zhishen
2016-01-01
Brillouin scattering-based distributed optical fiber (OF) sensing presents advantages for concrete structure monitoring. However, the finite spatial resolution greatly decreases strain measurement accuracy, especially around cracks, and the brittleness of the optical fiber hinders its further application. In this paper, the distributed OF sensor was first configured as a long-gauge sensor to improve strain measurement accuracy. Then, a new type of self-sensing fiber-reinforced polymer (FRP) bar was developed by embedding the packaged long-gauge OF sensors into an FRP bar, followed by experimental studies on strain sensing, temperature sensing and basic mechanical properties. The results confirmed superior strain sensing properties, namely satisfactory accuracy, repeatability and linearity, as well as excellent mechanical performance. At the same time, the temperature sensing property was not influenced by the long-gauge package, making temperature compensation easy. Furthermore, the bonding performance between the self-sensing FRP bar and concrete was investigated to study its influence on the sensing. Lastly, the sensing performance was further verified with static experiments on a concrete beam reinforced with the proposed self-sensing FRP bar. Therefore, the self-sensing FRP bar has potential applications in long-term structural health monitoring (SHM), serving both as an embedded sensor and as reinforcing material for concrete structures. PMID:26927110
Deng, Lei; Fan, Chao; Zeng, Zhiwen
2017-12-28
Direct prediction of the three-dimensional (3D) structures of proteins from one-dimensional (1D) sequences is a challenging problem. Significant structural characteristics such as solvent accessibility and contact number are essential for deriving restraints in modeling protein folding and protein 3D structure. Thus, accurately predicting these features is a critical step for 3D protein structure building. In this study, we present DeepSacon, a computational method that can effectively predict protein solvent accessibility and contact number using a deep neural network built on stacked autoencoders and a dropout method. The results demonstrate that our proposed DeepSacon achieves a significant improvement in prediction quality compared with state-of-the-art methods. We obtain 0.70 three-state accuracy for solvent accessibility, and 0.33 15-state accuracy and 0.74 Pearson correlation coefficient (PCC) for the contact number, on a dataset of 5729 monomeric soluble globular proteins. We also evaluate the performance on the CASP11 benchmark dataset, where DeepSacon achieves 0.68 three-state accuracy and 0.69 PCC for solvent accessibility and contact number, respectively. We have shown that DeepSacon can reliably predict solvent accessibility and contact number with a stacked sparse autoencoder and a dropout approach.
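A hedged sketch of the kind of architecture described, written with PyTorch (the layer sizes, dropout rate, and class name are invented; the authors' actual network and layer-wise pretraining procedure are not reproduced here):

```python
import torch
import torch.nn as nn

class DeepSaconLike(nn.Module):
    """Stacked-autoencoder-style encoder with dropout (illustrative, not the authors' code)."""
    def __init__(self, n_in=400, hidden=(256, 128), n_out=3, p_drop=0.3):
        super().__init__()
        layers, d = [], n_in
        for h in hidden:                  # encoder layers, as if pretrained layer-wise
            layers += [nn.Linear(d, h), nn.ReLU(), nn.Dropout(p_drop)]
            d = h
        self.encoder = nn.Sequential(*layers)
        self.head = nn.Linear(d, n_out)   # e.g. 3-state solvent accessibility

    def forward(self, x):
        return self.head(self.encoder(x))

model = DeepSaconLike()
logits = model(torch.randn(8, 400))       # batch of 8 windowed sequence features
print(logits.shape)                       # torch.Size([8, 3])
```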
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang, E-mail: cyzhao@pmo.ac.cn
The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, the image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method was developed to improve the astrometry precision for space debris, based on mathematical morphology operators. Variable structural elements along multiple directions are adopted for image transformation, and all the resultant images are then stacked to obtain a final result. To investigate its efficiency, trial observations were made with Global Positioning System satellites, and the astrometry accuracy improvement was obtained by comparison with the reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and the position accuracy of both the objects and the field stars is improved distinctly. Our technique will contribute significantly to optical data reduction and high-precision astrometry for space debris.
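The core idea, grey-level filtering with line-shaped structuring elements at several orientations followed by stacking, can be sketched with scipy.ndimage (the element length, angles, and the synthetic frame are illustrative choices, not the paper's parameters):

```python
import numpy as np
from scipy import ndimage

def line_footprint(length, angle_deg):
    """Boolean line-shaped structuring element at a given orientation."""
    t = np.deg2rad(angle_deg)
    r = (length - 1) / 2
    fp = np.zeros((length, length), bool)
    for s in np.linspace(-r, r, 4 * length):
        fp[int(round(r + s * np.sin(t))), int(round(r + s * np.cos(t)))] = True
    return fp

def directional_reconstruct(img, length=7, angles=(0, 45, 90, 135)):
    """Open the image with line elements along several directions and stack the results."""
    opened = [ndimage.grey_opening(img, footprint=line_footprint(length, a))
              for a in angles]
    return np.max(opened, axis=0)   # stacking: keep the best directional response

frame = np.random.poisson(8, (128, 128)).astype(float)  # stand-in for a CCD frame
frame[60, 20:100] += 40                                 # a streak-like object trail
print(directional_reconstruct(frame).shape)
```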
Magnetic resonance imaging of the preterm infant brain.
Doria, Valentina; Arichi, Tomoki; Edwards, David A
2014-01-01
Despite improvements in neonatal care, survivors of preterm birth are still at a significantly increased risk of developing life-long neurological difficulties, including cerebral palsy and cognitive impairment. Cranial ultrasound is routinely used in neonatal practice but has a low sensitivity for identifying later neurodevelopmental difficulties. Magnetic resonance imaging (MRI) can identify intracranial abnormalities with greater diagnostic accuracy in preterm infants, and could theoretically improve the planning and targeting of long-term neurodevelopmental care, reduce parental stress and unplanned healthcare utilisation, and ultimately improve healthcare cost-effectiveness. Furthermore, MR imaging offers the advantage of allowing quantitative assessment of the integrity, growth and function of intracranial structures, thereby providing the means to develop sensitive biomarkers that may be predictive of later neurological impairment. However, further work is needed to define the accuracy and value of diagnosis by MR and the technique's precise role in care pathways for preterm infants.
NASA Astrophysics Data System (ADS)
Yu, Zhicheng; Peng, Kai; Liu, Xiaokang; Pu, Hongji; Chen, Ziran
2018-05-01
High-precision displacement sensors, which can measure large displacements with nanometer resolution, are key components in many ultra-precision fabrication machines. In this paper, a new capacitive nanometer displacement sensor with a differential sensing structure is proposed for long-range linear displacement measurements, based on an approach termed time grating. Analytical models established using electric field coupling theory and an area integral method indicate that common-mode interference results in a first-harmonic error in the measurement results. To reduce the common-mode interference, the proposed sensor design employs a differential sensing structure, which adopts a second group of induction electrodes spatially separated from the first group by a half-pitch length. Experimental results based on a prototype sensor demonstrate that the measurement accuracy and stability of the sensor are substantially improved after adopting the differential sensing structure. The prototype sensor achieves a measurement accuracy of ±200 nm over its full 200 mm measurement range.
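The common-mode cancellation that motivates the half-pitch electrode offset can be verified numerically. In the toy model below (unit pitch; the interference term is invented), the second electrode group's signal is the first's shifted by half a pitch, so subtracting the two removes the common-mode term exactly:

```python
import numpy as np

x = np.linspace(0, 2, 4000)                 # displacement in pitch units
pitch = 1.0
signal = np.sin(2 * np.pi * x / pitch)      # ideal induced signal
cm = 0.2 * np.sin(2 * np.pi * 0.5 * x) + 0.05   # invented common-mode term + offset

e1 = signal + cm                                        # first electrode group
e2 = np.sin(2 * np.pi * (x - pitch / 2) / pitch) + cm   # group offset by half a pitch
diff = 0.5 * (e1 - e2)                                  # differential output

# The common-mode term cancels; only the displacement-dependent signal remains.
print(np.max(np.abs(diff - signal)))        # ~0 up to floating-point error
```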
Protein structural similarity search by Ramachandran codes
Lo, Wei-Cheng; Huang, Po-Jung; Chang, Chih-Hung; Lyu, Ping-Chiang
2007-01-01
Background Protein structural data has increased exponentially, such that fast and accurate tools are necessary for structure similarity search. To improve search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, accuracy is usually sacrificed and the speed is still unable to match that of sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Classical sequence similarity search methods can then be applied to structural similarity search. Its accuracy is similar to Combinatorial Extension (CE) and it works over 243,000 times faster, searching 34,000 proteins in 0.34 s with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented as a web service and a stand-alone Java program able to run on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. These search tools should be applicable to automated, high-throughput functional annotation and prediction for the ever-increasing number of published protein structures in this post-genomic era. PMID:17716377
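A minimal sketch of the linear-encoding idea follows (the four cluster centers are invented for illustration; SARST derives its alphabet from nearest-neighbor clustering of a Ramachandran map, not from these values):

```python
import numpy as np

CENTERS = {                      # (phi, psi) in degrees -> code letter (illustrative)
    "H": (-60.0, -45.0),         # alpha-helical region
    "E": (-120.0, 130.0),        # beta-strand region
    "L": (60.0, 45.0),           # left-handed helix region
    "T": (-75.0, 150.0),         # polyproline/turn-like region
}

def angdist(a, b):
    """Distance between two (phi, psi) pairs with 360-degree periodicity."""
    d = np.abs(np.asarray(a) - np.asarray(b)) % 360.0
    return np.linalg.norm(np.minimum(d, 360.0 - d))

def encode(dihedrals):
    """Map a list of (phi, psi) pairs to a 1D text string."""
    return "".join(min(CENTERS, key=lambda k: angdist(CENTERS[k], pp))
                   for pp in dihedrals)

s = encode([(-62, -41), (-58, -47), (-119, 127), (-130, 135)])
print(s)   # 'HHEE'; such strings can then be compared with sequence aligners
```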
Improving Translational Accuracy between Dash-Wedge Diagrams and Newman Projections
ERIC Educational Resources Information Center
Hutchison, John M.
2017-01-01
The use of structural representations to convey spatial and chemical information is an integral part of organic chemistry. As a result, students must acquire skills to interpret and translate between the various diagrammatic forms. This article summarizes the skills sets, problem-solving strategies, and identified student difficulties in…
Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks
NASA Astrophysics Data System (ADS)
Zhu, Shijia; Wang, Yadong
2015-12-01
Dynamic Bayesian Networks (DBNs) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Their standard assumption is 'stationarity', and several research efforts have therefore recently been proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy, and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of a Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces the search space, thereby substantially improving computational efficiency. Additionally, we derive a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which significantly improves reconstruction accuracy and largely reduces over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to the state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates consistently high prediction accuracy and significantly improved computational efficiency, even with no prior knowledge and parameter settings.
NASA Astrophysics Data System (ADS)
Luiza Bondar, M.; Hoogeman, Mischa; Schillemans, Wilco; Heijmen, Ben
2013-08-01
For online adaptive radiotherapy of cervical cancer, fast and accurate image segmentation is required to facilitate daily treatment adaptation. Our aim was twofold: (1) to test and compare three intra-patient automated segmentation methods for the cervix-uterus structure in CT-images and (2) to improve the segmentation accuracy by including prior knowledge on the daily bladder volume or on the daily coordinates of implanted fiducial markers. The tested methods were: shape deformation (SD) and atlas-based segmentation (ABAS) using two non-rigid registration methods: demons and a hierarchical algorithm. Tests on 102 CT-scans of 13 patients demonstrated that the segmentation accuracy significantly increased by including the bladder volume predicted with a simple 1D model based on a manually defined bladder top. Moreover, manually identified implanted fiducial markers significantly improved the accuracy of the SD method. For patients with large cervix-uterus volume regression, the use of CT-data acquired toward the end of the treatment was required to improve segmentation accuracy. Including prior knowledge, the segmentation results of SD (Dice similarity coefficient 85 ± 6%, error margin 2.2 ± 2.3 mm, average time around 1 min) and of ABAS using hierarchical non-rigid registration (Dice 82 ± 10%, error margin 3.1 ± 2.3 mm, average time around 30 s) support their use for image guided online adaptive radiotherapy of cervical cancer.
Expected Improvements in VLBI Measurements of the Earth's Orientation
NASA Technical Reports Server (NTRS)
Ma, Chopo
2003-01-01
Measurements of the Earth's orientation since the 1970s using space geodetic techniques have provided a continually expanding and improving data set for studies of the Earth's structure and the distribution of mass and angular momentum. The accuracy of current one-day measurements is better than 100 microarcsec for the motion of the pole with respect to the celestial and terrestrial reference frames and better than 3 microsec for the rotation around the pole. VLBI uniquely provides the three Earth orientation parameters (nutation and UT1) that relate the Earth to the extragalactic celestial reference frame. The accuracy and resolution of the VLBI Earth orientation time series can be expected to improve substantially in the near future because of refinements in the realization of the celestial reference frame, improved modeling of the troposphere and non-linear station motions, larger observing networks, optimized scheduling, deployment of disk-based Mark V recorders, full use of Mark IV capabilities, and e-VLBI. More radical future technical developments will be discussed.
Improved shallow trench isolation and gate process control using scatterometry based metrology
NASA Astrophysics Data System (ADS)
Rudolph, P.; Bradford, S. M.
2005-05-01
The ability to control critical dimensions of structures on semiconductor devices is essential to improving die yield and device performance. As geometries shrink, the accuracy of metrology equipment has increasingly become a contributing factor to the inability to detect shifts that result in yield loss. Scatterometry provides optical measurements that better enable process control of critical dimensions. Superior precision, accuracy, and higher throughput can be achieved more cost-effectively through the use of this technology in production facilities. This paper outlines the implementation of scatterometry-based metrology in a production facility and presents its accuracy advantage over conventional scanning electron microscope (SEM) measurement. The scatterometry tool used has demonstrated repeatability on the order of 3σ < 1 nm at STI-Etch-FICD for CD and trench depth (TD), and side wall angle (SWA) measurements to within 0.1 degrees. Poly CD also shows 3σ < 1 nm, and poly thickness measurement 3σ < 2.5 Å. Scatterometry capabilities include measurement of CD, structure height and trench depth, sidewall angle (SWA), and film thickness. The greater accuracy and the addition of in-situ trench depth and sidewall angle measurements have provided new capabilities. There are inherent difficulties in implementing scatterometry in production wafer fabs; difficulties with photoresist measurements, film characterization and stack set-up will be discussed. In addition, the quantity of data generated poses challenges in organizing and storing it effectively. A comparison of the advantages and shortcomings of the method is presented.
Accurate Energies and Orbital Description in Semi-Local Kohn-Sham DFT
NASA Astrophysics Data System (ADS)
Lindmaa, Alexander; Kuemmel, Stephan; Armiento, Rickard
2015-03-01
We present our progress on a scheme in semi-local Kohn-Sham density-functional theory (KS-DFT) for improving the orbital description while retaining the level of accuracy of the usual semi-local exchange-correlation (xc) functionals. DFT is a widely used tool for first-principles calculations of the properties of materials. A given task normally requires a balance of accuracy and computational cost, which is well achieved with semi-local DFT. However, commonly used semi-local xc functionals have important shortcomings that can often be attributed to features of the corresponding xc potential. One shortcoming is an overly delocalized representation of localized orbitals. Recently a semi-local GGA-type xc functional was constructed to address these issues; however, it trades off accuracy of the total energy. We discuss the source of this error in terms of a surplus energy contribution in the functional that needs to be accounted for, and offer a remedy that formally stays within KS-DFT and does not significantly increase the computational effort. The end result is a scheme that combines accurate total energies (e.g., relaxed geometries) with an improved orbital description (e.g., improved band structure).
Murphy, S F; Lenihan, L; Orefuwa, F; Colohan, G; Hynes, I; Collins, C G
2017-05-01
The discharge letter is a key component of the communication pathway between the hospital and primary care. Accuracy and timeliness of delivery are crucial to ensure continuity of patient care. Electronic discharge summaries (EDS) and prescriptions have been shown to improve the quality of discharge information for general practitioners (GPs). The aim of this study was to evaluate the effect of a new EDS on GP satisfaction levels and the accuracy of discharge diagnoses. A GP survey was carried out whereby semi-structured interviews were conducted with 13 GPs from three primary care centres that receive a high volume of discharge letters from the hospital. A chart review was carried out on 90 charts to compare the accuracy of ICD-10 coding by Non-Consultant Hospital Doctors (NCHDs) with that of trained Hospital In-Patient Enquiry (HIPE) coders. GP satisfaction levels were over 90% for most aspects of the EDS, including amount of information (97%), accuracy (95%), GP information and follow-up (97%) and medications (91%). 70% of GPs received the EDS within 2 weeks. ICD-10 coding of discharge diagnoses by NCHDs had an accuracy of 33%, compared with 95.6% when done by trained coders (p < 0.00001). The introduction of the EDS and prescription has improved the quality and timeliness of communication with primary care and has led to very high satisfaction ratings among GPs. ICD-10 coding was found to be grossly inaccurate when carried out by NCHDs, and this task is more appropriately performed by trained coders.
Template-based structure modeling of protein-protein interactions
Szilagyi, Andras; Zhang, Yang
2014-01-01
The structure of protein-protein complexes can be constructed by using the known structures of other protein complexes as templates. The complex structure templates are generally detected either by homology-based sequence alignments or, given the structure of the monomer components, by structure-based comparisons. Critical improvements have been made in recent years by utilizing interface recognition and by recombining monomer and complex template libraries. Encouraging progress has also been witnessed in genome-wide applications of template-based modeling, with modeling accuracy comparable to high-throughput experimental data. Nevertheless, bottlenecks remain due to the incompleteness of the protein-protein complex structure library and the lack of methods for distant homologous template identification and full-length complex structure refinement. PMID:24721449
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candel, Arno; Li, Z.; Ng, C.
The Compact Linear Collider (CLIC) provides a path to a multi-TeV accelerator to explore the energy frontier of High Energy Physics. Its novel two-beam accelerator concept envisions rf power transfer to the accelerating structures from a separate high-current decelerator beam line consisting of power extraction and transfer structures (PETS). It is critical to numerically verify the fundamental and higher-order mode properties in and between the two beam lines with high accuracy and confidence. To solve these large-scale problems, SLAC's parallel finite element electromagnetic code suite ACE3P is employed. Using curvilinear conformal meshes and higher-order finite element vector basis functions, unprecedented accuracy and computational efficiency are achieved, enabling high-fidelity modeling of complex detuned structures such as the CLIC TD24 accelerating structure. In this paper, time-domain simulations of wakefield coupling effects in the combined system of PETS and the TD24 structures are presented. The results will help to identify potential issues and provide new insights on the design, leading to further improvements on the novel CLIC two-beam accelerator scheme.
Davey, James A; Chica, Roberto A
2014-05-01
Multistate computational protein design (MSD) with backbone ensembles approximating conformational flexibility can predict higher quality sequences than single-state design with a single fixed backbone. However, it is currently unclear what characteristics of backbone ensembles are required for the accurate prediction of protein sequence stability. In this study, we aimed to improve the accuracy of protein stability predictions made with MSD by using a variety of backbone ensembles to recapitulate the experimentally measured stability of 85 Streptococcal protein G domain β1 sequences. Ensembles tested here include an NMR ensemble as well as those generated by molecular dynamics (MD) simulations, by Backrub motions, and by PertMin, a new method that we developed involving the perturbation of atomic coordinates followed by energy minimization. MSD with the PertMin ensembles resulted in the most accurate predictions by providing the highest number of stable sequences in the top 25, and by correctly binning sequences as stable or unstable with the highest success rate (≈90%) and the lowest number of false positives. The performance of PertMin ensembles is due to the fact that their members closely resemble the input crystal structure and have low potential energy. Conversely, the NMR ensemble as well as those generated by MD simulations at 500 or 1000 K reduced prediction accuracy due to their low structural similarity to the crystal structure. The ensembles tested herein thus represent on- or off-target models of the native protein fold and could be used in future studies to design for desired properties other than stability. Copyright © 2013 Wiley Periodicals, Inc.
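A toy version of the PertMin idea, perturb the coordinates and re-minimize, can be written with scipy (the pairwise potential and all parameters below are stand-ins, not the force field used in the study):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def energy(flat, n):
    """Toy Lennard-Jones-like pairwise potential; a stand-in for a real force field."""
    x = flat.reshape(n, 3)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(x[i] - x[j]) + 1e-9
            e += (1.0 / r)**12 - 2.0 * (1.0 / r)**6
    return e

def pertmin_ensemble(coords, n_members=5, sigma=0.1):
    """Perturb atomic coordinates, then energy-minimize each perturbed copy."""
    n = coords.shape[0]
    members = []
    for _ in range(n_members):
        start = coords + rng.normal(0.0, sigma, coords.shape)  # small random kick
        res = minimize(energy, start.ravel(), args=(n,), method="L-BFGS-B")
        members.append(res.x.reshape(n, 3))
    return members

seed = rng.normal(0, 1.5, (6, 3))   # invented stand-in for a crystal structure
ens = pertmin_ensemble(seed)
print(len(ens), ens[0].shape)
```

Because each member starts close to the input structure and is relaxed to low potential energy, the resulting ensemble stays near the crystal conformation, which is the property the study credits for PertMin's prediction accuracy.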
Semi-Local DFT Functionals with Exact-Exchange-Like Features: Beyond the AK13
NASA Astrophysics Data System (ADS)
Armiento, Rickard
The Armiento-Kümmel functional from 2013 (AK13) is a non-empirical semi-local exchange functional of generalized gradient approximation (GGA) form in Kohn-Sham (KS) density functional theory (DFT). Recent works have established that AK13 gives improved electronic-structure exchange features over other semi-local methods, with a qualitatively improved orbital description and band structure. For example, the Kohn-Sham band gap is greatly extended, as it is for exact exchange. This talk outlines recent efforts towards new exchange-correlation functionals based on, and extending, the AK13 design ideas. The aim is to improve the quantitative accuracy and the description of energetics, and to address other issues found with the original formulation. Swedish e-Science Research Centre (SeRC).
Marginal space learning for efficient detection of 2D/3D anatomical structures in medical images.
Zheng, Yefeng; Georgescu, Bogdan; Comaniciu, Dorin
2009-01-01
Recently, marginal space learning (MSL) was proposed as a generic approach for automatic detection of 3D anatomical structures in many medical imaging modalities [1]. To accurately localize a 3D object, we need to estimate nine pose parameters (three for position, three for orientation, and three for anisotropic scaling). Instead of exhaustively searching the original nine-dimensional pose parameter space, only low-dimensional marginal spaces are searched in MSL to improve the detection speed. In this paper, we apply MSL to 2D object detection and perform a thorough comparison between MSL and the alternative full space learning (FSL) approach. Experiments on left ventricle detection in 2D MRI images show MSL outperforms FSL in both speed and accuracy. In addition, we propose two novel techniques, constrained MSL and nonrigid MSL, to further improve the efficiency and accuracy. In many real applications, a strong correlation may exist among pose parameters in the same marginal spaces. For example, a large object may have large scaling values along all directions. Constrained MSL exploits this correlation for further speed-up. The original MSL only estimates the rigid transformation of an object in the image, therefore cannot accurately localize a nonrigid object under a large deformation. The proposed nonrigid MSL directly estimates the nonrigid deformation parameters to improve the localization accuracy. The comparison experiments on liver detection in 226 abdominal CT volumes demonstrate the effectiveness of the proposed methods. Our system takes less than a second to accurately detect the liver in a volume.
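The computational saving of searching marginal spaces sequentially can be seen in a toy pose search (2D position plus scale; the scoring function below is an invented stand-in for a trained detector):

```python
import numpy as np

rng = np.random.default_rng(2)
true_pose = np.array([0.3, -0.2, 0.8])     # (x, y, scale) of the toy object

def score(poses):
    """Stand-in detector score: higher when closer to the true pose."""
    return -np.linalg.norm(poses - true_pose, axis=1)

# Stage 1: search position only, keep the top candidates.
pos = rng.uniform(-1, 1, (2000, 2))
cand = np.column_stack([pos, np.full(len(pos), 1.0)])   # scale fixed at a default
keep = cand[np.argsort(score(cand))[-50:]]

# Stage 2: extend only the surviving candidates with scale hypotheses.
scales = np.linspace(0.5, 1.5, 20)
ext = np.array([[x, y, s] for x, y, _ in keep for s in scales])
best = ext[np.argmax(score(ext))]
print(np.round(best, 2))   # close to true_pose

# Exhaustive search would score 2000 * 20 = 40000 poses;
# the staged search scored only 2000 + 50 * 20 = 3000.
```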
Development of a Brillouin scattering based distributed fibre optic strain sensor
NASA Astrophysics Data System (ADS)
Brown, Anthony Wayne
2001-07-01
The parameters of the Brillouin spectrum of an optical fibre depend upon the strain and temperature conditions of the fibre. As a result, fibre optic distributed sensors based on Brillouin scattering can measure strain and temperature in arbitrary regions of a sensing fibre. In the past, such sensors have often been demonstrated under laboratory conditions, establishing the principle of operation. Although some field tests of temperature sensing have been reported, the actual deployment of such sensors in the field for strain measurement has been limited by poor spatial resolution (typically 1 m or more) and poor strain accuracy (±100 με). Also, cross-sensitivity of the Brillouin spectrum to temperature further reduces the accuracy of strain measurement, while long acquisition times hinder field use. The high level of user knowledge and lack of automation required to operate the equipment is another limiting factor of the only commercially available unit. The potential benefits of distributed measurements for the instrumentation of civil structures are great, provided that the above limitations are overcome. However, before this system is used with confidence by practitioners, it is essential that it can be operated effectively in field conditions. In light of this, the fibre optics group at the University of New Brunswick has been developing an automated system for field measurement of strain in civil structures, particularly in reinforced concrete. The development of the sensing system hardware and software was the main focus of this thesis, made possible in part by observation of the Brillouin spectrum when using very short light pulses (<10 ns). The end product of the development is a sensor whose spatial resolution has been improved to 100 mm. Measurement techniques are presented that improve system performance to measure strain to an accuracy of 10 με and allow the simultaneous measurement of strain and temperature to an accuracy of 204 με and 3 °C. Finally, the results of field measurement of strain on a concrete structure are presented.
Protein docking prediction using predicted protein-protein interface.
Li, Bin; Kihara, Daisuke
2012-01-10
Many important cellular processes are carried out by protein complexes. To provide physical pictures of interacting proteins, many computational protein-protein docking prediction methods have been developed in the past. However, it is still difficult to identify the correct docking complex structure within the top ranks among alternative conformations. We present a novel protein docking algorithm that utilizes imperfect protein-protein binding interface prediction to guide protein docking. Since the accuracy of protein binding site prediction varies from case to case, the challenge is to develop a method that does not deteriorate, but improves, docking results when using a binding site prediction that may not be 100% accurate. The algorithm, named PI-LZerD (using Predicted Interface with Local 3D Zernike descriptor-based Docking algorithm), is based on a pairwise protein docking prediction algorithm, LZerD, which we developed earlier. PI-LZerD starts by performing docking prediction using the provided protein-protein binding interface prediction as constraints, followed by a second round of docking with updated docking interface information to further improve the docking conformation. Benchmark results on bound and unbound cases show that PI-LZerD consistently improves docking prediction accuracy compared with docking without binding site prediction or with the binding site prediction used only as post-filtering. We have developed PI-LZerD, a pairwise docking algorithm that uses imperfect protein-protein binding interface prediction to improve docking accuracy. PI-LZerD consistently showed better prediction accuracy than alternative methods in a series of benchmark experiments, including docking using actual docking interface site predictions as well as unbound docking cases.
Racette, Lyne; Chiou, Christine Y.; Hao, Jiucang; Bowd, Christopher; Goldbaum, Michael H.; Zangwill, Linda M.; Lee, Te-Won; Weinreb, Robert N.; Sample, Pamela A.
2009-01-01
Purpose To investigate whether combining optic disc topography and short-wavelength automated perimetry (SWAP) data improves the diagnostic accuracy of relevance vector machine (RVM) classifiers for detecting glaucomatous eyes compared to using each test alone. Methods One eye of 144 glaucoma patients and 68 healthy controls from the Diagnostic Innovations in Glaucoma Study were included. RVM were trained and tested with cross-validation on optimized (backward elimination) SWAP features (thresholds plus age; pattern deviation (PD); total deviation (TD)) and on Heidelberg Retina Tomograph II (HRT) optic disc topography features, independently and in combination. RVM performance was also compared to two HRT linear discriminant functions (LDF) and to SWAP mean deviation (MD) and pattern standard deviation (PSD). Classifier performance was measured by the area under the receiver operating characteristic curves (AUROCs) generated for each feature set and by the sensitivities at set specificities of 75%, 90% and 96%. Results RVM trained on combined HRT and SWAP thresholds plus age had significantly higher AUROC (0.93) than RVM trained on HRT (0.88) and SWAP (0.76) alone. AUROCs for the SWAP global indices (MD: 0.68; PSD: 0.72) offered no advantage over SWAP thresholds plus age, while the LDF AUROCs were significantly lower than RVM trained on the combined SWAP and HRT feature set and on HRT alone feature set. Conclusions Training RVM on combined optimized HRT and SWAP data improved diagnostic accuracy compared to training on SWAP and HRT parameters alone. Future research may identify other combinations of tests and classifiers that can also improve diagnostic accuracy. PMID:19528827
Thomas, Cibu; Ye, Frank Q; Irfanoglu, M Okan; Modi, Pooja; Saleem, Kadharbatcha S; Leopold, David A; Pierpaoli, Carlo
2014-11-18
Tractography based on diffusion-weighted MRI (DWI) is widely used for mapping the structural connections of the human brain. Its accuracy is known to be limited by technical factors affecting in vivo data acquisition, such as noise, artifacts, and data undersampling resulting from scan time constraints. It generally is assumed that improvements in data quality and implementation of sophisticated tractography methods will lead to increasingly accurate maps of human anatomical connections. However, assessing the anatomical accuracy of DWI tractography is difficult because of the lack of independent knowledge of the true anatomical connections in humans. Here we investigate the future prospects of DWI-based connectional imaging by applying advanced tractography methods to an ex vivo DWI dataset of the macaque brain. The results of different tractography methods were compared with maps of known axonal projections from previous tracer studies in the macaque. Despite the exceptional quality of the DWI data, none of the methods demonstrated high anatomical accuracy. The methods that showed the highest sensitivity showed the lowest specificity, and vice versa. Additionally, anatomical accuracy was highly dependent upon parameters of the tractography algorithm, with different optimal values for mapping different pathways. These results suggest that there is an inherent limitation in determining long-range anatomical projections based on voxel-averaged estimates of local fiber orientation obtained from DWI data that is unlikely to be overcome by improvements in data acquisition and analysis alone.
A novel approach to multiple sequence alignment using Hadoop data grids.
Sudha Sadasivam, G; Baktavatchalam, G
2010-01-01
Multiple alignment of protein sequences helps to determine evolutionary linkage and to predict molecular structures. The factors to be considered while aligning multiple sequences are the speed and accuracy of the alignment. Although dynamic programming algorithms produce accurate alignments, they are computation-intensive. In this paper we propose a time-efficient approach to sequence alignment that also produces quality alignments. The dynamic nature of the algorithm, coupled with the data and computational parallelism of Hadoop data grids, improves the accuracy and speed of sequence alignment. The principle of block splitting in Hadoop, coupled with its scalability, facilitates the alignment of very large sequences.
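The block-splitting and data-parallel scoring idea can be illustrated without Hadoop itself; the toy below scores all sequence pairs concurrently with Python's thread pool (the sequences and the difflib similarity measure are placeholders for a real pairwise alignment step):

```python
from concurrent.futures import ThreadPoolExecutor
from difflib import SequenceMatcher
from itertools import combinations

# Invented toy sequences standing in for a large protein dataset.
seqs = ["MKTAYIAKQR", "MKTAHIAKQR", "MSTAYIAKQR", "GKTAYLAKQR"]

def pair_score(pair):
    """Score one sequence pair; each call is an independent unit of work."""
    i, j = pair
    return i, j, SequenceMatcher(None, seqs[i], seqs[j]).ratio()

with ThreadPoolExecutor(max_workers=4) as pool:   # one 'node' per worker
    scores = list(pool.map(pair_score, combinations(range(len(seqs)), 2)))

# The pair scores would then guide a progressive multiple alignment.
for i, j, s in scores:
    print(seqs[i], seqs[j], round(s, 2))
```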
Chow, John W; Stokic, Dobrivoje S
2018-03-01
We examined changes in the variability, accuracy, frequency composition, and temporal regularity of the force signal from vision-guided to memory-guided force-matching tasks in 17 subacute stroke and 17 age-matched healthy subjects. Subjects performed a unilateral isometric knee extension at 10, 30, and 50% of peak torque [maximum voluntary contraction (MVC)] for 10 s (3 trials each). Visual feedback was removed at the 5-s mark in the first two trials (feedback withdrawal), and 30 s after the second trial the subjects were asked to produce the target force without visual feedback (force recall). The coefficient of variation and constant error were used to quantify force variability and accuracy. Force structure was assessed by the median frequency, relative spectral power in the 0-3-Hz band, and sample entropy of the force signal. At 10% MVC, the force signal in subacute stroke subjects became steadier, more broadband, and temporally more irregular after the withdrawal of visual feedback, with progressively larger errors at higher contraction levels. Also, the lack of modulation in spectral frequency at higher force levels with visual feedback persisted in both the withdrawal and recall conditions. In terms of changes from the visual feedback condition, feedback withdrawal produced a greater difference between the paretic, nonparetic, and control legs than force recall. The overall results suggest improvements in force variability and structure from vision- to memory-guided force control in subacute stroke, despite decreased accuracy. Different sensory-motor memory retrieval mechanisms seem to be involved in the feedback withdrawal and force recall conditions, which deserves further study. NEW & NOTEWORTHY We demonstrate that in the subacute phase of stroke, force signals during low-level isometric knee extension become steadier, more broadband in spectral power, and more complex after removal of visual feedback. Larger force errors are produced when recalling target forces than immediately after withdrawing visual feedback. Although visual feedback offers better accuracy, it worsens force variability and structure in subacute stroke. The feedback withdrawal and force recall conditions seem to involve different memory retrieval mechanisms.
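The signal measures named above can be computed with numpy/scipy; the sketch below implements the coefficient of variation, constant error, relative 0-3 Hz spectral power, and a simplified sample entropy (the embedding and tolerance settings are the conventional m = 2, r = 0.2·SD; the synthetic force record is illustrative):

```python
import numpy as np
from scipy.signal import welch

def sampen(x, m=2, r_frac=0.2):
    """Simplified sample entropy with tolerance r = r_frac * std."""
    x = np.asarray(x, float); r = r_frac * x.std(); n = len(x)
    def count(mm):
        tpl = np.array([x[i:i + mm] for i in range(n - mm)])
        d = np.max(np.abs(tpl[:, None] - tpl[None, :]), axis=2)  # Chebyshev distances
        return np.sum(d <= r) - len(tpl)    # exclude self-matches
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def force_metrics(force, target, fs=100.0):
    """CV (%), constant error, relative 0-3 Hz power, and sample entropy."""
    cv = 100.0 * force.std() / force.mean()        # variability
    ce = force.mean() - target                     # accuracy (signed error)
    f, p = welch(force - force.mean(), fs=fs)
    rel_low = p[f <= 3.0].sum() / p.sum()          # share of power below 3 Hz
    return cv, ce, rel_low, sampen(force)

rng = np.random.default_rng(3)
sig = 30.0 + rng.normal(0, 0.8, 1000)   # synthetic 10-s force record at 100 Hz
print(force_metrics(sig, target=30.0))
```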
Low, Yen S.; Sedykh, Alexander; Rusyn, Ivan; Tropsha, Alexander
2017-01-01
Cheminformatics approaches such as Quantitative Structure Activity Relationship (QSAR) modeling have been used traditionally for predicting chemical toxicity. In recent years, high throughput biological assays have been increasingly employed to elucidate mechanisms of chemical toxicity and predict toxic effects of chemicals in vivo. The data generated in such assays can be considered as biological descriptors of chemicals that can be combined with molecular descriptors and employed in QSAR modeling to improve the accuracy of toxicity prediction. In this review, we discuss several approaches for integrating chemical and biological data for predicting biological effects of chemicals in vivo and compare their performance across several data sets. We conclude that while no method consistently shows superior performance, the integrative approaches rank consistently among the best yet offer enriched interpretation of models over those built with either chemical or biological data alone. We discuss the outlook for such interdisciplinary methods and offer recommendations to further improve the accuracy and interpretability of computational models that predict chemical toxicity. PMID:24805064
A three degree of freedom manipulator used for store separation wind tunnel test
NASA Astrophysics Data System (ADS)
Wei, R.; Che, B.-H.; Sun, C.-B.; Zhang, J.; Lu, Y.-Q.
2018-06-01
A three-degree-of-freedom manipulator used for store separation wind tunnel tests is presented. It is a mechatronic device with a small volume and a large torque capacity. The paper describes the design principles of wind tunnel test equipment and presents the transmission design, physical design, control system design, drive element selection calculation and verification, dynamics computation and static structural computation of the manipulator. To satisfy the design principles of wind tunnel test equipment, several design optimizations are made, including optimizing the structure of the drive elements and cabling, the fairing configuration, and the overall dimensions, to make the device more suitable for wind tunnel testing. Tests were conducted to verify the parameters of the manipulator. The results show that the device improves the load capacity from 100 Nm to 250 Nm and the control accuracy from 0.1° to 0.05° in pitch and yaw, and improves the load capacity from 10 Nm to 20 Nm and the control accuracy from 0.1° to 0.05° in roll.
NASA Astrophysics Data System (ADS)
Bu, Haifeng; Wang, Dansheng; Zhou, Pin; Zhu, Hongping
2018-04-01
An improved wavelet-Galerkin (IWG) method based on the Daubechies wavelet is proposed for reconstructing the dynamic responses of shear structures. The proposed method flexibly manages the wavelet resolution level according to the excitation, thereby avoiding the weakness of the wavelet-Galerkin multiresolution analysis (WGMA) method with respect to resolution and the requirement on external excitation. IWG is implemented in case studies involving single- and n-degree-of-freedom frame structures subjected to a prescribed discrete excitation. Results demonstrate that IWG performs better than WGMA in terms of accuracy and computational efficiency. Furthermore, a new parameter identification method based on IWG and an optimization algorithm is developed for shear frame structures, enabling simultaneous identification of structural parameters and excitation. Numerical results demonstrate that the proposed identification method is effective for shear frame structures.
Monitoring the refinement of crystal structures with ¹⁵N solid-state NMR shift tensor data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalakewich, Keyton; Eloranta, Harriet; Harper, James K.
The ¹⁵N chemical shift tensor is shown to be extremely sensitive to lattice structure and a powerful metric for monitoring density functional theory refinements of crystal structures. These refinements include lattice effects and are applied here to five crystal structures. All structures improve based on a better agreement between experimental and calculated ¹⁵N tensors, with an average improvement of 47.0 ppm. Structural improvement is further indicated by a decrease in forces on the atoms by 2–3 orders of magnitude and a greater similarity in atom positions to neutron diffraction structures. These refinements change bond lengths by more than the diffraction errors, including adjustments to X–Y and X–H bonds (X, Y = C, N, and O) of 0.028 ± 0.002 Å and 0.144 ± 0.036 Å, respectively. The acquisition of ¹⁵N tensors at natural abundance is challenging, and this limitation is overcome by improved ¹H decoupling in the FIREMAT method. This decoupling dramatically narrows linewidths, improves signal-to-noise by up to 317%, and significantly improves the accuracy of measured tensors. A total of 39 tensors are measured with shifts distributed over a range of more than 400 ppm. Overall, experimental ¹⁵N tensors are at least 5 times more sensitive to crystal structure than ¹³C tensors due to nitrogen's greater polarizability and larger range of chemical shifts.
Can verbal working memory training improve reading?
Banales, Erin; Kohnen, Saskia; McArthur, Genevieve
2015-01-01
The aim of the current study was to determine whether poor verbal working memory is associated with poor word reading accuracy because the former causes the latter, or the latter causes the former. To this end, we tested whether (a) verbal working memory training improves poor verbal working memory or poor word reading accuracy, and whether (b) reading training improves poor reading accuracy or verbal working memory in a case series of four children with poor word reading accuracy and verbal working memory. Each child completed 8 weeks of verbal working memory training and 8 weeks of reading training. Verbal working memory training improved verbal working memory in two of the four children, but did not improve their reading accuracy. Similarly, reading training improved word reading accuracy in all children, but did not improve their verbal working memory. These results suggest that the causal links between verbal working memory and reading accuracy may not be as direct as has been assumed.
Testing higher-order Lagrangian perturbation theory against numerical simulation. 1: Pancake models
NASA Technical Reports Server (NTRS)
Buchert, T.; Melott, A. L.; Weiss, A. G.
1993-01-01
We present results showing an improvement in the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of quasi-linear scales. The Lagrangian theory of gravitational instability of an Einstein-de Sitter dust cosmogony, investigated and solved up to the third order, is compared with numerical simulations. In this paper we study the dynamics of pancake models as a first step. In previous work the accuracy of several analytical approximations for modeling large-scale structure in the mildly non-linear regime was analyzed in the same way, allowing for direct comparison of the accuracy of the various approximations. In particular, the Zel'dovich approximation (hereafter ZA), as a subclass of the first-order Lagrangian perturbation solutions, was found to provide an excellent approximation to the density field in the mildly non-linear regime (i.e. up to a linear r.m.s. density contrast of sigma approximately 2). The performance of ZA in hierarchical clustering models can be greatly improved by truncating the initial power spectrum (smoothing the initial data). We here explore whether this approximation can be further improved with higher-order corrections in the displacement mapping from homogeneity. We study a single pancake model (truncated power spectrum with power index n = -1) using cross-correlation statistics employed in previous work. We find that for all statistical methods used, the higher-order corrections improve the results obtained for the first-order solution up to the stage when sigma (linear theory) is approximately 1. While this improvement can be seen on all spatial scales, later stages retain this feature only above a certain scale, which increases with time. However, third order is not much of an improvement over second order at any stage. The total breakdown of the perturbation approach is observed at the stage where sigma (linear theory) is approximately 2, which corresponds to the onset of hierarchical clustering. This success is found at considerably higher non-linearity than is usual for perturbation theory. Whether a truncation of the initial power spectrum in hierarchical models retains this improvement will be analyzed in forthcoming work.
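The first-order (Zel'dovich) displacement mapping that the higher-order corrections build on is simple to demonstrate in one dimension: x(q, t) = q + D(t)·ψ(q), with the Eulerian density following from mass conservation. A single-mode toy pancake (the amplitude and growth-factor values are invented for illustration):

```python
import numpy as np

# 1D Zel'dovich approximation: x(q, t) = q + D(t) * psi(q), growth factor D.
q = np.linspace(0, 2 * np.pi, 2000)    # Lagrangian coordinates
psi = -0.5 * np.sin(q)                 # single-mode displacement field (illustrative)

for D in (0.5, 1.0, 2.0):              # D = 2 is past shell crossing here
    x = q + D * psi
    # Density from mass conservation: rho / rho_bar = 1 / |dx/dq|.
    dxdq = np.gradient(x, q)
    print(f"D={D}: max density contrast ~ {np.max(1 / np.abs(dxdq)) - 1:.1f}")
```

As D grows, dx/dq approaches zero at the mode's center, the density contrast blows up, and a caustic (the pancake) forms, which is the regime where the perturbative description above breaks down.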
Automatic classification of protein structures using physicochemical parameters.
Mohan, Abhilash; Rao, M Divya; Sunderrajan, Shruthi; Pennathur, Gautam
2014-09-01
Protein classification is the first step to functional annotation; SCOP and Pfam are currently the most relevant protein classification schemes. However, the disproportion between the number of three-dimensional (3D) protein structures generated and their classification into relevant superfamilies/families emphasizes the need for automated classification schemes. Predicting the function of novel proteins based on sequence information alone has proven to be a major challenge. The present study focuses on the use of physicochemical parameters in conjunction with machine learning algorithms (Naive Bayes, Decision Trees, Random Forest and Support Vector Machines) to classify proteins into their respective SCOP superfamily/Pfam family using sequence-derived information. Spectrophores™, a 1D descriptor of the 3D molecular field surrounding a structure, was used as a benchmark against which to compare the performance of the physicochemical parameters. The machine learning algorithms were modified to select features based on information gain for each SCOP superfamily/Pfam family, and the effect of combining physicochemical parameters and spectrophores on classification accuracy (CA) was studied. Machine learning algorithms trained on the physicochemical parameters consistently classified SCOP superfamilies and Pfam families with a classification accuracy above 90%, while spectrophores achieved a CA of around 85%. Feature selection improved classification accuracy for machine learning algorithms based on both physicochemical parameters and spectrophores, while combining both attribute types resulted in a marginal loss of performance. Physicochemical parameters were able to classify proteins from both schemes with classification accuracies ranging from 90-96%. These results suggest the usefulness of this method for classifying proteins from amino acid sequences.
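A hedged sketch of this workflow, information-gain-style feature selection followed by a Random Forest, using scikit-learn with synthetic data in place of the real physicochemical parameters:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for sequence-derived physicochemical parameters.
X, y = make_classification(n_samples=600, n_features=60, n_informative=15,
                           n_classes=4, random_state=0)

# Information-gain-style selection (mutual information), then a Random Forest.
clf = make_pipeline(SelectKBest(mutual_info_classif, k=20),
                    RandomForestClassifier(n_estimators=200, random_state=0))
print(cross_val_score(clf, X, y, cv=5).mean())   # mean classification accuracy
```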
Peng, Jiajie; Zhang, Xuanshuo; Hui, Weiwei; Lu, Junya; Li, Qianqian; Liu, Shuhui; Shang, Xuequn
2018-03-19
Gene Ontology (GO) is one of the most popular bioinformatics resources. In the past decade, Gene Ontology-based gene semantic similarity has been used effectively to model gene-to-gene interactions in multiple research areas. However, most existing semantic similarity approaches rely only on GO annotations and structure, or incorporate only local interactions in the co-functional network. This may lead to inaccurate GO-based similarities resulting from the incomplete GO topology and gene annotations. We present NETSIM2, a new network-based method that allows researchers to measure GO-based gene functional similarities by considering the global structure of the co-functional network with a random walk with restart (RWR)-based method, and by selecting significant term pairs to suppress noise. Evaluation tests based on Enzyme Commission (EC) number-based groups of yeast and Arabidopsis show that NETSIM2 can enhance the accuracy of Gene Ontology-based gene functional similarity. Using NETSIM2 as an example, we found that the accuracy of semantic similarities can be significantly improved by effectively incorporating the global gene-to-gene interactions of the co-functional network, especially for species whose GO annotations are far from complete.
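The RWR step at the heart of such a method iterates p ← (1−r)·P·p + r·p₀ until convergence, where P is the column-normalized network and p₀ the seed gene; a minimal numpy version on an invented five-gene network:

```python
import numpy as np

def rwr(W, seed, restart=0.5, tol=1e-10):
    """Random walk with restart on a weighted network; returns the stationary vector."""
    P = W / W.sum(axis=0, keepdims=True)   # column-normalize the adjacency matrix
    p0 = np.zeros(W.shape[0]); p0[seed] = 1.0
    p = p0.copy()
    while True:
        p_next = (1 - restart) * P @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy co-functional network over 5 genes (symmetric edge weights, invented).
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
print(np.round(rwr(W, seed=0), 3))   # global affinity of every gene to gene 0
```

Because the restart term keeps probability anchored at the seed while the walk diffuses over all paths, the stationary vector reflects global rather than only local network structure.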
Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction
Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian
2017-01-01
Motivation: Loops are often vital for protein function; however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations, and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. Results: We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Availability and Implementation: Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. Contact: deane@stats.ox.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28453681
Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction.
Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian; Deane, Charlotte M
2017-05-01
Loops are often vital for protein function; however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations, and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. Contact: deane@stats.ox.ac.uk. Supplementary data are available at Bioinformatics online.
Zhang, Qinjin; Liu, Yancheng; Zhao, Youtao; Wang, Ning
2016-03-01
Multi-mode operation and transient stability are two problems that significantly affect flexible microgrids (MGs). This paper proposes a multi-mode operation control strategy for a flexible MG based on a three-layer hierarchical structure composed of autonomous, cooperative, and scheduling controllers. The autonomous controller regulates the performance of each single micro-source inverter. An adaptive sliding-mode direct voltage loop and an improved droop power loop based on virtual negative impedance are presented to enhance, respectively, the system's disturbance-rejection performance and the power sharing accuracy. The cooperative controller, which comprises secondary voltage/frequency control and phase synchronization control, is designed to eliminate the voltage/frequency deviations produced by the autonomous controller and to prepare for grid connection. The scheduling controller manages the power flow between the MG and the grid. The MG with the improved hierarchical control scheme can achieve seamless transitions from islanded to grid-connected mode and has good transient performance. In addition, the presented work also addresses power quality issues and improves the load power sharing accuracy between parallel VSIs. Finally, the transient performance and effectiveness of the proposed control scheme are evaluated by theoretical analysis and simulation results.
An improved PSO-SVM model for online recognition of defects in eddy current testing
NASA Astrophysics Data System (ADS)
Liu, Baoling; Hou, Dibo; Huang, Pingjie; Liu, Banteng; Tang, Huayi; Zhang, Wubo; Chen, Peihua; Zhang, Guangxin
2013-12-01
Accurate and rapid recognition of defects is essential for structural integrity and health monitoring of in-service devices using eddy current (EC) non-destructive testing. This paper introduces a novel model-free method that includes three main modules: a signal pre-processing module, a classifier module and an optimisation module. In the signal pre-processing module, a two-stage differential structure is proposed to suppress the lift-off fluctuation that can contaminate the EC signal. In the classifier module, a multi-class support vector machine (SVM) based on the one-against-one strategy is utilised for its good accuracy. In the optimisation module, the optimal parameters of the classifier are obtained by an improved particle swarm optimisation (IPSO) algorithm. The proposed IPSO technique improves the convergence of the basic PSO through the following strategies: nonlinear processing of the inertia weight, and the introduction of a black-hole model and a simulated-annealing model with extremum disturbance. The good generalisation ability of the IPSO-SVM model has been validated by adding additional specimens to the testing set. Experiments show that the proposed algorithm achieves higher recognition accuracy and efficiency than other well-known classifiers, and its advantages are more pronounced with smaller training sets, which contributes to online application.
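One of the named IPSO ingredients, the nonlinear inertia weight, can be sketched inside a standard PSO velocity update as follows; the concave decay law below is a common choice and only an assumed stand-in for the paper's exact formula, and the black-hole and simulated-annealing strategies are not reproduced.

    import numpy as np

    def nonlinear_inertia(t, t_max, w_start=0.9, w_end=0.4):
        # Nonlinear (quadratic) decay: large inertia early for exploration,
        # small inertia late for exploitation. Illustrative assumption.
        return w_end + (w_start - w_end) * (1.0 - t / t_max) ** 2

    def pso_step(x, v, pbest, gbest, t, t_max, c1=2.0, c2=2.0, rng=np.random):
        w = nonlinear_inertia(t, t_max)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v, v                    # new positions and velocities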
Zang, Pengxiao; Gao, Simon S; Hwang, Thomas S; Flaxel, Christina J; Wilson, David J; Morrison, John C; Huang, David; Li, Dengwang; Jia, Yali
2017-03-01
To improve optic disc boundary detection and peripapillary retinal layer segmentation, we propose an automated approach for structural and angiographic optical coherence tomography. The algorithm was performed on radial cross-sectional B-scans. The disc boundary was detected by searching for the position of Bruch's membrane opening, and retinal layer boundaries were detected using a dynamic programming-based graph search algorithm on each B-scan without the disc region. A comparison of the disc boundary using our method with that determined by manual delineation showed good accuracy, with an average Dice similarity coefficient ≥0.90 in healthy eyes and eyes with diabetic retinopathy and glaucoma. The layer segmentation accuracy in the same cases was on average less than one pixel (3.13 μm).
Zang, Pengxiao; Gao, Simon S.; Hwang, Thomas S.; Flaxel, Christina J.; Wilson, David J.; Morrison, John C.; Huang, David; Li, Dengwang; Jia, Yali
2017-01-01
To improve optic disc boundary detection and peripapillary retinal layer segmentation, we propose an automated approach for structural and angiographic optical coherence tomography. The algorithm was performed on radial cross-sectional B-scans. The disc boundary was detected by searching for the position of Bruch’s membrane opening, and retinal layer boundaries were detected using a dynamic programming-based graph search algorithm on each B-scan without the disc region. A comparison of the disc boundary using our method with that determined by manual delineation showed good accuracy, with an average Dice similarity coefficient ≥0.90 in healthy eyes and eyes with diabetic retinopathy and glaucoma. The layer segmentation accuracy in the same cases was on average less than one pixel (3.13 μm). PMID:28663830
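The Dice similarity coefficient used above to score agreement between automated and manually delineated disc boundaries reduces to a few lines given two binary masks; a minimal sketch:

    import numpy as np

    def dice(a, b):
        # 2 |A ∩ B| / (|A| + |B|) for two binary segmentation masks.
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0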
A New Hybrid Viscoelastic Soft Tissue Model based on Meshless Method for Haptic Surgical Simulation
Bao, Yidong; Wu, Dongmei; Yan, Zhiyuan; Du, Zhijiang
2013-01-01
This paper proposes a hybrid soft tissue model, consisting of a multilayer structure and many spheres, for a meshless surgical simulation system. To improve the accuracy of the model, tension is added to the three-parameter viscoelastic structure that connects the two spheres. Driven through a haptic device, the three-parameter viscoelastic model (TPM) produces accurate deformation and also has better stress-strain, stress relaxation and creep properties. Stress relaxation and creep formulas have been obtained by mathematical derivation. Compared with the experimental results for real pig liver reported by Evren et al. and Amy et al., the stress-strain, stress relaxation and creep curves of TPM are close to the experimental data of the real liver. Simulated results show that TPM has good real-time performance, stability and accuracy. PMID:24339837
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu
This work investigates the promise of a “bottom-up” extended ensemble framework for developing coarse-grained (CG) models that provide predictive accuracy and transferability for describing both structural and thermodynamic properties. We employ a force-matching variational principle to determine system-independent, i.e., transferable, interaction potentials that optimally model the interactions in five distinct heptane-toluene mixtures. Similarly, we employ a self-consistent pressure-matching approach to determine a system-specific pressure correction for each mixture. The resulting CG potentials accurately reproduce the site-site rdfs, the volume fluctuations, and the pressure equations of state that are determined by all-atom (AA) models for the five mixtures. Furthermore, we demonstrate that these CG potentials provide similar accuracy for additional heptane-toluene mixtures that were not included in their parameterization. Surprisingly, the extended ensemble approach improves not only the transferability but also the accuracy of the calculated potentials. Additionally, we observe that the required pressure corrections strongly correlate with the intermolecular cohesion of the system-specific CG potentials. Moreover, this cohesion correlates with the relative “structure” within the corresponding mapped AA ensemble. Finally, the appendix demonstrates that the self-consistent pressure-matching approach corresponds to minimizing an appropriate relative entropy.
Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor
Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng
2016-01-01
In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting improvement of the measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, the mathematical model is established from the structural model of the parallel three-dimensional force sensor, and the modeling and analysis of the mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the results show that the mechanically decoupled parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. A load calibration and data acquisition experiment system was built, and calibration experiments were performed. According to the calibration experiments, the measurement error is less than 2.86% and the coupling error is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors. PMID:27649194
Determining successional stage of temperate coniferous forests with Landsat satellite data
NASA Technical Reports Server (NTRS)
Fiorella, Maria; Ripple, William J.
1995-01-01
Thematic Mapper (TM) digital imagery was used to map forest successional stages and to evaluate spectral differences between old-growth and mature forests in the central Cascade Range of Oregon. Relative sun incidence values were incorporated into the successional stage classification to compensate for topographically induced variation. Relative sun incidence improved the classification accuracy of young successional stages, but did not improve the classification accuracy of older, closed-canopy forest classes or the overall accuracy. TM bands 1, 2, and 4; the normalized difference vegetation index (NDVI); and TM 4/3, 4/5, and 4/7 band ratio values for old-growth forests were found to be significantly lower than the values of mature forests (P less than or equal to 0.010). Wetness and the TM 4/5 and 4/7 band ratios all had low correlations with relative sun incidence (r^2 less than or equal to 0.16). The TM 4/5 band ratio was named the 'structural index' (SI) because of its ability to distinguish between mature and old-growth forests and its simplicity.
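The spectral quantities named above are simple per-pixel arithmetic on the TM band arrays; a minimal sketch, assuming float reflectance arrays (the small epsilon guard is an implementation detail, not from the study):

    import numpy as np

    def structural_index(tm4, tm5, eps=1e-6):
        # 'Structural index' as defined above: the TM 4/5 band ratio.
        return tm4 / np.maximum(tm5, eps)

    def ndvi(tm4, tm3, eps=1e-6):
        # Normalized difference vegetation index from TM 4 (NIR) and TM 3 (red).
        return (tm4 - tm3) / np.maximum(tm4 + tm3, eps)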
Optimization of Smart Structure for Improving Servo Performance of Hard Disk Drive
NASA Astrophysics Data System (ADS)
Kajiwara, Itsuro; Takahashi, Masafumi; Arisaka, Toshihiro
Head positioning accuracy of the hard disk drive must be improved to meet today's increasing performance demands. Vibration suppression of the arm in the hard disk drive is very important for enhancing the servo bandwidth of the head positioning system. In this study, smart structure technology is introduced into the hard disk drive to suppress the vibration of the head actuator. Smart structure technology is expected to contribute to the development of small, lightweight mechatronic devices with the required performance. First, the system is modeled using the finite element method and modal analysis. Next, the actuator location and the control system are simultaneously optimized using a genetic algorithm. The vibration control effect of the proposed vibration control mechanisms has been evaluated in simulations.
Ultra-precise micro-motion stage for optical scanning test
NASA Astrophysics Data System (ADS)
Chen, Wen; Zhang, Jianhuan; Jiang, Nan
2009-05-01
This study addresses the application of optical sensing technology in a 2D flexible hinge test stage. Optical fiber sensors exploit the unique properties of optical fiber, such as good electrical insulation, immunity to electromagnetic disturbance, spark-free operation and availability in flammable and explosive environments, and offer high accuracy, wide dynamic range and good repeatability; here such a sensor is applied to a 2D flexible hinge stage driven by PZT. Several micro-bending structures were designed to exploit the characteristics of the flexible hinge stage, and experiments were used to derive the optimal micro-bending tooth structure and the displacement range of the sensor under that structure. These experiments demonstrate that applying an optical fiber displacement sensor to a PZT-driven 2D flexible hinge stage substantially broadens the dynamic testing range and improves the sensitivity of the apparatus. Driving accuracy and positioning stability are enhanced as well.
Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu
2017-01-01
In order to improve the accuracy of the ultrasonic phased array focusing time delay, starting from an analysis of the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm is proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarize the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we compensated it: the compensated CIC filter's pass band is flatter, its transition band becomes steeper, and its stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385
Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu
2017-10-12
In order to improve the accuracy of the ultrasonic phased array focusing time delay, starting from an analysis of the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm is proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarize the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we compensated it: the compensated CIC filter's pass band is flatter, its transition band becomes steeper, and its stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
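For readers unfamiliar with the cascaded integrator-comb structure being parallelized above, a minimal serial sketch of an interpolating CIC filter follows; the three-stage, unit-delay configuration is an illustrative assumption, and the paper's 8× parallel decomposition and compensation filter are not reproduced here.

    import numpy as np

    def cic_interpolate(x, factor=8, stages=3):
        y = np.asarray(x, dtype=float)
        for _ in range(stages):              # comb sections at the input rate
            y = np.diff(y, prepend=0.0)
        up = np.zeros(len(y) * factor)
        up[::factor] = y                     # zero-stuffing upsampler
        for _ in range(stages):              # integrator sections at the output rate
            up = np.cumsum(up)
        return up / factor ** (stages - 1)   # normalize the DC gain R^(N-1)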
Prediction of β-turns in proteins from multiple alignment using neural network
Kaur, Harpreet; Raghava, Gajendra Pal Singh
2003-01-01
A neural network-based method has been developed for the prediction of β-turns in proteins by using multiple sequence alignment. Two feed-forward back-propagation networks with a single hidden layer are used: the first, sequence-to-structure network is trained on the multiple sequence alignment in the form of PSI-BLAST-generated position-specific scoring matrices. The initial predictions from the first network and the PSIPRED-predicted secondary structure are used as input to the second, structure-to-structure network to refine the predictions obtained from the first net. A significant improvement in prediction accuracy has been achieved by using the evolutionary information contained in the multiple sequence alignment. The final network yields an overall prediction accuracy of 75.5% when tested by sevenfold cross-validation on a set of 426 nonhomologous protein chains. The corresponding Qpred, Qobs, and Matthews correlation coefficient values are 49.8%, 72.3%, and 0.43, respectively, and are the best among all previously published β-turn prediction methods. The web server BetaTPred2 (http://www.imtech.res.in/raghava/betatpred2/) has been developed based on this approach. PMID:12592033
Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Giesy, Daniel P.
1998-01-01
An efficient and robust computational scheme is given for the calculation of the frequency response function of a large-order flexible system implemented with a linear, time-invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant, based on a model of the structure in normal mode coordinates. The computational time per frequency point of the new scheme is a linear function of system size, a significant improvement over traditional full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency-domain analysis of systems of much larger order than by traditional full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies but generally deteriorated at higher frequencies, with worst-case errors many orders of magnitude larger than the correct values.
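The central idea, that normal-mode structure decouples the system so each frequency point costs a sum over modes rather than a full-matrix solve, can be sketched as follows for the open-loop case; the variable names are illustrative.

    import numpy as np

    def modal_frf(omega, wn, zeta, phi_in, phi_out):
        # wn, zeta: modal frequencies and damping ratios (one entry per mode);
        # phi_in, phi_out: modal input/output participation vectors.
        # Each mode contributes independently, so the cost is O(n_modes)
        # per frequency point instead of a dense matrix factorization.
        s = 1j * omega
        den = s**2 + 2.0 * zeta * wn * s + wn**2
        return np.sum(phi_out * phi_in / den)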
Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang
2018-02-20
We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of the peak position of the cross-correlation and, therefore, the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. A strain of 3 με within the spatial resolution of 1 cm at a position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 με.
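A minimal sketch of the best-performing variant described above: zeros are padded on one side of the windowed spatial-domain data before the inverse FFT, and the cross-correlation peak is then located on the finer grid; windowing and scaling details of the actual OFDR system are omitted.

    import numpy as np

    def fine_spectrum(windowed, pad_factor=8):
        # One-sided zero padding before the inverse FFT interpolates the
        # spectrum by pad_factor without changing the spatial resolution.
        n = len(windowed)
        padded = np.concatenate([windowed, np.zeros(n * (pad_factor - 1))])
        return np.abs(np.fft.ifft(padded))

    def spectral_shift(meas, ref, pad_factor=8):
        # The lag of the cross-correlation peak, in original spectral bins,
        # is proportional to the strain within this window.
        xc = np.correlate(fine_spectrum(meas, pad_factor),
                          fine_spectrum(ref, pad_factor), mode="full")
        lag = np.argmax(xc) - len(xc) // 2
        return lag / pad_factor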
The application of ab initio calculations to molecular spectroscopy
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.
1989-01-01
The state of the art in ab initio molecular structure calculations is reviewed with an emphasis on recent developments, such as full configuration-interaction benchmark calculations and atomic natural orbital basis sets. It is found that new developments in methodology, combined with improvements in computer hardware, are leading to unprecedented accuracy in solving problems in spectroscopy.
The application of ab initio calculations to molecular spectroscopy
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.
1989-01-01
The state of the art in ab initio molecular structure calculations is reviewed, with an emphasis on recent developments such as full configuration-interaction benchmark calculations and atomic natural orbital basis sets. It is shown that new developments in methodology combined with improvements in computer hardware are leading to unprecedented accuracy in solving problems in spectroscopy.
Kirk M. Stueve; Ian W. Housman; Patrick L. Zimmerman; Mark D. Nelson; Jeremy B. Webb; Charles H. Perry; Robert A. Chastain; Dale D. Gormanson; Chengquan Huang; Sean P. Healey; Warren B. Cohen
2011-01-01
Accurate landscape-scale maps of forests and associated disturbances are critical to augment studies on biodiversity, ecosystem services, and the carbon cycle, especially in terms of understanding how the spatial and temporal complexities of damage sustained from disturbances influence forest structure and function. Vegetation change tracker (VCT) is a highly automated...
NASA Technical Reports Server (NTRS)
Knezovich, F. M.
1976-01-01
A modular, structured system of computer programs is presented utilizing earth and ocean dynamical data keyed to finitely defined parameters. The model is an assemblage of mathematical algorithms with an inherent capability of maturation with progressive improvements in observational data frequencies, accuracies and scopes. The EOM in its present state is a first-order approach to a geophysical model of the earth's dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Mary, E-mail: maryfeng@umich.ed; Moran, Jean M.; Koelling, Todd
2011-01-01
Purpose: Cardiac toxicity is an important sequela of breast radiotherapy. However, the relationship between dose to cardiac structures and subsequent toxicity has not been well defined, partially due to variations in substructure delineation, which can lead to inconsistent dose reporting and the failure to detect potential correlations. Here we have developed a heart atlas and evaluated its effect on contour accuracy and concordance. Methods and Materials: A detailed cardiac computed tomography scan atlas was developed jointly by cardiology, cardiac radiology, and radiation oncology. Seven radiation oncologists were recruited to delineate the whole heart, left main and left anterior descending interventricular branches, and right coronary arteries on four cases before and after studying the atlas. Contour accuracy was assessed by percent overlap with gold standard atlas volumes. The concordance index was also calculated. Standard radiation fields were applied. Doses to observer-contoured cardiac structures were calculated and compared with gold standard contour doses. Pre- and post-atlas values were analyzed using a paired t test. Results: The cardiac atlas significantly improved contour accuracy and concordance. Percent overlap and concordance index of observer-contoured cardiac and gold standard volumes improved 2.3-fold for all structures (p < 0.002). After application of the atlas, reported mean doses to the whole heart, left main artery, left anterior descending interventricular branch, and right coronary artery were within 0.1, 0.9, 2.6, and 0.6 Gy, respectively, of gold standard doses. Conclusions: This validated University of Michigan cardiac atlas may serve as a useful tool in future studies assessing cardiac toxicity and in clinical trials which include dose-volume constraints to the heart.
PREFMD: a web server for protein structure refinement via molecular dynamics simulations.
Heo, Lim; Feig, Michael
2018-03-15
Refinement of protein structure models is a long-standing problem in structural bioinformatics. Molecular dynamics-based methods have emerged as an avenue to achieve consistent refinement. The PREFMD web server implements an optimized protocol based on the method successfully tested in CASP11. Validation with recent CASP refinement targets shows consistent and more significant improvement in global structure accuracy over other state-of-the-art servers. PREFMD is freely available as a web server at http://feiglab.org/prefmd. Scripts for running PREFMD as a stand-alone package are available at https://github.com/feiglab/prefmd.git. feig@msu.edu. Supplementary data are available at Bioinformatics online.
Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones
Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon
2016-01-01
The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS correction projection (DGNSS-CP) method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angles provided in the position-related output of Android's LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%-60%, thereby reducing the existing error of 3-4 m to just 1 m. The proposed algorithm enables the position error to be corrected directly in software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the resulting improvement in performance are expected to be highly beneficial in terms of portability and cost saving. PMID:27322284
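The first step described above, forming the local line-of-sight unit vector from elevation and azimuth and rotating it into Earth-centered, Earth-fixed (ECEF) coordinates, is a standard east-north-up rotation; a sketch, with the receiver latitude/longitude arguments an assumption about the interface:

    import numpy as np

    def los_ecef(elev_deg, azim_deg, lat_deg, lon_deg):
        el, az = np.radians(elev_deg), np.radians(azim_deg)
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        enu = np.array([np.cos(el) * np.sin(az),      # east
                        np.cos(el) * np.cos(az),      # north
                        np.sin(el)])                  # up
        # Columns of R are the east, north, and up directions in ECEF.
        R = np.array([
            [-np.sin(lon), -np.sin(lat) * np.cos(lon), np.cos(lat) * np.cos(lon)],
            [ np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat) * np.sin(lon)],
            [ 0.0,          np.cos(lat),               np.sin(lat)],
        ])
        return R @ enu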
Improving clinical models based on knowledge extracted from current datasets: a new approach.
Mendes, D; Paredes, S; Rocha, T; Carvalho, P; Henriques, J; Morais, J
2016-08-01
Cardiovascular diseases (CVD) are the leading cause of death in the world, and prevention is recognized as a key intervention to counter this reality. In this context, although there are several models and scores currently used in clinical practice to assess the risk of a new cardiovascular event, they present some limitations. The goal of this paper is to improve CVD risk prediction by taking into account the current models as well as information extracted from real and recent datasets. The approach is based on a decision tree scheme in order to ensure the clinical interpretability of the model. An innovative optimization strategy is developed to adjust the decision tree thresholds (the rule structure is fixed) based on recent clinical datasets. A real dataset collected in the context of the National Registry on Acute Coronary Syndromes, Portuguese Society of Cardiology, is applied to validate this work. To assess the performance of the new approach, the metrics sensitivity, specificity and accuracy are used. The new approach achieves sensitivity, specificity and accuracy values of 80.52%, 74.19% and 77.27%, respectively, which represents an improvement of about 26% in relation to the accuracy of the original score.
Spatially-Resolved Hydraulic Conductivity Estimation Via Poroelastic Magnetic Resonance Elastography
McGarry, Matthew; Weaver, John B.; Paulsen, Keith D.
2015-01-01
Poroelastic magnetic resonance elastography is an imaging technique that could recover mechanical and hydrodynamical material properties of in vivo tissue. To date, mechanical properties have been estimated while hydrodynamical parameters have been assumed homogeneous with literature-based values. Estimating spatially-varying hydraulic conductivity would likely improve model accuracy and provide new image information related to a tissue’s interstitial fluid compartment. A poroelastic model was reformulated to recover hydraulic conductivity with more appropriate fluid-flow boundary conditions. Simulated and physical experiments were conducted to evaluate the accuracy and stability of the inversion algorithm. Simulations were accurate (property errors were < 2%) even in the presence of Gaussian measurement noise up to 3%. The reformulated model significantly decreased variation in the shear modulus estimate (p≪0.001) and eliminated the homogeneity assumption and the need to assign hydraulic conductivity values from literature. Material property contrast was recovered experimentally in three different tofu phantoms and the accuracy was improved through soft-prior regularization. A frequency-dependence in hydraulic conductivity contrast was observed suggesting that fluid-solid interactions may be more prominent at low frequency. In vivo recovery of both structural and hydrodynamical characteristics of tissue could improve detection and diagnosis of neurological disorders such as hydrocephalus and brain tumors. PMID:24771571
Recent enhancements to the GRIDGEN structured grid generation system
NASA Technical Reports Server (NTRS)
Steinbrenner, John P.; Chawner, John R.
1992-01-01
Significant enhancements are being implemented in GRIDGEN3D, the multiple-block, structured grid generation software. Automatic, point-to-point, interblock connectivity will be possible through the addition of the domain entity to GRIDBLOCK's block construction process. Also, the unification of GRIDGEN2D and GRIDBLOCK has begun with the addition of an edge grid point distribution capability to GRIDBLOCK. The geometric accuracy of surface grids, and the ease with which databases may be obtained, is being improved by adding support for standard computer-aided design formats (e.g., PATRAN Neutral and IGES files). Finally, volume grid quality was improved through the addition of new SOR algorithm features and the new hybrid control function type to GRIDGEN3D.
NASA Astrophysics Data System (ADS)
Becker, P.; Idelsohn, S. R.; Oñate, E.
2015-06-01
This paper describes a strategy to solve multi-fluid and fluid-structure interaction (FSI) problems using Lagrangian particles combined with a fixed finite element (FE) mesh. Our approach is an extension of the fluid-only PFEM-2 (Idelsohn et al., Eng Comput 30(2):2-2, 2013; Idelsohn et al., J Numer Methods Fluids, 2014) which uses explicit integration over the streamlines to improve accuracy. As a result, the convective term does not appear in the set of equations solved on the fixed mesh. Enrichments in the pressure field are used to improve the description of the interface between phases.
Enhancements to the SHARP Build System and NEK5000 Coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alex; Bennett, Andrew R.; Billings, Jay Jay
The SHARP project for the Department of Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program provides a multiphysics framework for coupled simulations of advanced nuclear reactor designs. It provides an overall coupling environment that utilizes custom interfaces to couple existing physics codes through a common spatial decomposition and a unique solution transfer component. As of this writing, SHARP couples neutronics, thermal hydraulics, and structural mechanics using PROTEUS, Nek5000, and Diablo, respectively. This report details two primary SHARP improvements regarding the Nek5000 and Diablo individual physics codes: (1) an improved Nek5000 coupling interface that lets SHARP achieve a vast increase in overall solution accuracy by manipulating the structure of the internal Nek5000 spatial mesh, and (2) the capability to seamlessly couple structural mechanics calculations into the framework through improvements to the SHARP build system. The Nek5000 coupling interface now uses a barycentric Lagrange interpolation method that takes the vertex-based power and density computed by the PROTEUS neutronics solver and maps them to the user-specified, general-order Nek5000 spectral element mesh. Before this work, SHARP handled this vertex-based solution transfer in an averaging-based manner. SHARP users can now achieve higher levels of accuracy by specifying an arbitrary Nek5000 spectral mesh order. This improvement takes the average percentage error between the PROTEUS power solution and the Nek5000 interpolated result down drastically, from over 23% to just above 2%, and maintains the correct power profile. We have integrated Diablo into the SHARP build system to facilitate the future coupling of structural mechanics calculations into SHARP. Previously, simulations involving Diablo were done in an iterative manner, required a large amount of manual work, and were left as a task for advanced users only. This report details a new Diablo build system implemented using GNU Autotools, mirroring much of the current SHARP build system and easing the use of structural mechanics calculations for end users of the SHARP multiphysics framework. It lets users easily build and use Diablo as a stand-alone simulation, as well as fully couple it with the other SHARP physics modules. The top-level SHARP build system was modified to allow Diablo to hook in directly. New dependency handlers were implemented to let SHARP users easily build the framework with these new simulation capabilities. The remainder of this report describes this work in full, with a detailed discussion of the overall design philosophy of SHARP, the new solution interpolation method introduced, and the Diablo integration work. We conclude with a discussion of possible future SHARP improvements that will serve to increase solution accuracy and framework capability.
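The barycentric Lagrange interpolation at the heart of the new coupling interface can be sketched in one dimension as follows; Nek5000 applies the same construction per spectral element in three dimensions, and the helper names here are illustrative.

    import numpy as np

    def bary_weights(x):
        # w_j = 1 / prod_{k != j} (x_j - x_k), precomputed once per node set.
        w = np.ones(len(x))
        for j in range(len(x)):
            w[j] = 1.0 / np.prod(np.delete(x[j] - x, j))
        return w

    def bary_interpolate(x, f, w, xq):
        # Second (true) barycentric formula; exact at the nodes themselves.
        diff = xq - x
        if np.any(diff == 0.0):
            return f[int(np.argmax(diff == 0.0))]
        t = w / diff
        return np.sum(t * f) / np.sum(t)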
Huang, Huajun; Xiang, Chunling; Zeng, Canjun; Ouyang, Hanbin; Wong, Kelvin Kian Loong; Huang, Wenhua
2015-12-01
We improved the geometrical modeling procedure for fast and accurate reconstruction of orthopedic structures. This procedure consists of medical image segmentation, three-dimensional geometrical reconstruction, and assignment of material properties. The patient-specific orthopedic structures reconstructed by this improved procedure can be used in virtual surgical planning, 3D printing of real orthopedic structures, and finite element analysis. Conventional modeling consists of image segmentation, geometrical reconstruction, mesh generation, and assignment of material properties. The present study modified the conventional method to streamline the software operating procedures. Patients' CT images of different bones were acquired and subsequently reconstructed to give models. The reconstruction procedures were three-dimensional image segmentation, modification of the edge length and quantity of meshes, and assignment of material properties according to the intensity of the gray value. We compared the performance of our procedure to the conventional modeling procedure in terms of software operating time, success rate and mesh quality. Our proposed framework has the following improvements in the geometrical modeling: (1) processing time (femur: 87.16 ± 5.90%; pelvis: 80.16 ± 7.67%; thoracic vertebra: 17.81 ± 4.36%; P < 0.05); (2) least volume reduction (femur: 0.26 ± 0.06%; pelvis: 0.70 ± 0.47%; thoracic vertebra: 3.70 ± 1.75%; P < 0.01); and (3) mesh quality in terms of aspect ratio (femur: 8.00 ± 7.38%; pelvis: 17.70 ± 9.82%; thoracic vertebra: 13.93 ± 9.79%; P < 0.05) and maximum angle (femur: 4.90 ± 5.28%; pelvis: 17.20 ± 19.29%; thoracic vertebra: 3.86 ± 3.82%; P < 0.05). Our proposed patient-specific geometrical modeling requires less operating time and workload, and generates orthopedic structures at a higher rate of success compared with the conventional method. It is expected to benefit the surgical planning of orthopedic structures with less operating time and high modeling accuracy.
Optimization design and analysis of the pavement planer scraper structure
NASA Astrophysics Data System (ADS)
Fang, Yuanbin; Sha, Hongwei; Yuan, Dajun; Xie, Xiaobing; Yang, Shibo
2018-03-01
Using LS-DYNA, a finite element model of the road milling machine scraper is established and a dynamic simulation is performed. Through optimization of the scraper structure and scraper angles, the optimal structure of the milling machine scraper is obtained, and the simulation results are verified. The results show that the scraper structure is improved such that the cemented carbide is located in the front part of the scraper substrate; compared with the working resistance before the improvement, the resistance curve is smoother and its peak value is smaller. The cutting front angle and the cutting back angle are optimized: with a cutting front angle of 6 degrees and a cutting back angle of 9 degrees, the resultant of the working resistance and the impact force is smallest. This verifies the accuracy of the simulation results and provides guidance for further optimization work.
RepeatsDB-lite: a web server for unit annotation of tandem repeat proteins.
Hirsh, Layla; Paladin, Lisanna; Piovesan, Damiano; Tosatto, Silvio C E
2018-05-09
RepeatsDB-lite (http://protein.bio.unipd.it/repeatsdb-lite) is a web server for the prediction of repetitive structural elements and units in tandem repeat (TR) proteins. TRs are a widespread but poorly annotated class of non-globular proteins carrying heterogeneous functions. RepeatsDB-lite extends the prediction to all TR types and strongly improves the performance both in terms of computational time and accuracy over previous methods, with precision above 95% for solenoid structures. The algorithm exploits an improved TR unit library derived from the RepeatsDB database to perform an iterative structural search and assignment. The web interface provides tools for analyzing the evolutionary relationships between units and manually refine the prediction by changing unit positions and protein classification. An all-against-all structure-based sequence similarity matrix is calculated and visualized in real-time for every user edit. Reviewed predictions can be submitted to RepeatsDB for review and inclusion.
Enhanced Impact Resistance of Three-Dimensional-Printed Parts with Structured Filaments.
Peng, Fang; Zhao, Zhiyang; Xia, Xuhui; Cakmak, Miko; Vogt, Bryan D
2018-05-09
Net-shape manufacture of customizable objects through three-dimensional (3D) printing offers tremendous promise for personalization to improve the fit, performance, and comfort associated with devices and tools used in our daily lives. However, the application of 3D printing in structural objects has been limited by their poor mechanical performance that manifests from the layer-by-layer process by which the part is produced. Here, this interfacial weakness is overcome using a structured, core-shell polymer filament where a polycarbonate (PC) core solidifies quickly to define the shape, whereas an olefin ionomer shell contains functionality (crystallinity and ionic) that strengthen the interface between the printed layers. This structured filament leads to improved dimensional accuracy and impact resistance in comparison to the individual components. The impact resistance from structured filaments containing 45 vol % shell can exceed 800 J/m. The origins of this improved impact resistance are probed using X-ray microcomputed tomography. Energy is dissipated by delamination of the shell from PC near the crack tip, whereas PC remains intact to provide stability to the part after impact. This structured filament provides tremendous improvements in the critical properties for manufacture and represents a major leap forward in the impact properties obtainable for 3D-printed parts.
2014-01-01
Background Modern radiation oncology demands a thorough understanding of gross and cross-sectional anatomy for diagnostic and therapeutic applications. Complex anatomic sites present challenges for learners and are not well-addressed in traditional postgraduate curricula. A multidisciplinary team (MDT) based head-and-neck gross and radiologic anatomy program for radiation oncology trainees was developed, piloted, and empirically assessed for efficacy and learning outcomes. Methods Four site-specific MDT head-and-neck seminars were implemented, each involving a MDT delivering didactic and case-based instruction, supplemented by cadaveric presentations. There was no dedicated contouring instruction. Pre- and post-testing were performed to assess knowledge, and ability to apply knowledge to the clinical setting as defined by accuracy of contouring. Paired analyses of knowledge pretests and posttests were performed by Wilcoxon matched-pair signed-rank test. Results Fifteen post-graduate trainees participated. A statistically significant (p < 0.001) mean absolute improvement of 4.6 points (17.03%) was observed between knowledge pretest and posttest scores. Contouring accuracy was analyzed quantitatively by comparing spatial overlap of participants’ pretest and posttest contours with a gold standard through the dice similarity coefficient. A statistically significant improvement in contouring accuracy was observed for 3 out of 20 anatomical structures. Qualitative and quantitative feedback revealed that participants were more confident at contouring and were enthusiastic towards the seminars. Conclusions MDT seminars were associated with improved knowledge scores and resident satisfaction; however, increased gross and cross-sectional anatomic knowledge did not translate into improvements in contouring accuracy. Further research should evaluate the impact of hands-on contouring sessions in addition to dedicated instructional sessions to develop competencies. PMID:24969509
D'Souza, Leah; Jaswal, Jasbir; Chan, Francis; Johnson, Marjorie; Tay, Keng Yeow; Fung, Kevin; Palma, David
2014-06-26
Modern radiation oncology demands a thorough understanding of gross and cross-sectional anatomy for diagnostic and therapeutic applications. Complex anatomic sites present challenges for learners and are not well-addressed in traditional postgraduate curricula. A multidisciplinary team (MDT) based head-and-neck gross and radiologic anatomy program for radiation oncology trainees was developed, piloted, and empirically assessed for efficacy and learning outcomes. Four site-specific MDT head-and-neck seminars were implemented, each involving a MDT delivering didactic and case-based instruction, supplemented by cadaveric presentations. There was no dedicated contouring instruction. Pre- and post-testing were performed to assess knowledge, and ability to apply knowledge to the clinical setting as defined by accuracy of contouring. Paired analyses of knowledge pretests and posttests were performed by Wilcoxon matched-pair signed-rank test. Fifteen post-graduate trainees participated. A statistically significant (p < 0.001) mean absolute improvement of 4.6 points (17.03%) was observed between knowledge pretest and posttest scores. Contouring accuracy was analyzed quantitatively by comparing spatial overlap of participants' pretest and posttest contours with a gold standard through the dice similarity coefficient. A statistically significant improvement in contouring accuracy was observed for 3 out of 20 anatomical structures. Qualitative and quantitative feedback revealed that participants were more confident at contouring and were enthusiastic towards the seminars. MDT seminars were associated with improved knowledge scores and resident satisfaction; however, increased gross and cross-sectional anatomic knowledge did not translate into improvements in contouring accuracy. Further research should evaluate the impact of hands-on contouring sessions in addition to dedicated instructional sessions to develop competencies.
SAbPred: a structure-based antibody prediction server
Dunbar, James; Krawczyk, Konrad; Leem, Jinwoo; Marks, Claire; Nowak, Jaroslaw; Regep, Cristian; Georges, Guy; Kelm, Sebastian; Popovic, Bojana; Deane, Charlotte M.
2016-01-01
SAbPred is a server that makes predictions of the properties of antibodies focusing on their structures. Antibody informatics tools can help improve our understanding of immune responses to disease and aid in the design and engineering of therapeutic molecules. SAbPred is a single platform containing multiple applications which can: number and align sequences; automatically generate antibody variable fragment homology models; annotate such models with estimated accuracy alongside sequence and structural properties including potential developability issues; predict paratope residues; and predict epitope patches on protein antigens. The server is available at http://opig.stats.ox.ac.uk/webapps/sabpred. PMID:27131379
Precision measurement of the three 2^3P_J helium fine structure intervals.
Zelevinsky, T; Farkas, D; Gabrielse, G
2005-11-11
The three 2^3P fine structure intervals of ^4He are measured at an improved accuracy that is sufficient to test two-electron QED theory and to determine the fine structure constant alpha to 14 parts in 10^9. A more accurate determination of alpha, to a precision higher than attained with the quantum Hall and Josephson effects, awaits the reconciliation of two inconsistent theoretical calculations now being compared term by term. A low-pressure helium discharge presents experimental uncertainties quite different from those of earlier measurements and allows direct measurement of light pressure shifts.
Entropy-based link prediction in weighted networks
NASA Astrophysics Data System (ADS)
Xu, Zhongqi; Pu, Cunlai; Ramiz Sharafat, Rajput; Li, Lunbo; Yang, Jian
2017-01-01
Information entropy has been proven to be an effective tool to quantify the structural importance of complex networks. In previous work (Xu et al., 2016), we measured the contribution of a path in link prediction with information entropy. In this paper, we further quantify the contribution of a path with both path entropy and path weight, and propose a weighted prediction index based on the contributions of paths, namely Weighted Path Entropy (WPE), to improve the prediction accuracy in weighted networks. Empirical experiments on six weighted real-world networks show that WPE achieves higher prediction accuracy than three typical weighted indices.
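A toy, heavily simplified stand-in for a weighted path-based link-prediction score in the spirit of WPE (the paper's exact path-entropy definition is not reproduced here), using networkx:

    import networkx as nx

    def weighted_path_score(G, u, v, max_len=3):
        # Each simple path between u and v contributes its total edge weight,
        # discounted by path length, so many short, heavy paths score high.
        score = 0.0
        for path in nx.all_simple_paths(G, u, v, cutoff=max_len):
            w = sum(G[a][b].get("weight", 1.0) for a, b in zip(path, path[1:]))
            score += w / 2.0 ** (len(path) - 1)
        return score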
Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method
NASA Astrophysics Data System (ADS)
Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu
2017-10-01
Due to the high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research areas. However, the error caused by the lever structure has a great influence on the profile measurement, thus this paper analyzes the error of high-precision large-range lever-type stylus profilometry. The errors are corrected by the Nelder-Mead Simplex method, and the results are verified by the spherical surface calibration. It can be seen that this method can effectively reduce the measurement error and improve the accuracy of the stylus profilometry in large-scale measurement.
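A minimal sketch of fitting correction parameters with the derivative-free Nelder-Mead simplex search via SciPy; the two-parameter correction model and the toy sphere-scan data are purely illustrative assumptions, not the profilometer's actual error model.

    import numpy as np
    from scipy.optimize import minimize

    def residual(params, z_measured, z_ideal):
        # Hypothetical correction model: gain and offset of the lever output.
        gain, offset = params
        return np.sum((z_measured * gain - offset - z_ideal) ** 2)

    z_ideal = np.linspace(0.0, 1.0, 50)           # ideal sphere profile (toy)
    z_meas = z_ideal * 1.02 + 0.01                # distorted reading (toy)
    result = minimize(residual, x0=[1.0, 0.0], args=(z_meas, z_ideal),
                      method="Nelder-Mead")       # simplex search, no gradients
    print(result.x)                               # fitted correction parameters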
Technical editing of research reports in biomedical journals.
Wager, Elizabeth; Middleton, Philippa
2008-10-08
Most journals try to improve their articles by technical editing processes such as proof-reading, editing to conform to 'house styles', grammatical conventions and checking accuracy of cited references. Despite the considerable resources devoted to technical editing, we do not know whether it improves the accessibility of biomedical research findings or the utility of articles. This is an update of a Cochrane methodology review first published in 2003. To assess the effects of technical editing on research reports in peer-reviewed biomedical journals, and to assess the level of accuracy of references to these reports. We searched The Cochrane Library Issue 2, 2007; MEDLINE (last searched July 2006); EMBASE (last searched June 2007) and checked relevant articles for further references. We also searched the Internet and contacted researchers and experts in the field. Prospective or retrospective comparative studies of technical editing processes applied to original research articles in biomedical journals, as well as studies of reference accuracy. Two review authors independently assessed each study against the selection criteria and assessed the methodological quality of each study. One review author extracted the data, and the second review author repeated this. We located 32 studies addressing technical editing and 66 surveys of reference accuracy. Only three of the studies were randomised controlled trials. A 'package' of largely unspecified editorial processes applied between acceptance and publication was associated with improved readability in two studies and improved reporting quality in another two studies, while another study showed mixed results after stricter editorial policies were introduced. More intensive editorial processes were associated with fewer errors in abstracts and references. Providing instructions to authors was associated with improved reporting of ethics requirements in one study and fewer errors in references in two studies, but no difference was seen in the quality of abstracts in one randomised controlled trial. Structuring generally improved the quality of abstracts, but increased their length. The reference accuracy studies showed a median citation error rate of 38% and a median quotation error rate of 20%. Surprisingly few studies have evaluated the effects of technical editing rigorously. However there is some evidence that the 'package' of technical editing used by biomedical journals does improve papers. A substantial number of references in biomedical articles are cited or quoted inaccurately.
Yang, Jing; Jin, Qi-Yu; Zhang, Biao; Shen, Hong-Bin
2016-08-15
Inter-residue contacts in proteins dictate the topology of protein structures. They are crucial for protein folding and structural stability. Accurate prediction of residue contacts, especially long-range contacts, is important to the quality of ab initio structure modeling, since such contacts can enforce strong restraints on structure assembly. In this paper, we present a new residue-residue contact predictor called R2C that combines machine learning-based and correlated mutation analysis-based methods, together with a two-dimensional Gaussian noise filter to enhance long-range residue contact prediction. Our results show that the outputs from the machine learning-based method are concentrated, with better performance on short-range contacts, while the predictions of the correlated mutation analysis-based approach are widespread, with higher accuracy on long-range contacts. An effective query-driven dynamic fusion strategy proposed here takes full advantage of the two different methods, resulting in an impressive overall accuracy improvement. We also show that the contact map obtained directly from the prediction model contains interesting Gaussian noise, which had not been discovered before. Different from recent studies that tried to further enhance the quality of the contact map by removing its transitive noise, we designed a new two-dimensional Gaussian noise filter, which was especially helpful for reinforcing long-range residue contact prediction. Tested on the recent CASP10/11 datasets, the overall top-L/5 accuracy of our final R2C predictor is 17.6%/15.5% higher than the pure machine learning-based method and 7.8%/8.3% higher than the correlated mutation analysis-based approach for long-range residue contact prediction. Availability: http://www.csbio.sjtu.edu.cn/bioinf/R2C/. Contact: hbshen@sjtu.edu.cn. Supplementary data are available at Bioinformatics online.
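As a simple stand-in for the two-dimensional Gaussian noise filter (whose actual design, targeted at the specific noise the authors identify, is not reproduced here), smoothing a predicted contact-probability map with a 2D Gaussian kernel looks like this:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def smooth_contact_map(raw_map, sigma=1.0):
        # raw_map: L x L predicted contact probabilities; sigma illustrative.
        return gaussian_filter(np.asarray(raw_map, dtype=float), sigma=sigma)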
A high order accurate finite element algorithm for high Reynolds number flow prediction
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at non-modest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
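Richardson extrapolation, used above to produce a higher-order reference solution for isolating truncation error, combines two mesh refinements of a method of known order p:

    def richardson(f_h, f_h2, p):
        # f_h, f_h2: solutions at mesh sizes h and h/2; p: order of accuracy.
        # The combination cancels the leading O(h^p) truncation-error term.
        return (2**p * f_h2 - f_h) / (2**p - 1)

    # Illustrative numbers for a second-order (p = 2) method converging to 1:
    print(richardson(0.9480, 0.9871, 2))   # approximately 1.0001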
Slicer Method Comparison Using Open-source 3D Printer
NASA Astrophysics Data System (ADS)
Ariffin, M. K. A. Mohd; Sukindar, N. A.; Baharudin, B. T. H. T.; Jaafar, C. N. A.; Ismail, M. I. S.
2018-01-01
Open-source 3D printers have become one of the popular choices for fabricating 3D models. This technology is easily accessible and low in cost. However, several studies have sought to improve the performance of this low-cost technology in terms of the accuracy of the finished parts. This study focuses on the choice of slicer between CuraEngine and Slic3r. The effect of the slicer was observed in terms of accuracy and surface visualization. The results show that if accuracy is the top priority, CuraEngine is the better option, as it contributes more accuracy and requires less filament than Slic3r. Slic3r may be very useful for complicated parts such as hanging structures, because its excess material acts as support material. The study provides a basic platform for users to decide which option to use when fabricating a 3D model.
Perceptual experience and posttest improvements in perceptual accuracy and consistency.
Wagman, Jeffrey B; McBride, Dawn M; Trefzger, Amanda J
2008-08-01
Two experiments investigated the relationship between perceptual experience (during practice) and posttest improvements in perceptual accuracy and consistency. Experiment 1 investigated the potential relationship between how often knowledge of results (KR) is provided during a practice session and posttest improvements in perceptual accuracy. Experiment 2 investigated the potential relationship between how often practice (PR) is provided during a practice session and posttest improvements in perceptual consistency. The results of both experiments are consistent with previous findings that perceptual accuracy improves only when practice includes KR and that perceptual consistency improves regardless of whether practice includes KR. In addition, the results showed that although there is a relationship between how often KR is provided during a practice session and posttest improvements in perceptual accuracy, there is no relationship between how often PR is provided during a practice session and posttest improvements in consistency.
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy in the math model, a maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.
Approximate techniques of structural reanalysis
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1974-01-01
A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle, thereby extending the range of applicability of the reanalysis technique. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques for a wide range of variations in the design variables.
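A minimal sketch of the second modification — using the first-order Taylor estimate to start an iteration that reuses the already-factored baseline stiffness — is given below; the iteration scheme and toy system are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def reanalyze(K0, dK, f, u0, du_dd, delta_d, n_iter=3):
    """Approximate response of a modified design K0 + dK: start from a
    first-order Taylor estimate, then iterate u <- K0^{-1}(f - dK u),
    reusing the factorization of the original (already analyzed) K0."""
    lu = lu_factor(K0)                    # factor the baseline stiffness once
    u = u0 + du_dd @ delta_d              # Taylor-series initial estimate
    for _ in range(n_iter):
        u = lu_solve(lu, f - dK @ u)
    return u

# Toy 4-DOF system: baseline stiffness, a design modification, sensitivities.
rng = np.random.default_rng(0)
K0 = np.diag([4.0] * 4) + 0.5 * np.eye(4, k=1) + 0.5 * np.eye(4, k=-1)
dK = 0.3 * np.eye(4)                      # stiffness change from design update
f = np.ones(4)
u0 = np.linalg.solve(K0, f)
du_dd = rng.normal(size=(4, 2)) * 0.01    # hypothetical response sensitivities
u = reanalyze(K0, dK, f, u0, du_dd, np.array([0.1, -0.2]))
print(np.linalg.norm(u - np.linalg.solve(K0 + dK, f)))   # small residual error
```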
Improved Motor Timing: Effects of Synchronized Metronome Training on Golf Shot Accuracy
Sommer, Marius; Rönnqvist, Louise
2009-01-01
This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. Twenty-six experienced male golfers participated (mean age 27 years; mean golf handicap 12.6). Pre- and post-test investigations of golf shots made with three different clubs were conducted using a golf simulator. The golfers were randomized into two groups: a SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing; the golfers in the Control group merely trained their golf swings during the same period. No differences between the two groups were found in the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4-week SMT showed evident motor timing improvements. Additionally, significant improvements in golf shot accuracy were found for the SMT group, with less variability in their performance. No such improvements were found for the golfers in the Control group. As with previous studies that used an SMT program, this study's results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf accuracy. Key points: This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. A randomized control group design was used. The 4-week SMT intervention showed significant improvements in motor timing and golf shot accuracy, and led to less variability. We conclude that this study's results provide further evidence that motor timing can be improved by SMT training and that such timing improvement also improves golf accuracy. PMID:24149608
Accuracy Assessment of Coastal Topography Derived from Uav Images
NASA Astrophysics Data System (ADS)
Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.
2016-06-01
To monitor coastal environments, Unmanned Aerial Vehicles (UAVs) are a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy is estimated by comparison with GNSS surveys. Two parameters are tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Finally, in this particular environment, the presence of very small water surfaces on the sand bank prevents any accuracy improvement when the spatial resolution of the images is refined (from 4.6 cm to 2 cm).
Xie, Yaoqin; Chao, Ming; Xing, Lei
2009-01-01
Purpose: To report a tissue feature-based image registration strategy with explicit inclusion of the differential motions of thoracic structures. Methods and Materials: The proposed technique started with auto-identification of a number of corresponding points with distinct tissue features. The tissue feature points were found using the scale-invariant feature transform (SIFT) method. The control point pairs were then sorted into different "colors" according to the organs in which they reside and were used to model the involved organs individually. A thin-plate spline (TPS) method was used to register each structure characterized by the control points of a given "color". The proposed technique was applied to a digital phantom case and to three lung and three liver cancer patients. Results: For the phantom case, a comparison with the conventional TPS method showed that registration accuracy was markedly improved when the differential motions of the lung and chest wall were taken into account. On average, the registration error and the standard deviation (SD) of the 15 points against the known ground truth were reduced from 3.0 mm to 0.5 mm and from 1.5 mm to 0.2 mm, respectively, when the new method was used. A similar level of improvement was achieved for the clinical cases. Conclusions: The segmented deformable approach provides a natural and logical solution for modeling discontinuous organ motions and greatly improves the accuracy and robustness of deformable registration. PMID:19545792
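The per-organ ("per-color") TPS idea can be sketched with SciPy's thin-plate-spline interpolator: one deformation is fitted per organ, so sliding structures deform independently. The landmark arrays below are toy stand-ins for the SIFT-derived control points:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def fit_organ_tps(src_pts, dst_pts):
    """Fit a thin-plate-spline map from source to target landmarks for one
    organ ('color'); returns a callable deformation for that organ only."""
    return RBFInterpolator(src_pts, dst_pts, kernel='thin_plate_spline')

# Toy landmark sets for two organs with different motion magnitudes.
lung_src = rng.random((30, 3)); lung_dst = lung_src + 0.02 * rng.random((30, 3))
wall_src = rng.random((25, 3)); wall_dst = wall_src + 0.005 * rng.random((25, 3))

warps = {"lung": fit_organ_tps(lung_src, lung_dst),
         "chest_wall": fit_organ_tps(wall_src, wall_dst)}
warped = warps["lung"](rng.random((5, 3)))   # warp points labeled as lung
```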
Super-resolution mapping using multi-viewing CHRIS/PROBA data
NASA Astrophysics Data System (ADS)
Dwivedi, Manish; Kumar, Vinay
2016-04-01
High-spatial-resolution Remote Sensing (RS) data provide detailed information which enables high-definition visual image analysis of earth surface features. These data sets also support improved information extraction at a fine scale. To improve the spatial resolution of coarser-resolution RS data, Super Resolution Reconstruction (SRR) techniques focused on multi-angular image sequences have become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m, in the hope of obtaining a better land cover classification. Various SR approaches were chosen for this study: Projection onto Convex Sets (POCS), Robust, Iterative Back Projection (IBP), Non-Uniform Interpolation and Structure-Adaptive Normalized Convolution (SANC). Subjective assessment through visual interpretation shows substantial improvement in land cover detail. Quantitative measures including peak signal-to-noise ratio and structural similarity are used for the evaluation of image quality. The SANC SR technique, using the Vandewalle algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then used to classify both the SRR data and data resampled to 6 m spatial resolution using bi-cubic interpolation. A comparative analysis between the classified bicubic-interpolated and SR-derived images of CHRIS/PROBA shows that the SR-derived classified data yield a significant improvement of 10-12% in overall accuracy. The results demonstrate that SR methods are able to improve the spatial detail of multi-angle images as well as the classification accuracy.
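The two quantitative measures named here are standard and available in scikit-image; a minimal sketch of how reconstructions might be scored against a reference (toy arrays, assumed unit data range, not the study's evaluation code):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(reference, reconstructed):
    """Return (PSNR, SSIM) for one super-resolved image vs. a reference."""
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
    ssim = structural_similarity(reference, reconstructed, data_range=1.0)
    return psnr, ssim

ref = np.random.rand(64, 64)                               # toy reference
rec = np.clip(ref + 0.05 * np.random.randn(64, 64), 0, 1)  # toy reconstruction
print(evaluate_sr(ref, rec))
```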
Performance of protein-structure predictions with the physics-based UNRES force field in CASP11.
Krupa, Paweł; Mozolewska, Magdalena A; Wiśniewska, Marta; Yin, Yanping; He, Yi; Sieradzan, Adam K; Ganzynkowicz, Robert; Lipska, Agnieszka G; Karczyńska, Agnieszka; Ślusarz, Magdalena; Ślusarz, Rafał; Giełdoń, Artur; Czaplewski, Cezary; Jagieła, Dawid; Zaborowski, Bartłomiej; Scheraga, Harold A; Liwo, Adam
2016-11-01
Participating as the Cornell-Gdansk group, we have used our physics-based coarse-grained UNited RESidue (UNRES) force field to predict protein structure in the 11th Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction (CASP11). Our methodology involved extensive multiplexed replica exchange simulations of the target proteins with a recently improved UNRES force field that provides better reproductions of the local structures of polypeptide chains. All simulations were started from fully extended polypeptide chains, and no external information was included in the simulation process except for weak restraints on secondary structure, to enable us to finish each prediction within the allowed 3-week time window. Because of the simplified UNRES representation of polypeptide chains, the use of enhanced sampling methods, code optimization and parallelization, and sufficient computational resources, we were able to treat, for the first time, all 55 human prediction targets with sizes from 44 to 595 amino acid residues, the average size being 251 residues. Complete structures of six single-domain proteins were predicted accurately, with the highest accuracy attained for target T0769, for which the Cα RMSD was 3.8 Å for 97 residues of the experimental structure. Correct structures were also predicted for 13 domains of multi-domain proteins with accuracy comparable to that of the best template-based modeling methods. With further improvements of the UNRES force field that are now underway, our physics-based coarse-grained approach to protein-structure prediction will eventually reach global prediction capacity and, consequently, reliability in simulating protein structure and dynamics that are important in biochemical processes. Availability: freely available on the web at http://www.unres.pl/. Contact: has5@cornell.edu.
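The replica exchange machinery mentioned here rests on a standard Metropolis swap criterion; a minimal generic sketch (not the UNRES multiplexed implementation; units assume k_B = 1, so betas and energies below are illustrative) is:

```python
import numpy as np

def try_swap(beta_i, beta_j, E_i, E_j, rng):
    """Metropolis test for exchanging configurations between two replicas at
    inverse temperatures beta_i, beta_j with potential energies E_i, E_j:
    accept with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or rng.random() < np.exp(delta)

rng = np.random.default_rng(0)
print(try_swap(1.0 / 300, 1.0 / 320, -150.0, -140.0, rng))  # toy values
```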
Mwangi, Benson; Ebmeier, Klaus P; Matthews, Keith; Steele, J Douglas
2012-05-01
Quantitative abnormalities of brain structure in patients with major depressive disorder have been reported at a group level for decades. However, these structural differences appear subtle in comparison with conventional radiologically defined abnormalities, with considerable inter-subject variability. Consequently, it has not been possible to readily identify scans from patients with major depressive disorder at an individual level. Recently, machine learning techniques such as relevance vector machines and support vector machines have been applied to predictive classification of individual scans with variable success. Here we describe a novel hybrid method, which combines machine learning with feature selection and characterization, with the latter aimed at maximizing the accuracy of machine learning prediction. The method was tested using a multi-centre dataset of T1-weighted 'structural' scans. A total of 62 patients with major depressive disorder and matched controls were recruited from referred secondary care clinical populations in Aberdeen and Edinburgh, UK. The generalization ability and predictive accuracy of the classifiers was tested using data left out of the training process. High prediction accuracy was achieved (~90%). While feature selection was important for maximizing high predictive accuracy with machine learning, feature characterization contributed only a modest improvement to relevance vector machine-based prediction (~5%). Notably, while the only information provided for training the classifiers was T1-weighted scans plus a categorical label (major depressive disorder versus controls), both relevance vector machine and support vector machine 'weighting factors' (used for making predictions) correlated strongly with subjective ratings of illness severity. These results indicate that machine learning techniques have the potential to inform clinical practice and research, as they can make accurate predictions about brain scan data from individual subjects. Furthermore, machine learning weighting factors may reflect an objective biomarker of major depressive disorder illness severity, based on abnormalities of brain structure.
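The abstract's pipeline — feature selection followed by kernel-machine classification evaluated on left-out data — can be sketched generically with scikit-learn. Everything below (feature counts, kernel, the value of k) is an illustrative assumption, not the authors' hybrid method:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in for voxel-wise features extracted from T1-weighted scans.
X, y = make_classification(n_samples=124, n_features=2000, n_informative=50,
                           random_state=0)

# Feature selection lives inside the pipeline so test folds remain unseen.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=200),
                    SVC(kernel='linear'))
print(cross_val_score(clf, X, y, cv=5).mean())   # left-out-data accuracy
```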
Automatic anatomy recognition via multiobject oriented active shape models.
Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A
2010-12-01
This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) a multiobject generalization of oriented active shape models (OASM) and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three-level dynamic programming algorithm: the first level operates at the pixel level and aims to find optimal oriented boundary segments between successive landmarks; the second operates at the landmark level and aims to find optimal locations for the landmarks; and the third operates at the object level and aims to find the optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and scale components) for the multiobject model that yields the smallest total boundary cost over all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of the pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number of objects and the objects' distribution and size in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%. Typically, a recognition accuracy of ≥ 90% yielded a TPVF ≥ 95% and an FPVF ≤ 0.5%. Over the three data sets and over all tested objects, in 97% of the cases the optimal solutions found by the proposed method constituted the true global optimum. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. A more spread-out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimum solutions.
Adaptive 3D single-block grids for the computation of viscous flows around wings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagmeijer, R.; Kok, J.C.
1996-12-31
A robust algorithm for the adaption of a 3D single-block structured grid suitable for the computation of viscous flows around a wing is presented and demonstrated by application to the ONERA M6 wing. The effects of grid adaption on the flow solution and on accuracy improvements are analyzed. Reynolds number variations are studied.
Using time series structural characteristics to analyze grain prices in food insecure countries
Davenport, Frank; Funk, Chris
2015-01-01
Two components of food security monitoring are accurate forecasts of local grain prices and the ability to identify unusual price behavior. We evaluated a method that can both facilitate forecasts of cross-country grain price data and identify dissimilarities in price behavior across multiple markets. This method, characteristic based clustering (CBC), identifies similarities among multiple time series based on structural characteristics in the data. Here, we conducted a simulation experiment to determine whether CBC can be used to improve the accuracy of maize price forecasts. We then compared forecast accuracies among clustered and non-clustered price series over a rolling time horizon. We found that the accuracy of forecasts on clusters of time series was equal to or worse than that of forecasts based on individual time series. However, in the following experiment we found that CBC was still useful for price analysis. We used the clusters to explore the similarity of price behavior among Kenyan maize markets. We found that price behavior in the isolated markets of Mandera and Marsabit has become increasingly dissimilar from markets in other Kenyan cities, and that these dissimilarities could not be explained solely by geographic distance. The structural isolation of Mandera and Marsabit that we find in this paper is supported by field studies on food security and market integration in Kenya. Our results suggest that a market with a unique price series (as measured by structural characteristics that differ from neighboring markets) may lack market integration and food security.
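Characteristic based clustering replaces each series by a short vector of structural characteristics and clusters those vectors. A minimal sketch follows; the specific characteristics and the Ward/hierarchical choice here are assumptions, not necessarily those used in the paper:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def characterize(series):
    """Summarize one price series by simple structural characteristics:
    linear trend slope, volatility, and lag-1 autocorrelation."""
    t = np.arange(len(series))
    slope = np.polyfit(t, series, 1)[0]
    vol = np.std(np.diff(series))
    ac1 = np.corrcoef(series[:-1], series[1:])[0, 1]
    return [slope, vol, ac1]

rng = np.random.default_rng(1)
prices = [np.cumsum(rng.normal(size=60)) + 100 for _ in range(8)]  # 8 markets
features = np.array([characterize(p) for p in prices])
features = (features - features.mean(0)) / features.std(0)   # standardize
labels = fcluster(linkage(features, method='ward'), t=3, criterion='maxclust')
print(labels)   # cluster membership per market
```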
NASA Astrophysics Data System (ADS)
Su, Chin-Kuo; Sung, Yu-Chi; Chang, Shuenn-Yih; Huang, Chao-Hsun
2007-09-01
Strong near-fault ground motion, usually caused by fault rupture and characterized by a pulse-like velocity waveform, often imparts dramatic instantaneous seismic energy to structures (Jadhav and Jangid 2006). Some reinforced concrete (RC) bridge columns, even those built according to ductile design principles, were damaged in the 1999 Chi-Chi earthquake. Thus, it is very important to evaluate the seismic response of an RC bridge column to improve its seismic design and prevent future damage. Nonlinear time history analysis using step-by-step integration is capable of tracing the dynamic response of a structure during the entire vibration period and can accommodate the pulsing waveform. However, the accuracy of the numerical results is very sensitive to the modeling of the nonlinear load-deformation relationship of the structural member. FEMA 273 and ATC-40 provide the modeling parameters for structural nonlinear analyses of RC beams and RC columns. They use three parameters to define the plastic rotation angles and a residual strength ratio to describe the nonlinear load-deformation relationship of an RC member. Structural nonlinear analyses are performed based on these parameters. This method provides a convenient way to obtain the nonlinear seismic responses of RC structures; however, the accuracy of the numerical solutions might be further improved. For this purpose, results from a previous study on modeling static pushover analyses of RC bridge columns (Sung et al. 2005) are adopted for the nonlinear time history analysis presented herein, to evaluate the structural responses excited by near-fault ground motion. To ensure the reliability of this approach, the numerical results were compared with experimental results, which confirm that the proposed approach is valid.
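Step-by-step integration of the kind used for such time history analyses is commonly implemented with the Newmark method. Below is a minimal linear single-degree-of-freedom sketch; a nonlinear analysis would additionally update member stiffness each step per the FEMA 273/ATC-40 load-deformation model, and all parameter values here are illustrative:

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, beta=0.25, gamma=0.5):
    """Newmark (average-acceleration) step-by-step integration of
    m*u'' + c*u' + k*u = f(t) for a linear SDOF oscillator."""
    n = len(f)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (f[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        rhs = (f[i + 1]
               + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                      + (0.5 / beta - 1.0) * a[i])
               + c * (gamma * u[i] / (beta * dt)
                      + (gamma / beta - 1.0) * v[i]
                      + dt * (gamma / (2 * beta) - 1.0) * a[i]))
        u[i + 1] = rhs / k_eff
        v[i + 1] = (gamma * (u[i + 1] - u[i]) / (beta * dt)
                    + (1.0 - gamma / beta) * v[i]
                    + dt * (1.0 - gamma / (2 * beta)) * a[i])
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
    return u, v, a

# Example: response to a short rectangular pulse (a crude pulse-like input).
dt, n = 0.01, 500
t = np.arange(n) * dt
f = np.where(t < 0.1, 100.0, 0.0)
u, v, a = newmark_sdof(m=1.0, c=0.5, k=400.0, f=f, dt=dt)
```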
Humans make efficient use of natural image statistics when performing spatial interpolation.
D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S
2013-12-16
Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
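The "local mean" heuristic against which human observers are compared is easy to state in code. The sketch below is an illustrative stand-in (function name, neighborhood radius, and toy image are assumptions, not the study's protocol), estimating a missing interior pixel from its neighborhood:

```python
import numpy as np

def local_mean_estimate(img, r, c, radius=2):
    """Estimate a missing pixel as the mean of its surrounding
    (2*radius+1)^2 neighborhood, excluding the pixel itself
    (interior pixels assumed)."""
    patch = img[r - radius:r + radius + 1,
                c - radius:c + radius + 1].astype(float)
    return (patch.sum() - patch[radius, radius]) / (patch.size - 1)

img = np.random.rand(32, 32)                      # toy "natural" image
print(local_mean_estimate(img, 10, 10), img[10, 10])
```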
Anandakrishnan, Ramu; Aguilar, Boris; Onufriev, Alexey V
2012-07-01
The accuracy of atomistic biomolecular modeling and simulation studies depends on the accuracy of the input structures. Preparing these structures for an atomistic modeling task, such as molecular dynamics (MD) simulation, can involve the use of a variety of different tools for: correcting errors, adding missing atoms, filling valences with hydrogens, predicting pK values for titratable amino acids, assigning predefined partial charges and radii to all atoms, and generating force field parameter/topology files for MD. Identifying, installing and effectively using the appropriate tools for each of these tasks can be difficult for novice users and time-consuming for experienced ones. H++ (http://biophysics.cs.vt.edu/) is a free open-source web server that automates the above key steps in the preparation of biomolecular structures for molecular modeling and simulations. H++ also performs extensive error and consistency checking, providing error/warning messages together with suggested corrections. In addition to numerous minor improvements, the latest version of H++ includes several new capabilities and options: fixing erroneous (flipped) side chain conformations for HIS, GLN and ASN, including a ligand in the input structure, processing nucleic acid structures, and generating a solvent box with a specified number of common ions for explicit solvent MD.
Mori, S
2014-05-01
To ensure accuracy in respiratory-gated treatment, X-ray fluoroscopic imaging is used to detect tumour position in real time. Detection accuracy is strongly dependent on image quality, particularly on positional differences between the patient and the treatment couch. We developed a new algorithm to improve the quality of images obtained in X-ray fluoroscopic imaging and report preliminary results. Two oblique X-ray fluoroscopic images were acquired using a dynamic flat panel detector (DFPD) for two patients with lung cancer. A weighting factor was applied to each column of the DFPD image, because most anatomical structures, as well as the treatment couch and port cover edge, are aligned in the superior-inferior direction when the patient lies on the treatment couch. The weighting factors for the respective columns were varied until the standard deviation of the pixel values within the image region was minimized. Once the weighting factors were calculated, the quality of the DFPD image was improved by applying the factors to the multiframe images. Applying the image-processing algorithm produced a substantial improvement in image quality, and the image contrast was increased. The treatment couch and irradiation port edge, which are not related to the patient's position, were removed. The average image-processing time was 1.1 ms, showing that this fast image processing can be applied to real-time tumour-tracking systems. These findings indicate that the image-processing algorithm improves image quality in patients with lung cancer and successfully removes objects not related to the patient. Our image-processing algorithm might be useful in improving gated-treatment accuracy.
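The paper searches for per-column weights that minimize the in-region standard deviation. As a rough, hypothetical approximation of that idea (not the authors' optimizer), the sketch below equalizes the column means of a reference frame and applies the resulting factors to a multiframe stack:

```python
import numpy as np

def column_weights(frame, eps=1e-6):
    """Per-column weighting factors chosen so every column of the reference
    frame has the same mean; with SI-aligned static structures (couch edge,
    port cover), this is a cheap proxy for minimizing the in-region std."""
    col_means = frame.mean(axis=0)
    return frame.mean() / (col_means + eps)

def apply_weights(frames, w):
    """Apply factors, computed once, to every frame of a multiframe run.
    frames has shape (time, rows, cols)."""
    return frames * w[np.newaxis, np.newaxis, :]

# Toy frames with a column-wise artifact standing in for couch/port edges.
artifact = np.linspace(0.0, 1.0, 256)
ref = np.random.rand(256, 256) + artifact[np.newaxis, :]
movie = np.random.rand(10, 256, 256) + artifact[None, None, :]
corrected = apply_weights(movie, column_weights(ref))
```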
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin
2017-08-01
Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit a bridge deck, due to the low sensitivity of structural responses to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with stable average value (DFS-SAV). Secondly, the reasonable signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (||x||_2^2) defined in classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam were performed to assess the accuracy and feasibility of the proposed method. The illustrated results show that the moving forces can be accurately identified with strong robustness. Some related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
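Classical Tikhonov regularization, the baseline the paper improves on, solves a penalized least-squares problem x = argmin ||A x - b||_2^2 + lam ||x||_2^2. A minimal sketch follows (toy transfer matrix; the paper's moving-average penalty replaces the plain ||x||_2^2 term):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov solution via the regularized normal equations:
    (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))        # toy response-to-force transfer matrix
x_true = np.ones(50)                  # a DFS-SAV-like flat force history
b = A @ x_true + 0.05 * rng.normal(size=200)
print(np.linalg.norm(tikhonov(A, b, 1e-2) - x_true))   # small recovery error
```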
Wu, Guosheng; Robertson, Daniel H; Brooks, Charles L; Vieth, Michal
2003-10-01
The influence of various factors on the accuracy of protein-ligand docking is examined. The factors investigated include the role of a grid representation of protein-ligand interactions, the initial ligand conformation and orientation, the sampling rate of the energy hyper-surface, and the final minimization. A representative docking method is used to study these factors, namely, CDOCKER, a molecular dynamics (MD) simulated-annealing-based algorithm. A major emphasis in these studies is to compare the relative performance and accuracy of various grid-based approximations to explicit all-atom force field calculations. In these docking studies, the protein is kept rigid while the ligands are treated as fully flexible and a final minimization step is used to refine the docked poses. A docking success rate of 74% is observed when an explicit all-atom representation of the protein (full force field) is used, while a lower accuracy of 66-76% is observed for grid-based methods. All docking experiments considered a 41-member protein-ligand validation set. A significant improvement in accuracy (76 vs. 66%) for the grid-based docking is achieved if the explicit all-atom force field is used in a final minimization step to refine the docking poses. Statistical analysis shows that even lower-accuracy grid-based energy representations can be effectively used when followed with full force field minimization. The results of these grid-based protocols are statistically indistinguishable from the detailed atomic dockings and provide up to a sixfold reduction in computation time. For the test case examined here, improving the docking accuracy did not necessarily enhance the ability to estimate binding affinities using the docked structures.
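A grid-based energy representation replaces explicit atom-atom sums with a precomputed lattice queried by interpolation. A minimal trilinear lookup sketch (illustrative only, not CDOCKER's grid code; grid spacing, origin, and values are toy assumptions) is:

```python
import numpy as np

def trilinear(grid, spacing, origin, p):
    """Look up an energy at point p from a precomputed 3D grid by trilinear
    interpolation over the eight surrounding grid nodes."""
    f = (np.asarray(p, dtype=float) - origin) / spacing  # fractional coords
    i = np.floor(f).astype(int)
    t = f - i
    e = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                e += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return e

grid = np.random.rand(20, 20, 20)     # toy potential grid
print(trilinear(grid, spacing=0.5, origin=np.zeros(3), p=[3.2, 4.7, 1.1]))
```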
Li, Hua; Noel, Camille; Chen, Haijian; Harold Li, H.; Low, Daniel; Moore, Kevin; Klahr, Paul; Michalski, Jeff; Gay, Hiram A.; Thorstad, Wade; Mutic, Sasa
2012-01-01
Purpose: Severe artifacts in kilovoltage-CT simulation images caused by large metallic implants can significantly degrade the conspicuity and apparent CT Hounsfield number of targets and anatomic structures, jeopardize the confidence of anatomical segmentation, and introduce inaccuracies into the radiation therapy treatment planning process. This study evaluated the performance of the first commercial orthopedic metal artifact reduction function (O-MAR) for radiation therapy, and investigated its clinical applications in treatment planning. Methods: Both phantom and clinical data were used for the evaluation. The CIRS electron density phantom with known physical (and electron) density plugs and removable titanium implants was scanned on a Philips Brilliance Big Bore 16-slice CT simulator. The CT Hounsfield numbers of density plugs on both uncorrected and O-MAR corrected images were compared. Treatment planning accuracy was evaluated by comparing simulated dose distributions computed using the true density images, uncorrected images, and O-MAR corrected images. Ten CT image sets of patients with large hip implants were processed with the O-MAR function and evaluated by two radiation oncologists using a five-point score for overall image quality, anatomical conspicuity, and CT Hounsfield number accuracy. By utilizing the same structure contours delineated from the O-MAR corrected images, clinical IMRT treatment plans for five patients were computed on the uncorrected and O-MAR corrected images, respectively, and compared. Results: Results of the phantom study indicated that CT Hounsfield number accuracy and noise were improved on the O-MAR corrected images, especially for images with bilateral metal implants. The γ pass rates of the simulated dose distributions computed on the uncorrected and O-MAR corrected images referenced to those of the true densities were higher than 99.9% (even when using 1% and 3 mm distance-to-agreement criterion), suggesting that dose distributions were clinically identical. In all patient cases, radiation oncologists rated O-MAR corrected images as higher quality. Formerly obscured critical structures were able to be visualized. The overall image quality and the conspicuity in critical organs were significantly improved compared with the uncorrected images: overall quality score (1.35 vs 3.25, P = 0.0022); bladder (2.15 vs 3.7, P = 0.0023); prostate and seminal vesicles/vagina (1.3 vs 3.275, P = 0.0020); rectum (2.8 vs 3.9, P = 0.0021). The noise levels of the selected ROIs were reduced from 93.7 to 38.2 HU. On most cases (8/10), the average CT Hounsfield numbers of the prostate/vagina on the O-MAR corrected images were closer to the referenced value (41.2 HU, an average measured from patients without metal implants) than those on the uncorrected images. High γ pass rates of the five IMRT dose distribution pairs indicated that the dose distributions were not significantly affected by the CT image improvements. Conclusions: Overall, this study indicated that the O-MAR function can remarkably reduce metal artifacts and improve both CT Hounsfield number accuracy and target and critical structure visualization. 
Although there was no significant impact of the O-MAR algorithm on the calculated dose distributions, we suggest that O-MAR corrected images are more suitable for the entire treatment planning process by offering better anatomical structure visualization, improving radiation oncologists’ confidence in target delineation, and by avoiding subjective density overrides of artifact regions on uncorrected images. PMID:23231300
Feinstein, Wei P; Brylinski, Michal
2015-01-01
Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of the search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size; however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of the search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times larger than the radius of gyration of the docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1% (10%) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of the target proteins. This fully automated procedure can be used to optimize docking protocols in order to improve ranking accuracy in production virtual screening simulations. Importantly, the optimized search space systematically yields better results than the default method not only for experimental pockets, but also for those predicted from protein structures. A script for calculating the optimal docking box size is freely available at www.brylinski.org/content/docking-box-size. Graphical abstract: We developed a procedure to optimize the box size in molecular docking calculations. The left panel shows the predicted binding pose of NADP (green sticks) compared to the experimental complex structure of human aldose reductase (blue sticks) using the default protocol; the right panel shows the docking accuracy using the optimized box size.
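The headline recipe — box edge proportional to the ligand's radius of gyration with a factor of 2.9 — can be written compactly. This sketch assumes a cubic box and unit atom masses unless masses are given; the authors' published script may differ in detail:

```python
import numpy as np

def optimal_box_edge(coords, masses=None, scale=2.9):
    """Docking box edge suggested by the study: scale times the radius of
    gyration of the docked compound (cubic box assumed here)."""
    coords = np.asarray(coords, dtype=float)
    w = np.ones(len(coords)) if masses is None else np.asarray(masses, float)
    center = np.average(coords, axis=0, weights=w)
    rg = np.sqrt(np.average(np.sum((coords - center) ** 2, axis=1), weights=w))
    return scale * rg

ligand = np.random.rand(24, 3) * 8.0   # toy ligand coordinates in Angstroms
print(optimal_box_edge(ligand))
```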
The Generalized Born solvation model: What is it?
NASA Astrophysics Data System (ADS)
Onufriev, Alexey
2004-03-01
Implicit solvation models provide, for many applications, an effective way of describing the electrostatic effects of aqueous solvation. Here we outline the main approximations behind the popular Generalized Born solvation model. We show how its accuracy, relative to the Poisson-Boltzmann treatment, can be significantly improved in a computationally inexpensive manner to make the model useful in the studies of large-scale conformational transitions at the atomic level. The improved model is tested in a molecular dynamics simulation of folding of a 46-residue (three helix bundle) protein. Starting from an extended structure at 450 K, the protein folds to the lowest energy conformation within 6 ns of simulation time, and the predicted structure differs from the native one by 2.4 Å (backbone RMSD).
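For reference, the pairwise GB polarization energy usually takes the Still et al. form E = -1/2 (1/eps_in - 1/eps_out) sum_ij q_i q_j / f_GB, with f_GB = sqrt(r^2 + R_i R_j exp(-r^2 / (4 R_i R_j))). The sketch below assumes the effective Born radii are already known and uses illustrative units (charges in e, distances in Å; the constant converting to kcal/mol is omitted):

```python
import numpy as np

def gb_energy(q, r_born, coords, eps_in=1.0, eps_out=78.5):
    """Pairwise Generalized Born polarization energy (Still et al. form);
    the i == j terms reproduce the Born self-energies q_i^2 / R_i."""
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
    RiRj = np.outer(r_born, r_born)
    f_gb = np.sqrt(d2 + RiRj * np.exp(-d2 / (4.0 * RiRj)))
    return pref * np.sum(np.outer(q, q) / f_gb)

q = np.array([0.4, -0.4, 0.2])                       # toy partial charges (e)
R = np.array([1.5, 1.7, 1.2])                        # effective Born radii (A)
xyz = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(gb_energy(q, R, xyz))
```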
Automatically identifying health outcome information in MEDLINE records.
Demner-Fushman, Dina; Few, Barbara; Hauser, Susan E; Thoma, George
2006-01-01
Understanding the effect of a given intervention on the patient's health outcome is one of the key elements in providing optimal patient care. This study presents a methodology for automatic identification of outcomes-related information in medical text and evaluates its potential in satisfying clinical information needs related to health care outcomes. An annotation scheme based on an evidence-based medicine model for critical appraisal of evidence was developed and used to annotate 633 MEDLINE citations. Textual, structural, and meta-information features essential to outcome identification were learned from the created collection and used to develop an automatic system. Accuracy of automatic outcome identification was assessed in an intrinsic evaluation and in an extrinsic evaluation, in which ranking of MEDLINE search results obtained using PubMed Clinical Queries relied on identified outcome statements. The accuracy and positive predictive value of outcome identification were calculated. Effectiveness of the outcome-based ranking was measured using mean average precision and precision at rank 10. Automatic outcome identification achieved 88% to 93% accuracy. The positive predictive value of individual sentences identified as outcomes ranged from 30% to 37%. Outcome-based ranking improved retrieval accuracy, tripling mean average precision and achieving 389% improvement in precision at rank 10. Preliminary results in outcome-based document ranking show potential validity of the evidence-based medicine-model approach in timely delivery of information critical to clinical decision support at the point of service.
A systematic review of the PTSD Checklist's diagnostic accuracy studies using QUADAS.
McDonald, Scott D; Brown, Whitney L; Benesek, John P; Calhoun, Patrick S
2015-09-01
Despite the popularity of the PTSD Checklist (PCL) as a clinical screening test, there has been no comprehensive quality review of studies evaluating its diagnostic accuracy. A systematic quality assessment of 22 diagnostic accuracy studies of the English-language PCL using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) assessment tool was conducted to examine (a) the quality of diagnostic accuracy studies of the PCL, and (b) whether quality has improved since the 2003 STAndards for the Reporting of Diagnostic accuracy studies (STARD) initiative regarding reporting guidelines for diagnostic accuracy studies. Three raters independently applied the QUADAS tool to each study, and a consensus among the 4 authors is reported. Findings indicated that although studies generally met standards in several quality areas, there is still room for improvement. Areas for improvement include establishing representativeness, adequately describing clinical and demographic characteristics of the sample, and presenting better descriptions of important aspects of test and reference standard execution. Only 2 studies met each of the 14 quality criteria. In addition, study quality has not appreciably improved since the publication of the STARD Statement in 2003. Recommendations for the improvement of diagnostic accuracy studies of the PCL are discussed.
NASA Astrophysics Data System (ADS)
Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin
2016-09-01
Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed during each iterative process could help to improve material removal accuracy. The removal function correction principle can effectively compensate for the removal function deviation between the actual figuring and simulated processes, while experiments indicate that material removal accuracy decreases with long machining times, so removing only a small amount of material in each iterative process is suggested. However, this introduces more clamping and measuring steps, which also generate machining errors and suppress the improvement of material removal accuracy. To address this, a free-measurement iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ 100-mm Zerodur planar workpiece is performed, which shows that, in a similar figuring time, three free-measurement iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.
2015-08-10
Only search-snippet fragments of this abstract survive. They describe work on a PANI/MWCNT composite film sensor, fabricated by a frit compression technique, intended for structural health monitoring of cantilever beam vibrations representative of the main barrel of a tank, along with morphological characterization of the film and measurement of its electrical resistance.
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1989-01-01
The internal structure of the MHOST finite element program, designed for 3-D inelastic analysis of gas turbine hot section components, is discussed. The computer code is the first implementation of the mixed iterative solution strategy, which improves efficiency and accuracy over the conventional finite element method. The control structure of the program is covered, along with the data storage scheme, the memory allocation procedure, and the file handling facilities, including the read and/or write sequences.
g Factor of Light Ions for an Improved Determination of the Fine-Structure Constant.
Yerokhin, V A; Berseneva, E; Harman, Z; Tupitsyn, I I; Keitel, C H
2016-03-11
A weighted difference of the g factors of the H- and Li-like ions of the same element is theoretically studied and optimized in order to maximize the cancellation of nuclear effects between the two charge states. We show that this weighted difference and its combination for two different elements can be used to extract a value for the fine-structure constant from near-future bound-electron g factor experiments with an accuracy competitive with or better than the present literature value.
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures
Theobald, Douglas L.; Wuttke, Deborah S.
2008-01-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907
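The contrast with ordinary least-squares superposition is easiest to see in a weighted Kabsch step: down-weighted (variable) atoms contribute less to the fitted rotation. The sketch below takes the weights as given, whereas THESEUS estimates them by maximum likelihood:

```python
import numpy as np

def weighted_superpose(X, Y, w):
    """Weighted least-squares (Kabsch) superposition of X onto Y:
    center with weighted means, fit the rotation by SVD of the weighted
    covariance, and guard against improper (reflection) solutions."""
    w = w / w.sum()
    Xc = X - np.average(X, axis=0, weights=w)
    Yc = Y - np.average(Y, axis=0, weights=w)
    H = (w[:, None] * Xc).T @ Yc
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return Xc @ R.T, Yc            # rotated X and centered Y

X = np.random.rand(50, 3)
Y = X @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + 5.0  # rotated, shifted
w = np.ones(50); w[40:] = 0.1      # down-weight a "variable" tail region
Xr, Yc = weighted_superpose(X, Y, w)
print(np.sqrt(((Xr - Yc) ** 2).sum(1).mean()))   # near-zero RMSD
```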
SARA-Coffee web server, a tool for the computation of RNA sequence and structure multiple alignments
Di Tommaso, Paolo; Bussotti, Giovanni; Kemena, Carsten; Capriotti, Emidio; Chatzou, Maria; Prieto, Pablo; Notredame, Cedric
2014-01-01
This article introduces the SARA-Coffee web server; a service allowing the online computation of 3D structure based multiple RNA sequence alignments. The server makes it possible to combine sequences with and without known 3D structures. Given a set of sequences SARA-Coffee outputs a multiple sequence alignment along with a reliability index for every sequence, column and aligned residue. SARA-Coffee combines SARA, a pairwise structural RNA aligner with the R-Coffee multiple RNA aligner in a way that has been shown to improve alignment accuracy over most sequence aligners when enough structural data is available. The server can be accessed from http://tcoffee.crg.cat/apps/tcoffee/do:saracoffee. PMID:24972831
NASA Astrophysics Data System (ADS)
Sycheva, Elena A.; Vasilev, Aleksandr S.; Lashmanov, Oleg U.; Korotaev, Valery V.
2017-06-01
The article is devoted to the optimization of optoelectronic systems for monitoring the spatial position of objects. The probabilistic characteristics of detecting an active structured mark against a random noisy background are investigated. The developed computer model and the results of the study allow estimation of the probabilistic characteristics of detecting a complex structured mark on a random gradient background, as well as the error in the spatial coordinates. The results make it possible to improve the accuracy of measuring the object's coordinates. Based on the research, recommendations are given on the choice of parameters of the optimal mark structure for use in optical-electronic systems for monitoring the spatial position of large-sized structures.
NASA Astrophysics Data System (ADS)
Liu, H.; Dong, H.; Liu, Z.; Ge, J.; Bai, B.; Zhang, C.
2017-10-01
The proton precession magnetometer with a single sensor is commonly used in geomagnetic observation and magnetic anomaly detection. Due to technological limitations, the measurement accuracy is restricted by several factors such as sensor performance, frequency measurement precision, and instability of the polarization module. To improve the anti-interference ability, an Overhauser magnetic gradiometer with a dual-sensor structure was designed. An alternative design of a geomagnetic sensor with a differential dual-coil structure is presented. A multi-channel frequency measurement algorithm is proposed to increase the measurement accuracy, and a silicon oscillator is adopted to resolve the instability of the polarization system. This paper briefly discusses the design and development of the gradiometer and compares the data recorded by this instrument with a commonly used commercial Overhauser magnetometer on the world market. The proposed gradiometer recorded the Earth's magnetic field over 24 hours with a measurement accuracy of ± 0.3 nT and a sampling rate of one sample per 3 seconds. The quality of the recorded data is excellent and consistent with the commercial instrument. In addition, experiments on ferromagnetic target localization were conducted; the gradiometer shows a strong ability in magnetic anomaly detection and localization. To sum up, it has the advantages of convenient operation, high precision, and strong anti-interference, which proves the effectiveness of the dual-sensor Overhauser magnetic gradiometer.
Sun, Yongliang; Xu, Yubin; Li, Cheng; Ma, Lin
2013-01-01
A Kalman/map filtering (KMF)-aided fast normalized cross correlation (FNCC)-based Wi-Fi fingerprinting location sensing system is proposed in this paper. Compared with conventional neighbor selection algorithms that calculate localization results with received signal strength (RSS) mean samples, the proposed FNCC algorithm makes use of all the on-line RSS samples and reference point RSS variations to achieve higher fingerprinting accuracy. The FNCC computes efficiently while maintaining the same accuracy as the basic normalized cross correlation. Additionally, a KMF is also proposed to process fingerprinting localization results. It employs a new map matching algorithm to nonlinearize the linear location prediction process of Kalman filtering (KF) that takes advantage of spatial proximities of consecutive localization results. With a calibration model integrated into an indoor map, the map matching algorithm corrects unreasonable prediction locations of the KF according to the building interior structure. Thus, more accurate prediction locations are obtained. Using these locations, the KMF considerably improves fingerprinting algorithm performance. Experimental results demonstrate that the FNCC algorithm with reduced computational complexity outperforms other neighbor selection algorithms and the KMF effectively improves location sensing accuracy by using indoor map information and spatial proximities of consecutive localization results. PMID:24233027
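The core matching step — normalized cross correlation between on-line RSS samples and reference-point fingerprints — can be sketched as follows (a plain NCC, not the authors' fast FNCC variant; array shapes and values are toy assumptions):

```python
import numpy as np

def ncc(online_rss, ref_rss):
    """Normalized cross correlation between one on-line RSS sample vector
    and one reference-point fingerprint (both over the same AP list)."""
    a = online_rss - online_rss.mean()
    b = ref_rss - ref_rss.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def locate(online_samples, fingerprints, positions):
    """Score every reference point by mean NCC over all on-line samples
    and return the best-matching position."""
    scores = [np.mean([ncc(s, fp) for s in online_samples])
              for fp in fingerprints]
    return positions[int(np.argmax(scores))]

rng = np.random.default_rng(0)
fingerprints = rng.normal(-60, 8, size=(40, 6))      # 40 RPs x 6 APs (dBm)
positions = rng.random((40, 2)) * 50                 # RP coordinates (m)
online = fingerprints[7] + rng.normal(0, 2, size=(5, 6))  # samples near RP 7
print(locate(online, fingerprints, positions), positions[7])
```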
Application of Improved APO Algorithm in Vulnerability Assessment and Reconstruction of Microgrid
NASA Astrophysics Data System (ADS)
Xie, Jili; Ma, Hailing
2018-01-01
Artificial Physics Optimization (APO) has good global search ability, avoids the premature convergence seen in the PSO algorithm, and offers stable, fast convergence and robustness. On the basis of the vector-model APO, a reactive power optimization algorithm based on an improved APO algorithm is proposed for the static structure and dynamic operating characteristics of a microgrid. Simulation tests on the IEEE 30-bus system show that the algorithm achieves better efficiency and accuracy than other optimization algorithms.
Ramsey, Elijah W.; Nelson, Gene A.; Sapkota, Sijan
1998-01-01
A progressive classification of a marsh and forest system using Landsat Thematic Mapper (TM), color infrared (CIR) photography, and ERS-1 synthetic aperture radar (SAR) data improved classification accuracy compared to classification using solely TM reflective band data. The classification resulted in a detailed identification of differences within a nearly monotypic black needlerush marsh. Accuracy percentages of these classes were surprisingly high given the complexities of classification. The detailed classification resulted in a more accurate portrayal of the marsh transgressive sequence than was obtainable with TM data alone. Each sensor's contribution to the improved classification was compared to that using only the six reflective TM bands. Individually, the green reflective CIR and SAR data identified broad categories of water, marsh, and forest. In combination with TM, SAR and the green CIR band improved overall accuracy by about 3% and 15%, respectively. The SAR data improved the TM classification accuracy mostly in the marsh classes. The green CIR data also improved the marsh classification accuracy and accuracies in some water classes. The final combination of all sensor data improved almost all class accuracies by 2% to 70%, with an overall improvement of about 20% over TM data alone. Not only was the identification of vegetation types improved, but the spatial detail of the classification approached 10 m in some areas.
The conical scanner evaluation system design
NASA Technical Reports Server (NTRS)
Cumella, K. E.; Bilanow, S.; Kulikov, I. B.
1982-01-01
The software design for the conical scanner evaluation system is presented. The purpose of this system is to support the performance analysis of the LANDSAT-D conical scanners, which are infrared horizon detection attitude sensors designed for improved accuracy. The system consists of six functionally independent subsystems and five interface data bases. The system structure and interfaces of each of the subsystems are described, and the content, format, and file structure of each of the data bases are specified. For each subsystem, the functional logic, the control parameters, the baseline structure, and each of the subroutines are described. The subroutine descriptions include a procedure definition and the input and output parameters.
Hyperfine structure investigations for the odd-parity configuration system in atomic holmium
NASA Astrophysics Data System (ADS)
Stefanska, D.; Furmann, B.
2018-02-01
In this work, new experimental results on the hyperfine structure (hfs) in the holmium atom are reported, concerning the odd-parity level system. Investigations were performed by the method of laser-induced fluorescence in a hollow cathode discharge lamp on 97 spectral lines in the visible part of the spectrum. The hyperfine structure constants (magnetic dipole A and electric quadrupole B) were determined for the first time for 40 levels; for another 21 levels, the hfs constants available in the literature were remeasured. The results for the A constants can be viewed as fully reliable; for the B constants, further possibilities for improving the accuracy are considered.
NASA Technical Reports Server (NTRS)
Hoppa, Mary Ann; Wilson, Larry W.
1994-01-01
There are many software reliability models which try to predict the future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold: we describe an experimental methodology using a data structure called the debugging graph, and we apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further, we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
Sentiment analysis of feature ranking methods for classification accuracy
NASA Astrophysics Data System (ADS)
Joseph, Shashank; Mugauri, Calvin; Sumathy, S.
2017-11-01
Text pre-processing and feature selection are important and critical steps in text mining. Text pre-processing of large volumes of datasets is a difficult task, as unstructured raw data are converted into a structured format. Traditional methods of processing and weighting took much time and were less accurate. To overcome this challenge, feature ranking techniques have been devised. The feature set from text pre-processing is fed as input for feature selection, which helps improve text classification accuracy. Of the three feature selection categories available, the filter category is the focus here. Five feature ranking methods, namely document frequency, standard deviation, information gain, chi-square, and weighted log-likelihood ratio, are analyzed.
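To make the filter-category idea concrete, the sketch below ranks the terms of a toy corpus with the chi-square statistic; the corpus, labels, and use of scikit-learn are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of filter-based feature ranking for text classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2
import numpy as np

docs = ["great product, works well", "terrible, broke after a day",
        "excellent value", "awful experience, do not buy"]
labels = np.array([1, 0, 1, 0])  # 1 = positive, 0 = negative sentiment

# Text pre-processing: convert unstructured text into a term-document matrix.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Filter-category ranking: score every term with the chi-square statistic
# and keep the highest-scoring features for the downstream classifier.
scores, _ = chi2(X, labels)
ranking = sorted(zip(vectorizer.get_feature_names_out(), scores),
                 key=lambda pair: -pair[1])
print(ranking[:5])  # top-ranked terms
```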
Vorberg, Susann
2013-01-01
Abstract Biodegradability describes the capacity of substances to be mineralized by free‐living bacteria. It is a crucial property in estimating a compound’s long‐term impact on the environment. The ability to reliably predict biodegradability would reduce the need for laborious experimental testing. However, this endpoint is difficult to model due to unavailability or inconsistency of experimental data. Our approach makes use of the Online Chemical Modeling Environment (OCHEM) and its rich supply of machine learning methods and descriptor sets to build classification models for ready biodegradability. These models were analyzed to determine the relationship between characteristic structural properties and biodegradation activity. The distinguishing feature of the developed models is their ability to estimate the accuracy of prediction for each individual compound. The models developed using seven individual descriptor sets were combined in a consensus model, which provided the highest accuracy. The identified overrepresented structural fragments can be used by chemists to improve the biodegradability of new chemical compounds. The consensus model, the datasets used, and the calculated structural fragments are publicly available at http://ochem.eu/article/31660. PMID:27485201
NASA Astrophysics Data System (ADS)
Vlasiuk, Maryna; Frascoli, Federico; Sadus, Richard J.
2016-09-01
The thermodynamic, structural, and vapor-liquid equilibrium properties of neon are comprehensively studied using ab initio, empirical, and semi-classical intermolecular potentials and classical Monte Carlo simulations. Path integral Monte Carlo simulations for isochoric heat capacity and structural properties are also reported for two empirical potentials and one ab initio potential. The isobaric and isochoric heat capacities, thermal expansion coefficient, thermal pressure coefficient, isothermal and adiabatic compressibilities, Joule-Thomson coefficient, and the speed of sound are reported and compared with experimental data for the entire range of liquid densities from the triple point to the critical point. Lustig's thermodynamic approach is formally extended for temperature-dependent intermolecular potentials. Quantum effects are incorporated using the Feynman-Hibbs quantum correction, which results in significant improvement in the accuracy of predicted thermodynamic properties. The new Feynman-Hibbs version of the Hellmann-Bich-Vogel potential predicts the isochoric heat capacity to an accuracy of 1.4% over the entire range of liquid densities. It also predicts other thermodynamic properties more accurately than alternative intermolecular potentials.
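For context, the quadratic Feynman-Hibbs correction replaces a classical pair potential U(r) by an effective, temperature-dependent one. The standard form is shown below (the paper applies it to the Hellmann-Bich-Vogel and other neon potentials; μ = m/2 is the reduced mass of a pair of identical atoms of mass m):

\[ U_{\mathrm{FH}}(r) \;=\; U(r) \;+\; \frac{\hbar^{2}}{24\,\mu k_{B} T}\left(U''(r) + \frac{2}{r}\,U'(r)\right), \qquad \mu = \frac{m}{2}. \]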
NASA Astrophysics Data System (ADS)
Dragoni, Daniele; Daff, Thomas D.; Csányi, Gábor; Marzari, Nicola
2018-01-01
We show that the Gaussian Approximation Potential (GAP) machine-learning framework can describe complex magnetic potential energy surfaces, taking ferromagnetic iron as a paradigmatic challenging case. The training database includes total energies, forces, and stresses obtained from density-functional theory in the generalized-gradient approximation, and comprises approximately 150,000 local atomic environments, ranging from pristine and defected bulk configurations to surfaces and generalized stacking faults with different crystallographic orientations. We find the structural, vibrational, and thermodynamic properties of the GAP model to be in excellent agreement with those obtained directly from first-principles electronic-structure calculations. There is good transferability to quantities, such as Peierls energy barriers, which are determined to a large extent by atomic configurations that were not part of the training set. We observe the benefit and the need of using highly converged electronic-structure calculations to sample a target potential energy surface. The end result is a systematically improvable potential that can achieve the same accuracy of density-functional theory calculations, but at a fraction of the computational cost.
NASA Astrophysics Data System (ADS)
Zhang, Guojian; Yu, Chengxin; Ding, Xinhua
2018-01-01
In this study, digital photography is used to monitor the instantaneous deformation of a masonry wall under seismic oscillation. To obtain higher measurement accuracy, the image matching-time baseline parallax method (IM-TBPM) is used to correct errors caused by changes in the intrinsic and extrinsic parameters of the digital cameras. Results show that the average errors of control point C5 are 0.79 mm, 0.44 mm, and 0.96 mm in the X, Z, and resultant directions, respectively; the average errors of control point C6 are 0.49 mm, 0.44 mm, and 0.71 mm in the X, Z, and resultant directions, respectively. These results suggest that IM-TBPM can meet the accuracy requirements of instantaneous deformation monitoring. Under seismic oscillation, cracks first develop in the middle-to-lower portion of the masonry wall; shear failure then occurs at the middle of the wall. This study provides a technical basis for analyzing the crack development pattern of masonry structures under seismic oscillation and has significant implications for improved construction of masonry structures in earthquake-prone areas.
Magnetic resonance imaging based clinical research in Alzheimer's disease.
Fayed, Nicolás; Modrego, Pedro J; Salinas, Gulillermo Rojas; Gazulla, José
2012-01-01
Alzheimer's disease (AD) is the most common cause of dementia in elderly people in western countries. However, important goals remain unmet regarding early diagnosis and the development of new drugs for treatment. Magnetic resonance imaging (MRI) and volumetry of the medial temporal lobe structures are useful tools for diagnosis. Positron emission tomography is one of the most sensitive tests for making an early diagnosis of AD, but its cost and limited availability are important caveats for its utilization. The importance of magnetic resonance techniques has increased gradually to the extent that most clinical studies of AD use these techniques as the main aid to diagnosis. However, structural MRI as a biomarker of early AD generally reaches an accuracy of about 80%, so additional biomarkers should be used to improve predictions. Other structural MRI techniques (diffusion-weighted, diffusion-tensor MRI) and functional MRI have also made interesting contributions to the understanding of the pathophysiology of AD. Magnetic resonance spectroscopy has proven useful to monitor progression and response to treatment in AD, as well as a biomarker of early AD in mild cognitive impairment.
Link prediction with node clustering coefficient
NASA Astrophysics Data System (ADS)
Wu, Zhihao; Lin, Youfang; Wang, Jing; Gregory, Steve
2016-06-01
Predicting missing links in incomplete complex networks efficiently and accurately is still a challenging problem. The recently proposed Cannistraci-Alanis-Ravasi (CAR) index shows the power of local link/triangle information in improving link-prediction accuracy. Inspired by the idea of employing local link/triangle information, we propose a new similarity index with more local structure information. In our method, local link/triangle structure information is conveyed directly by the clustering coefficient of common neighbors. The clustering coefficient is effective at estimating the contribution of a common neighbor because it counts the links existing between that common neighbor's own neighbors, and these links occupy the same structural position, relative to the common neighbor, as the candidate link. In our experiments, three estimators (precision, AUP, and AUC) are used to evaluate the accuracy of link prediction algorithms. Experimental results on ten test networks drawn from various fields show that our new index is more effective in predicting missing links than the CAR index, especially for networks with a low correlation between the number of common neighbors and the number of links between common neighbors.
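A minimal sketch of the clustering-coefficient-based scoring described above, using networkx on a standard example graph; the candidate pairs are arbitrary illustrations and the function name is ours, not the authors'.

```python
# Score a candidate pair (x, y) by summing the local clustering
# coefficients of its common neighbors, as described in the abstract.
import networkx as nx

def clustering_score(G, x, y):
    # Links among a common neighbor's own neighbors occupy the same
    # structural position as the candidate link, so each common neighbor's
    # clustering coefficient estimates its contribution to the score.
    return sum(nx.clustering(G, z) for z in nx.common_neighbors(G, x, y))

G = nx.karate_club_graph()
for u, v in [(0, 9), (5, 16), (24, 27)]:
    print((u, v), round(clustering_score(G, u, v), 3))
```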
O'Donnell, Daniel; Mancera, Mike; Savory, Eric; Christopher, Shawn; Schaffer, Jason; Roumpf, Steve
2015-01-01
Early and accurate identification of ST-elevation myocardial infarction (STEMI) by prehospital providers has been shown to significantly improve door-to-balloon times and patient outcomes. Previous studies have shown that paramedic accuracy in reading 12-lead ECGs can range from 86% to 94%. However, recent studies have demonstrated that accuracy diminishes for the more uncommon STEMI presentations (e.g. lateral). Unlike hospital physicians, paramedics rarely have the ability to review previous ECGs for comparison. Whether or not a prior ECG can improve paramedic accuracy is not known. We hypothesized that the availability of prior ECGs improves paramedic accuracy in ECG interpretation. 130 paramedics were given a single clinical scenario and were then randomly assigned 12 computerized prehospital ECGs, 6 with and 6 without an accompanying prior ECG. All ECGs were obtained from a local STEMI registry. For each ECG, paramedics were asked to determine whether or not there was a STEMI and to rate their confidence in their interpretation. To determine whether the old ECGs improved accuracy, we used a mixed-effects logistic regression model to calculate p-values between the control and intervention. The addition of a previous ECG improved the accuracy of identifying STEMIs from 75.5% to 80.5% (p=0.015). A previous ECG also increased paramedic confidence in their interpretation (p=0.011). The availability of previous ECGs improves paramedic accuracy and enhances their confidence in interpreting STEMIs. Further studies are needed to evaluate this impact in a clinical setting. Copyright © 2015 Elsevier Inc. All rights reserved.
Contribution of SELENE-2 geodetic measurements to constrain the lunar internal structure
NASA Astrophysics Data System (ADS)
Matsumoto, K.; Kikuchi, F.; Yamada, R.; Iwata, T.; Kono, Y.; Tsuruta, S.; Hanada, H.; Goossens, S. J.; Ishihara, Y.; Kamata, S.; Sasaki, S.
2012-12-01
The internal structure and composition of the Moon provide important clues and constraints for theories of how the Moon formed and evolved. The Apollo seismic network has contributed to internal structure modeling. Efforts have been made to detect the lunar core from the noisy Apollo data (e.g., [1], [2]), but there is scant information about the structure below the deepest moonquakes at about 1000 km depth. On the other hand, there have been geodetic studies to infer the deep structure of the Moon. For example, LLR (Lunar Laser Ranging) data analyses detected a displacement of the lunar pole of rotation, indicating that dissipation is acting on the rotation arising from a fluid core [3]. Bayesian inversion using geodetic data (such as mass, moments of inertia, tidal Love numbers k2 and h2, and quality factor Q) also suggests a fluid core and partial melt in the lower mantle region [4]. Further improvements in determining the second-degree gravity coefficients (which will lead to better estimates of moments of inertia) and the Love number k2 will help us to better constrain the lunar internal structure. The differential VLBI (Very Long Baseline Interferometry) technique, which was used in the Japanese lunar exploration mission SELENE (Sept. 2007 - June 2009), is expected to contribute to better determining the second-degree potential Love number k2 and low-degree gravity coefficients. SELENE will be followed by the future lunar mission SELENE-2, which will carry both a lander and an orbiter. We propose to put SELENE-type radio sources on these spacecraft in order to accurately estimate k2 and the low-degree gravity coefficients. By using the same-beam VLBI tracking technique, these parameters will be retrieved through precision orbit determination of the orbiter with respect to the lander, which serves as a reference. The VLBI mission with the radio sources is currently one of the mission candidates for SELENE-2. We have conducted a preliminary simulation study on the anticipated k2 accuracy. With an assumed mission duration of about 3 months and an arc length of 14 days, the k2 accuracy is estimated to be better than 1%, where the uncertainty is evaluated as 10 times the formal error considering the errors in the non-conservative force modeling and in the lander position. We carried out a feasibility study using Bayesian inversion on how well we can constrain the lunar internal structure by the geodetic data to be improved by SELENE-2. It will be shown that such improved geodetic data help to narrow the range of plausible internal structure models, but there are still trade-offs among crust, mantle, and core structures. Preliminary simulation results will be presented to show that the accuracy of core structure estimation will be improved as a consequence of better determination of the mantle structure by combining the geodetic data with the seismic data. References [1] Weber et al. (2011), Science, 331, 309-312, doi:10.1126/science.1199375 [2] Garcia et al. (2011), PEPI, doi:10.1016/j.pepi.2011.06.015 [3] Williams et al. (2001), JGR, 106, E11, 27,933-27,968 [4] Khan and Mosegaard (2005), GRL, 32, L22203, doi:10.1029/2005GL023985
Erickson, Jon A; Jalaie, Mehran; Robertson, Daniel H; Lewis, Richard A; Vieth, Michal
2004-01-01
The key to success for computational tools used in structure-based drug design is the ability to accurately place or "dock" a ligand in the binding pocket of the target of interest. In this report we examine the effect of several factors on docking accuracy, including ligand and protein flexibility. To examine ligand flexibility in an unbiased fashion, a test set of 41 ligand-protein cocomplex X-ray structures was assembled that represents a diversity of size, flexibility, and polarity with respect to the ligands. Four docking algorithms, DOCK, FlexX, GOLD, and CDOCKER, were applied to the test set, and the results were examined in terms of the ability to reproduce X-ray ligand positions within 2.0 Å heavy-atom root-mean-square deviation. Overall, each method performed well (>50% accuracy), but for all methods it was found that docking accuracy decreased substantially for ligands with eight or more rotatable bonds. Only CDOCKER was able to accurately dock most of those ligands with eight or more rotatable bonds (71% accuracy rate). A second test set of structures was gathered to examine how protein flexibility influences docking accuracy. CDOCKER was applied to X-ray structures of trypsin, thrombin, and HIV-1-protease, using protein structures bound to several ligands and also the unbound (apo) form. Docking experiments of each ligand to one "average" structure and to the apo form were carried out, and the results were compared to docking each ligand back to its originating structure. The results show that docking accuracy falls off dramatically if one uses an average or apo structure. In fact, it is shown that the drop in docking accuracy mirrors the degree to which the protein moves upon ligand binding.
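For concreteness, the 2.0 Å success criterion above reduces to a simple computation once docked and crystallographic heavy atoms are matched; the coordinates below are synthetic stand-ins, not structures from the test set.

```python
# Minimal sketch of the heavy-atom RMSD criterion used to score docking.
import numpy as np

def heavy_atom_rmsd(docked, xray):
    # Root-mean-square deviation over matched heavy-atom coordinates (N x 3);
    # assumes both poses list the same atoms in the same order.
    diff = docked - xray
    return np.sqrt((diff ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)
xray = rng.normal(size=(30, 3))                       # reference pose
docked = xray + rng.normal(scale=0.5, size=(30, 3))   # perturbed pose
rmsd = heavy_atom_rmsd(docked, xray)
print(f"RMSD = {rmsd:.2f} A ->", "success" if rmsd <= 2.0 else "failure")
```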
Buchner, Lena; Güntert, Peter
2015-02-03
Nuclear magnetic resonance (NMR) structures are represented by bundles of conformers calculated from different randomized initial structures using identical experimental input data. The spread among these conformers indicates the precision of the atomic coordinates. However, there is as yet no reliable measure of structural accuracy, i.e., how close NMR conformers are to the "true" structure. Instead, the precision of structure bundles is widely (mis)interpreted as a measure of structural quality. Attempts to increase precision often overestimate accuracy by tight bundles of high precision but much lower accuracy. To overcome this problem, we introduce a protocol for NMR structure determination with the software package CYANA, which produces, like the traditional method, bundles of conformers in agreement with a common set of conformational restraints but with a realistic precision that is, throughout a variety of proteins and NMR data sets, a much better estimate of structural accuracy than the precision of conventional structure bundles. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ford, Simon; Dosani, Maryam; Robinson, Ashley J; Campbell, G Claire; Ansermino, J Mark; Lim, Joanne; Lauder, Gillian R
2009-12-01
The ilioinguinal (II)/iliohypogastric (IH) nerve block is a safe, frequently used block that has been improved in efficacy and safety by the use of ultrasound guidance. We assessed the frequency with which pediatric anesthesiologists with limited experience with ultrasound-guided regional anesthesia could correctly identify anatomical structures within the inguinal region. Our primary outcome was to compare the frequency of correct identification of the transversus abdominis (TA) muscle with the frequency of correct identification of the II/IH nerves. We used 2 ultrasound machines with different capabilities to assess a potential equipment effect on success of structure identification and time taken for structure identification. Seven pediatric anesthesiologists with <6 mo experience with ultrasound-guided regional anesthesia performed a total of 127 scans of the II region in anesthetized children. The muscle planes and the II and IH nerves were identified and labeled. The ultrasound images were reviewed by a blinded expert to mark accuracy of structure identification and time taken for identification. Two ultrasound machines (Sonosite C180plus and Micromaxx, both from Sonosite, Bothell, WA) were used. There was no difference in the frequency of correct identification of the TA muscle compared with the II/IH nerves (χ² test, TA versus II, P = 0.45; TA versus IH, P = 0.50). Ultrasound machine selection showed a nonsignificant trend toward improving correct II/IH nerve identification (II nerve χ² test, P = 0.02; IH nerve χ² test, P = 0.04; Bonferroni-corrected significance threshold 0.017) but not for the muscle planes (χ² test, P = 0.83) or time taken (1-way analysis of variance, P = 0.07). A curve of improving accuracy with number of scans was plotted, with reliable TA recognition occurring after 14-15 scans and II/IH identification after 18 scans. We have demonstrated that although there is no difference in the overall accuracy of muscle plane versus II/IH nerve identification, the muscle planes are reliably identified after fewer scans of the inguinal region. We suggest that a reliable end point for the inexperienced practitioner of ultrasound-guided II/IH nerve block may be the TA/internal oblique plane, where the nerves are reported to be found in 100% of cases.
NASA Astrophysics Data System (ADS)
Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K. T.
2012-12-01
Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and significantly influences the operation of hydropower reservoirs. Improving hourly reservoir inflow forecasts over a 24-hour lead-time is considered with the day-ahead (Elspot) market of the Nordic power exchange in perspective. The procedure presented comprises an error model added on top of an unalterable constant-parameter conceptual model, and a sequential data assimilation routine. The structure of the error model was investigated using freely available software for detecting mathematical relationships in a given dataset (EUREQA) and was kept to minimum complexity for computational reasons. As new streamflow data become available, the extra information manifested in the discrepancies between measurements and conceptual model outputs is extracted and assimilated into the forecasting system recursively using a sequential Monte Carlo technique. Besides improving forecast skill significantly, the probabilistic inflow forecasts provided by the present approach carry suitable information for reducing uncertainty in decision-making processes related to hydropower system operation. The potential of the current procedure for improving the accuracy of inflow forecasts at lead-times up to 24 hours and its reliability in different seasons of the year will be illustrated and discussed thoroughly.
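The assimilation step can be pictured with a toy bootstrap particle filter: the conceptual model's output stays fixed, and an additive error term is updated whenever a new observation arrives. The AR(1) error model, noise levels, and data below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: sequential Monte Carlo correction of inflow forecasts.
import numpy as np

rng = np.random.default_rng(1)
n_particles, phi, q, r = 500, 0.9, 0.2, 0.5    # AR(1) coef, process/obs noise
particles = np.zeros(n_particles)              # error-state ensemble
model_out = np.array([10.0, 10.5, 11.0])       # conceptual-model forecasts
observed = np.array([10.8, 11.6, 12.1])        # incoming streamflow data

for m, y in zip(model_out, observed):
    # Propagate the additive AR(1) error model.
    particles = phi * particles + rng.normal(0.0, q, n_particles)
    # Weight by the likelihood of the new observation, then resample.
    w = np.exp(-0.5 * ((y - (m + particles)) / r) ** 2)
    w /= w.sum()
    particles = rng.choice(particles, size=n_particles, p=w)
    print(f"corrected forecast: {m + particles.mean():.2f} (obs {y})")
```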
Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs
2017-01-01
Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package. PMID:29107980
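For background, such pairing probabilities follow from the standard Boltzmann-ensemble relations (generic notation, not specific to the NCM-based model): the probability of a structure s with free energy ΔG(s), and the pairing probability of nucleotides i and j, are

\[ P(s) = \frac{e^{-\Delta G(s)/RT}}{Z}, \qquad Z = \sum_{s} e^{-\Delta G(s)/RT}, \qquad p_{ij} = \sum_{s \,\ni\, (i,j)} P(s), \]

where the partition function algorithm computes Z, and hence the p_ij, without enumerating structures explicitly.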
Improved segmentation of cerebellar structures in children
Narayanan, Priya Lakshmi; Boonazier, Natalie; Warton, Christopher; Molteno, Christopher D; Joseph, Jesuchristopher; Jacobson, Joseph L; Jacobson, Sandra W; Zöllei, Lilla; Meintjes, Ernesta M
2016-01-01
Background: Consistent localization of cerebellar cortex in a standard coordinate system is important for functional studies and detection of anatomical alterations in studies of morphometry. To date, no pediatric cerebellar atlas is available. New method: The probabilistic Cape Town Pediatric Cerebellar Atlas (CAPCA18) was constructed in the age-appropriate National Institute of Health Pediatric Database asymmetric template space using manual tracings of 16 cerebellar compartments in 18 healthy children (9–13 years) from Cape Town, South Africa. The individual atlases of the training subjects were also used to implement multi-atlas label fusion using multi-atlas majority voting (MAMV) and multi-atlas generative model (MAGM) approaches. Segmentation accuracy in 14 test subjects was compared for each method to 'gold standard' manual tracings. Results: Spatial overlap between manual tracings and CAPCA18 automated segmentation was 73% or higher for all lobules in both hemispheres, except VIIb and X. Automated segmentation using MAGM yielded the best segmentation accuracy over all lobules (mean Dice Similarity Coefficient 0.76; range 0.55–0.91). Comparison with existing methods: In all lobules, spatial overlap of CAPCA18 segmentations with manual tracings was similar to or higher than that obtained with SUIT (spatially unbiased infra-tentorial template), providing additional evidence of the benefits of an age-appropriate atlas. MAGM segmentation accuracy was comparable to values reported recently by Park et al. (2014) in adults (across all lobules mean DSC = 0.73, range 0.40–0.89). Conclusions: CAPCA18 and the associated multi-atlases of the training subjects yield improved segmentation of cerebellar structures in children. PMID:26743973
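For reference, the Dice Similarity Coefficient quoted above measures voxel overlap between two segmentations; a minimal sketch with toy binary masks rather than real lobule labels:

```python
# DSC = 2|A n B| / (|A| + |B|); 1.0 means perfect spatial overlap.
import numpy as np

def dice(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((10, 10), dtype=bool); manual[2:7, 2:7] = True
auto = np.zeros((10, 10), dtype=bool); auto[3:8, 3:8] = True
print(f"DSC = {dice(manual, auto):.2f}")
```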
Curved Thermopiezoelectric Shell Structures Modeled by Finite Element Analysis
NASA Technical Reports Server (NTRS)
Lee, Ho-Jun
2000-01-01
"Smart" structures composed of piezoelectric materials may significantly improve the performance of aeropropulsion systems through a variety of vibration, noise, and shape-control applications. The development of analytical models for piezoelectric smart structures is an ongoing, in-house activity at the NASA Glenn Research Center at Lewis Field focused toward the experimental characterization of these materials. Research efforts have been directed toward developing analytical models that account for the coupled mechanical, electrical, and thermal response of piezoelectric composite materials. Current work revolves around implementing thermal effects into a curvilinear-shell finite element code. This enhances capabilities to analyze curved structures and to account for coupling effects arising from thermal effects and the curved geometry. The current analytical model implements a unique mixed multi-field laminate theory to improve computational efficiency without sacrificing accuracy. The mechanics can model both the sensory and active behavior of piezoelectric composite shell structures. Finite element equations are being implemented for an eight-node curvilinear shell element, and numerical studies are being conducted to demonstrate capabilities to model the response of curved piezoelectric composite structures (see the figure).
Langó, Tamás; Róna, Gergely; Hunyadi-Gulyás, Éva; Turiák, Lilla; Varga, Julia; Dobson, László; Várady, György; Drahos, László; Vértessy, Beáta G; Medzihradszky, Katalin F; Szakács, Gergely; Tusnády, Gábor E
2017-02-13
Transmembrane proteins play a crucial role in signaling, ion transport, and nutrient uptake, as well as in maintaining the dynamic equilibrium between the internal and external environment of cells. Despite their important biological functions and abundance, less than 2% of all determined structures are of transmembrane proteins. Given the persisting technical difficulties associated with high-resolution structure determination of transmembrane proteins, additional methods, including computational and experimental techniques, remain vital in promoting our understanding of their topologies, 3D structures, functions, and interactions. Here we report a method for the high-throughput determination of extracellular segments of transmembrane proteins based on the identification of surface-labeled and biotin-captured peptide fragments by LC/MS/MS. We show that reliable identification of extracellular protein segments increases the accuracy and reliability of existing topology prediction algorithms. Using the experimental topology data as constraints, our improved prediction tool provides accurate and reliable topology models for hundreds of human transmembrane proteins.
I-SonReb: an improved NDT method to evaluate the in situ strength of carbonated concrete
NASA Astrophysics Data System (ADS)
Breccolotti, Marco; Bonfigli, Massimo F.
2015-10-01
Concrete strength evaluated in situ by means of the conventional SonReb method can be highly overestimated in the presence of carbonation, which is responsible for the physical and chemical alteration of the outer layer of concrete. As most existing concrete structures are subject to carbonation, it is important to overcome this problem. In this paper, an Improved SonReb method (I-SonReb) for carbonated concretes is proposed. It relies on a correction coefficient for the measured rebound index, defined as a function of the carbonated concrete cover thickness, an additional parameter to be measured during in situ testing campaigns. The usefulness of the method has been validated by showing the improvement in the accuracy of concrete strength estimation on two sets of NDT experimental data collected from investigations on real structures.
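The idea can be sketched with a SonReb-type power law fc = a·R^b·V^c and a rebound correction that shrinks with carbonated cover thickness; all coefficients and the (linear) form of the correction below are hypothetical placeholders, not the paper's calibrated values.

```python
# Minimal sketch of the I-SonReb idea: correct the rebound index for
# carbonation before applying a SonReb-type strength law. Every constant
# here is an illustrative assumption.
def corrected_strength(rebound, pulse_velocity_ms, carbonation_mm,
                       a=5e-11, b=1.4, c=2.6, k=0.01):
    # Carbonation stiffens the outer layer and inflates the rebound index,
    # so reduce R in proportion to the carbonated cover thickness.
    r_corr = rebound * (1.0 - k * carbonation_mm)
    return a * (r_corr ** b) * (pulse_velocity_ms ** c)  # strength in MPa

print(f"{corrected_strength(45.0, 4200.0, carbonation_mm=12.0):.1f} MPa")
```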
Improved biliary detection and diagnosis through intelligent machine analysis.
Logeswaran, Rajasvaran
2012-09-01
This paper reports on work undertaken to improve automated detection of bile ducts in magnetic resonance cholangiopancreatography (MRCP) images, with the objective of conducting preliminary classification of the images for diagnosis. The proposed I-BDeDIMA (Improved Biliary Detection and Diagnosis through Intelligent Machine Analysis) scheme is a multi-stage framework consisting of successive phases of image normalization, denoising, structure identification, object labeling, feature selection and disease classification. A combination of multiresolution wavelet, dynamic intensity thresholding, segment-based region growing, region elimination, statistical analysis and neural networks, is used in this framework to achieve good structure detection and preliminary diagnosis. Tests conducted on over 200 clinical images with known diagnosis have shown promising results of over 90% accuracy. The scheme outperforms related work in the literature, making it a viable framework for computer-aided diagnosis of biliary diseases. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Microwave Resonator Measurements of Atmospheric Absorption Coefficients: A Preliminary Design Study
NASA Technical Reports Server (NTRS)
Walter, Steven J.; Spilker, Thomas R.
1995-01-01
A preliminary design study examined the feasibility of using microwave resonator measurements to improve the accuracy of atmospheric absorption coefficients and refractivity between 18 and 35 GHz. Increased accuracies would improve the capability of water vapor radiometers to correct for radio signal delays caused by Earth's atmosphere. Calibration of delays incurred by radio signals traversing the atmosphere has applications to both deep space tracking and planetary radio science experiments. Currently, the Cassini gravity wave search requires 0.8-1.0% absorption coefficient accuracy. This study examined current atmospheric absorption models and estimated that current model accuracy ranges from 5% to 7%. The refractivity of water vapor is known to 1% accuracy, while the refractivities of many dry gases (oxygen, nitrogen, etc.) are known to better than 0.1%. Improvements to the current generation of models will require that both the functional form and absolute absorption of the water vapor spectrum be calibrated and validated. Several laboratory techniques for measuring atmospheric absorption and refractivity were investigated, including absorption cells, single and multimode rectangular cavity resonators, and Fabry-Perot resonators. Semi-confocal Fabry-Perot resonators were shown to provide the most cost-effective and accurate method of measuring atmospheric gas refractivity. The need for accurate environmental measurement and control was also addressed. A preliminary design for the environmental control and measurement system was developed to aid in identifying significant design issues. The analysis indicated that overall measurement accuracy will be limited by measurement errors and imprecise control of the gas sample's thermodynamic state, thermal expansion and vibration-induced deformation of the resonator structure, and electronic measurement error. The central problem is to identify systematic errors, because random errors can be reduced by averaging. Calibrating the resonator measurements by checking the refractivity of dry gases, which are known to better than 0.1%, provides a method of controlling the systematic errors to 0.1%. The primary source of error in absorptivity and refractivity measurements is thus the ability to measure the concentration of water vapor in the resonator path. Over the whole thermodynamic range of interest, the accuracy of water vapor measurement is 1.5%. However, over the range responsible for most of the radio delay (i.e. conditions in the bottom two kilometers of the atmosphere), the accuracy of water vapor measurements ranges from 0.5% to 1.0%. Therefore the precision of the resonator measurements could be held to 0.3%, and the overall absolute accuracy of resonator-based absorption and refractivity measurements will range from 0.6% to 1.0%.
Feature instructions improve face-matching accuracy
Bindemann, Markus
2018-01-01
Identity comparisons of photographs of unfamiliar faces are prone to error but important for applied settings, such as person identification at passport control. Finding techniques to improve face-matching accuracy is therefore an important contemporary research topic. This study investigated whether matching accuracy can be improved by instruction to attend to specific facial features. Experiment 1 showed that instruction to attend to the eyebrows enhanced matching accuracy for optimized same-day same-race face pairs but not for other-race faces. By contrast, accuracy was unaffected by instruction to attend to the eyes, and declined with instruction to attend to ears. Experiment 2 replicated the eyebrow-instruction improvement with a different set of same-race faces, comprising both optimized same-day and more challenging different-day face pairs. These findings suggest that instruction to attend to specific features can enhance face-matching accuracy, but feature selection is crucial and generalization across face sets may be limited. PMID:29543822
Rangachari, Pavani
2008-01-01
CONTEXT/PURPOSE: With the growing momentum toward hospital quality measurement and reporting by public and private health care payers, hospitals face increasing pressures to improve their medical record documentation and administrative data coding accuracy. This study explores the relationship between the organizational knowledge-sharing structure related to quality and hospital coding accuracy for quality measurement. Simultaneously, this study seeks to identify other leadership/management characteristics associated with coding for quality measurement. Drawing upon complexity theory, the literature on "professional complex systems" has put forth various strategies for managing change and turnaround in professional organizations. In so doing, it has emphasized the importance of knowledge creation and organizational learning through interdisciplinary networks. This study integrates complexity, network structure, and "subgoals" theories to develop a framework for knowledge-sharing network effectiveness in professional complex systems. This framework is used to design an exploratory and comparative research study. The sample consists of 4 hospitals, 2 showing "good coding" accuracy for quality measurement and 2 showing "poor coding" accuracy. Interviews and surveys are conducted with administrators and staff in the quality, medical staff, and coding subgroups in each facility. Findings of this study indicate that good coding performance is systematically associated with a knowledge-sharing network structure rich in brokerage and hierarchy (with leaders connecting different professional subgroups to each other and to the external environment), rather than in density (where everyone is directly connected to everyone else). It also implies that for the hospital organization to adapt to the changing environment of quality transparency, senior leaders must undertake proactive and unceasing efforts to coordinate knowledge exchange across physician and coding subgroups and connect these subgroups with the changing external environment.
de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M
2018-04-01
Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
Toma, Milan; Jensen, Morten Ø; Einstein, Daniel R; Yoganathan, Ajit P; Cochran, Richard P; Kunzelman, Karyn S
2016-04-01
Numerical models of native heart valves are being used to study valve biomechanics to aid design and development of repair procedures and replacement devices. These models have evolved from simple two-dimensional approximations to complex three-dimensional, fully coupled fluid-structure interaction (FSI) systems. Such simulations are useful for predicting the mechanical and hemodynamic loading on implanted valve devices. A current challenge for improving the accuracy of these predictions is choosing and implementing modeling boundary conditions. In order to address this challenge, we are utilizing an advanced in vitro system to validate FSI conditions for the mitral valve system. Explanted ovine mitral valves were mounted in an in vitro setup, and structural data for the mitral valve were acquired with μCT. Experimental data from the in vitro ovine mitral valve system were used to validate the computational model. As the valve closes, the hemodynamic data, high speed leaflet dynamics, and force vectors from the in vitro system were compared to the results of the FSI simulation computational model. The total force of 2.6 N per papillary muscle is matched by the computational model. In vitro and in vivo force measurements enable validating and adjusting material parameters to improve the accuracy of computational models. The simulations can then be used to answer questions that are otherwise not possible to investigate experimentally. This work is important to maximize the validity of computational models of not just the mitral valve, but any biomechanical aspect using computational simulation in designing medical devices.
Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J
2016-10-01
To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As the T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.
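The contrast between sequential and joint fitting can be sketched with deliberately simplified placeholder signal models (not the 2CXM or the acquisition models of the paper): because both models share precontrast T1, their residuals are stacked and minimized together rather than fit one after the other.

```python
# Minimal sketch of joint estimation across two signal models sharing T1.
import numpy as np
from scipy.optimize import least_squares

def t1_signal(t1, tr_values):
    # Placeholder saturation-recovery model for the T1-mapping acquisition.
    return 1.0 - np.exp(-tr_values / t1)

def kinetic_signal(t1, ktrans, t):
    # Placeholder enhancement model whose baseline scales with T1.
    return (1.0 / t1) * (1.0 - np.exp(-ktrans * t))

tr, t = np.linspace(0.1, 3, 6), np.linspace(0, 5, 20)
true = (1.2, 0.8)  # illustrative (T1, Ktrans)
rng = np.random.default_rng(2)
y1 = t1_signal(true[0], tr) + rng.normal(0, 0.01, tr.size)
y2 = kinetic_signal(*true, t) + rng.normal(0, 0.01, t.size)

def residuals(p):
    t1, ktrans = p
    # Stacking both models' residuals couples them through the shared T1.
    return np.concatenate([t1_signal(t1, tr) - y1,
                           kinetic_signal(t1, ktrans, t) - y2])

fit = least_squares(residuals, x0=[1.0, 0.5], bounds=([0.1, 0.0], [5, 5]))
print("joint estimates (T1, Ktrans):", fit.x.round(3))
```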
Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E
2015-01-01
Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy--specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from chart review targeting ICD-9 coding accuracy of all patients treated at our institution between March and June of 2010. To improve performance an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, whereby primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery cases. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%) with only 1 attending having accuracy below 80%. Incorrect primary and secondary ICD-9 codes in metastatic cases showed the most significant improvement (10% vs 2% after intervention). Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy. This quality assurance project highlights the potential problem of ICD-9 coding accuracy by physicians and offers an approach to effectively address this shortcoming. Copyright © 2015. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Huang, Wei; Yang, Xiao-xu; Han, Jun-feng; Wei, Yu; Zhang, Jing; Xie, Mei-lin; Yue, Peng
2016-01-01
A high-precision tracking platform for celestial navigation adopts a controlled-mirror servo structure to overcome disadvantages such as large volume and rotational inertia and slow response speed, improving the stability and tracking accuracy of the platform. Because the optical sensor and mirror are installed on the middle gimbal, the requirements on its stiffness and resonant frequency are high. Finite element modal analysis theory was applied to study the dynamic characteristics of the middle gimbal, and ANSYS was used for the finite element dynamic simulation. Based on the computed results, the weak links of the structure were identified, and improvements were proposed and reanalyzed. The lowest resonant frequency of the optimized middle gimbal avoids the bandwidth of the platform servo mechanism and is much higher than the disturbance frequency of the carrier aircraft, reducing mechanical resonance of the framework. This provides a theoretical basis for the structural optimization design of a high-precision autonomous celestial navigation tracking mirror system.
Jasim, Sarah B; Li, Zhuo; Guest, Ellen E; Hirst, Jonathan D
2017-12-16
A fully quantitative theory connecting protein conformation and optical spectroscopy would facilitate deeper insights into biophysical and simulation studies of protein dynamics and folding. The web server DichroCalc (http://comp.chem.nottingham.ac.uk/dichrocalc) allows one to compute from first principles the electronic circular dichroism spectrum of a (modeled or experimental) protein structure or ensemble of structures. The regular, repeating, chiral nature of secondary structure elements leads to intense bands in the far-ultraviolet (UV). The near-UV bands are much weaker and have been challenging to compute theoretically. We report some advances in the accuracy of calculations in the near-UV, realized through the consideration of the vibrational structure of the electronic transitions of aromatic side chains. The improvements have been assessed over a set of diverse proteins. We illustrate them using bovine pancreatic trypsin inhibitor and present a new, detailed analysis of the interactions which are most important in determining the near-UV circular dichroism spectrum. Copyright © 2018. Published by Elsevier Ltd.
Masdrakis, Vasilios G; Legaki, Emilia-Maria; Vaidakis, Nikolaos; Ploumpidis, Dimitrios; Soldatos, Constantin R; Papageorgiou, Charalambos; Papadimitriou, George N; Oulis, Panagiotis
2015-07-01
Increased heartbeat perception accuracy (HBP-accuracy) may contribute to the pathogenesis of Panic Disorder (PD) without or with Agoraphobia (PDA). Extant research suggests that HBP-accuracy is a rather stable individual characteristic, moreover predictive of worse long-term outcome in PD/PDA patients. However, it remains still unexplored whether HBP-accuracy adversely affects patients' short-term outcome after structured cognitive behaviour therapy (CBT) for PD/PDA. To explore the potential association between HBP-accuracy and the short-term outcome of a structured brief-CBT for the acute treatment of PDA. We assessed baseline HBP-accuracy using the "mental tracking" paradigm in 25 consecutive medication-free, CBT-naive PDA patients. Patients then underwent a structured, protocol-based, 8-session CBT by the same therapist. Outcome measures included the number of panic attacks during the past week, the Agoraphobic Cognitions Questionnaire (ACQ), and the Mobility Inventory-Alone subscale (MI-alone). No association emerged between baseline HBP-accuracy and posttreatment changes concerning number of panic attacks. Moreover, higher baseline HBP-accuracy was associated with significantly larger reductions in the scores of the ACQ and the MI-alone scales. Our results suggest that in PDA patients undergoing structured brief-CBT for the acute treatment of their symptoms, higher baseline HBP-accuracy is not associated with worse short-term outcome concerning panic attacks. Furthermore, higher baseline HBP-accuracy may be associated with enhanced therapeutic gains in agoraphobic cognitions and behaviours.
NASA Astrophysics Data System (ADS)
Blair, J. B.; Rabine, D.; Hofton, M. A.; Citrin, E.; Luthcke, S. B.; Misakonis, A.; Wake, S.
2015-12-01
Full waveform laser altimetry has demonstrated its ability to capture highly accurate surface topography and vertical structure (e.g. vegetation height and structure) even in the most challenging conditions. NASA's high-altitude airborne laser altimeter, LVIS (the Land, Vegetation, and Ice Sensor), has produced high-accuracy surface maps over a wide variety of science targets for the last 2 decades. Recently NASA has funded the transition of LVIS into a full-time NASA airborne Facility instrument to increase the amount and quality of the data, to decrease the end-user costs, and to expand the utilization and application of this unique sensor capability. Based heavily on the existing LVIS sensor design, the Facility LVIS instrument includes numerous improvements for reliability, resolution, real-time performance monitoring and science products, decreased operational costs, and improved data turnaround time and consistency. The development of this Facility instrument is proceeding well and it is scheduled to begin operations testing in mid-2016. A comprehensive description of the LVIS Facility capability will be presented along with several mission scenarios and science application examples. The sensor improvements include increased spatial resolution (footprints as small as 5 m), increased range precision (sub-cm single-shot range precision), expanded dynamic range, improved detector sensitivity, operational autonomy, real-time flight line tracking, and overall increased reliability and sensor calibration stability. The science customer mission planning and data product interface will be discussed. Science applications of the LVIS Facility include: cryosphere, terrestrial ecology and the carbon cycle, hydrology, solid earth and natural hazards, and biodiversity.
Bin recycling strategy for improving the histogram precision on GPU
NASA Astrophysics Data System (ADS)
Cárdenas-Montes, Miguel; Rodríguez-Vázquez, Juan José; Vega-Rodríguez, Miguel A.
2016-07-01
A histogram is an easily comprehensible way to present data and analyses. In the current scientific context, with access to large volumes of data, the processing time for building histograms has dramatically increased. For this reason, parallel construction is necessary to alleviate the impact of the processing time on analysis activities. In this scenario, GPU computing is becoming widely used to reduce the processing time of histogram construction to affordable levels. Alongside the increase in processing time, implementations are stressed on bin-count accuracy. Accuracy issues arising from the particularities of implementations are not usually taken into consideration when building histograms with very large data sets. In this work, a bin recycling strategy to create an accuracy-aware implementation for building histograms on GPU is presented. To evaluate the approach, this strategy was applied to the computation of the three-point angular correlation function, which is a relevant function in cosmology for the study of the large-scale structure of the Universe. As a consequence of the study, a high-accuracy implementation for histogram construction on GPU is proposed.
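One way to picture an accuracy-aware accumulation scheme is to flush narrow, fast per-bin counters into wide accumulators before any bin can saturate; the CPU sketch below is our analogy for that idea, not the paper's GPU implementation.

```python
# CPU-side illustration (assumption: analogous to GPU "bin recycling"):
# keep partial histograms in narrow counters for speed and flush
# ("recycle") them into 64-bit accumulators before overflow.
import numpy as np

rng = np.random.default_rng(3)
edges = np.linspace(0.0, 1.0, 65)          # 64 bins
total = np.zeros(64, dtype=np.uint64)      # wide, accurate accumulator
partial = np.zeros(64, dtype=np.uint16)    # narrow, fast working counters
FLUSH_AT = 60000                           # flush before uint16 overflow

for _ in range(200):                       # stream of data chunks
    chunk = rng.random(10000)
    counts, _ = np.histogram(chunk, bins=edges)
    if int(partial.max()) + int(counts.max()) >= FLUSH_AT:
        total += partial.astype(np.uint64)  # recycle bins into accumulator
        partial[:] = 0
    partial += counts.astype(np.uint16)

total += partial.astype(np.uint64)
print("events histogrammed:", int(total.sum()))
```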
Hengartner, M P; Heekeren, K; Dvorsky, D; Walitza, S; Rössler, W; Theodoridou, A
2017-09-01
The aim of this study was to critically examine the prognostic validity of various clinical high-risk (CHR) criteria alone and in combination with additional clinical characteristics. A total of 188 CHR positive persons from the region of Zurich, Switzerland (mean age 20.5 years; 60.2% male), meeting ultra high-risk (UHR) and/or basic symptoms (BS) criteria, were followed over three years. The test battery included the Structured Interview for Prodromal Syndromes (SIPS), verbal IQ and many other screening tools. Conversion to psychosis was defined according to ICD-10 criteria for schizophrenia (F20) or brief psychotic disorder (F23). Altogether n=24 persons developed manifest psychosis within three years and according to Kaplan-Meier survival analysis, the projected conversion rate was 17.5%. The predictive accuracy of UHR was statistically significant but poor (area under the curve [AUC]=0.65, P<.05), whereas BS did not predict psychosis beyond mere chance (AUC=0.52, P=.730). Sensitivity and specificity were 0.83 and 0.47 for UHR, and 0.96 and 0.09 for BS. UHR plus BS achieved an AUC=0.66, with sensitivity and specificity of 0.75 and 0.56. In comparison, baseline antipsychotic medication yielded a predictive accuracy of AUC=0.62 (sensitivity=0.42; specificity=0.82). A multivariable prediction model comprising continuous measures of positive symptoms and verbal IQ achieved a substantially improved prognostic accuracy (AUC=0.85; sensitivity=0.86; specificity=0.85; positive predictive value=0.54; negative predictive value=0.97). We showed that BS have no predictive accuracy beyond chance, while UHR criteria poorly predict conversion to psychosis. Combining BS with UHR criteria did not improve the predictive accuracy of UHR alone. In contrast, dimensional measures of both positive symptoms and verbal IQ showed excellent prognostic validity. A critical re-thinking of binary at-risk criteria is necessary in order to improve the prognosis of psychotic disorders. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
On the importance of cotranscriptional RNA structure formation
Lai, Daniel; Proctor, Jeff R.; Meyer, Irmtraud M.
2013-01-01
The expression of genes, both coding and noncoding, can be significantly influenced by RNA structural features of their corresponding transcripts. There is by now mounting experimental and some theoretical evidence that structure formation in vivo starts during transcription and that this cotranscriptional folding determines the functional RNA structural features that are being formed. Several decades of research in bioinformatics have resulted in a wide range of computational methods for predicting RNA secondary structures. Almost all state-of-the-art methods in terms of prediction accuracy, however, completely ignore the process of structure formation and focus exclusively on the final RNA structure. This review hopes to bridge this gap. We summarize the existing evidence for cotranscriptional folding and then review the different, currently used strategies for RNA secondary-structure prediction. Finally, we propose a range of ideas on how state-of-the-art methods could be potentially improved by explicitly capturing the process of cotranscriptional structure formation. PMID:24131802
New Improvements in Magnetic Measurements Laboratory of the ALBA Synchrotron Facility
NASA Astrophysics Data System (ADS)
Campmany, Josep; Marcos, Jordi; Massana, Valentí
The ALBA synchrotron facility has a complete insertion device (ID) laboratory to characterize and produce the magnetic devices needed to satisfy the requirements of ALBA's user community. The laboratory is equipped with a Hall-probe bench working in on-the-fly measurement mode, allowing the measurement of field maps of large magnetic structures with high accuracy in both magnetic field magnitude and position. The whole control system of this bench is based on TANGO. The Hall-probe calibration range extends from sub-gauss levels to 2 T with an accuracy of 100 ppm. Apart from the Hall-probe bench, the ID laboratory has a flipping-coil bench dedicated to measuring field integrals and a Helmholtz-coil bench specially designed to characterize permanent magnet blocks. Also, a fixed stretched-wire bench is used to measure field integrals of magnet sets; this device is specifically dedicated to ID construction. Finally, the laboratory is equipped with a rotating-coil bench, specially designed for measuring multipolar devices used in accelerators, such as quadrupoles, sextupoles, etc. Recent improvements to the magnetic measurements laboratory of the ALBA synchrotron include the design and manufacturing of very thin 3D Hall-probe heads, the design and manufacturing of coil sensors for the rotating-coil bench based on multilayered PCBs, and improved calibration methodology to increase the accuracy of the measurements. The ALBA magnetic measurements laboratory is open for external contracts and has been widely used by national and international institutes, such as CERN, ESRF and CIEMAT, as well as magnet manufacturing companies, such as ANTEC, TESLA and I3M. In this paper, we present the main features of the measurement benches as well as the improvements made so far.
Ruth, Veikko; Kolditz, Daniel; Steiding, Christian; Kalender, Willi A
2017-06-01
The performance of metal artifact reduction (MAR) methods in x-ray computed tomography (CT) suffers from incorrect identification of metallic implants in the artifact-affected volumetric images. The aim of this study was to investigate potential improvements of state-of-the-art MAR methods by using prior information on geometry and material of the implant. The influence of a novel prior knowledge-based segmentation (PS) compared with threshold-based segmentation (TS) on 2 MAR methods (linear interpolation [LI] and normalized-MAR [NORMAR]) was investigated. The segmentation is the initial step of both MAR methods. Prior knowledge-based segmentation uses 3-dimensional registered computer-aided design (CAD) data as prior knowledge to estimate the correct position and orientation of the metallic objects. Threshold-based segmentation uses an adaptive threshold to identify metal. Subsequently, for LI and NORMAR, the selected voxels are projected into the raw data domain to mark metal areas. Attenuation values in these areas are replaced by different interpolation schemes followed by a second reconstruction. Finally, the previously selected metal voxels are replaced by the metal voxels determined by PS or TS in the initial reconstruction. First, we investigated in an elaborate phantom study if the knowledge of the exact implant shape extracted from the CAD data provided by the manufacturer of the implant can improve the MAR result. Second, the leg of a human cadaver was scanned using a clinical CT system before and after the implantation of an artificial knee joint. The results were compared regarding segmentation accuracy, CT number accuracy, and the restoration of distorted structures. The use of PS improved the efficacy of LI and NORMAR compared with TS. Artifacts caused by insufficient segmentation were reduced, and additional information was made available within the projection data. The estimation of the implant shape was more exact and not dependent on a threshold value. Consequently, the visibility of structures was improved when comparing the new approach to the standard method. This was further confirmed by improved CT value accuracy and reduced image noise. The PS approach based on prior implant information provides image quality which is superior to TS-based MAR, especially when the shape of the metallic implant is complex. The new approach can be useful for improving MAR methods and dose calculations within radiation therapy based on the MAR corrected CT images.
Large-area settlement pattern recognition from Landsat-8 data
NASA Astrophysics Data System (ADS)
Wieland, Marc; Pittore, Massimiliano
2016-09-01
The study presents an image processing and analysis pipeline that combines object-based image analysis with a Support Vector Machine (SVM) to derive a multi-layered settlement product from Landsat-8 data over large areas. 43 image scenes are processed over large parts of Central Asia (Southern Kazakhstan, Kyrgyzstan, Tajikistan and Eastern Uzbekistan). The main tasks tackled by this work include built-up area identification, settlement type classification and recognition of urban structure types. Besides commonly used accuracy assessments of the resulting map products, thorough performance evaluations are carried out under varying conditions to tune algorithm parameters and assess their applicability for the given tasks. As part of this, several research questions are addressed. In particular, the influence of the improved spatial and spectral resolution of Landsat-8 on the SVM performance in identifying built-up areas and urban structure types is evaluated. The influence of an extended feature space including digital elevation model features is also tested for mountainous regions. Moreover, the spatial distribution of classification uncertainties is analyzed and compared to the heterogeneity of the building stock within the computational unit of the segments. The study concludes that the information content of Landsat-8 images is sufficient for the tested classification tasks, and that even detailed urban structures can be extracted with satisfying accuracy. Freely available ancillary settlement point location data could further improve the built-up area classification. Digital elevation features and pan-sharpening could, however, not significantly improve the classification results. The study highlights the importance of dynamically tuned classifier parameters and underlines the use of Shannon entropy computed from the soft answers of the SVM as a valid measure of the spatial distribution of classification uncertainties.
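The entropy measure mentioned in the conclusion is straightforward to reproduce. The sketch below, a minimal illustration on synthetic features rather than Landsat-8 segments, derives Shannon entropy from an SVM's class-membership probabilities as a per-segment uncertainty value.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins for per-segment features and settlement classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))
y = rng.integers(0, 3, size=200)

svm = SVC(probability=True).fit(X, y)  # soft answers via Platt scaling
proba = svm.predict_proba(X)

# Shannon entropy of the class probabilities: one uncertainty value per
# segment, mappable like any other raster layer.
entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
print(entropy[:5])
```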
Comparison of Three Information Sources for Smoking Information in Electronic Health Records
Wang, Liwei; Ruan, Xiaoyang; Yang, Ping; Liu, Hongfang
2016-01-01
OBJECTIVE The primary aim was to compare the independent and joint performance of retrieving smoking status through different sources, including narrative text processed by natural language processing (NLP), patient-provided information (PPI), and diagnosis codes (ie, International Classification of Diseases, Ninth Revision [ICD-9]). We also compared the performance of retrieving smoking strength information (ie, heavy/light smoker) from narrative text and PPI. MATERIALS AND METHODS Our study leveraged an existing lung cancer cohort for smoking status, amount, and strength information, which was manually chart-reviewed. On the NLP side, smoking-related electronic medical record (EMR) data were retrieved first. A pattern-based smoking information extraction module was then implemented to extract smoking-related information, after which heuristic rules were used to obtain smoking status-related information. Smoking information was also obtained from structured data sources based on diagnosis codes and PPI. Sensitivity, specificity, and accuracy were measured on patients with coverage (ie, the proportion of patients whose smoking status/strength could be effectively determined). RESULTS NLP alone had the best overall performance for smoking status extraction (patient coverage: 0.88; sensitivity: 0.97; specificity: 0.70; accuracy: 0.88); combining PPI with NLP further improved patient coverage to 0.96. ICD-9 provided no additional improvement over NLP and its combination with PPI. For smoking strength, combining NLP with PPI yielded a slight improvement over NLP alone. CONCLUSION These findings suggest that narrative text can serve as a more reliable and comprehensive source of smoking-related information than structured data sources. PPI, as readily available structured data, can be used as a complementary source for more comprehensive patient coverage. PMID:27980387
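As a flavor of what a pattern-based extraction module might look like, here is a deliberately simplified sketch. The regular expressions and status labels are hypothetical, not the module the authors describe; a production system would also handle negation, templates, and section context.

```python
import re

# Hypothetical patterns; a real module would be far more thorough.
PATTERNS = [
    (re.compile(r"\b(never smoked|non-?smoker)\b", re.I), "never"),
    (re.compile(r"\b(quit smoking|former smoker|ex-?smoker)\b", re.I), "former"),
    (re.compile(r"\b(current smoker|smokes|pack[- ]years?)\b", re.I), "current"),
]

def smoking_status(note: str) -> str:
    """Return the first matched smoking status for a clinical note;
    'unknown' means the patient is not covered by this source."""
    for pattern, status in PATTERNS:
        if pattern.search(note):
            return status
    return "unknown"

print(smoking_status("Former smoker, quit smoking in 2005."))  # former
```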
Harold S.J. Zald; Janet L. Ohmann; Heather M. Roberts; Matthew J. Gregory; Emilie B. Henderson; Robert J. McGaughey; Justin Braaten
2014-01-01
This study investigated how lidar-derived vegetation indices, disturbance history from Landsat time series (LTS) imagery, plot location accuracy, and plot size influenced accuracy of statistical spatial models (nearest-neighbor imputation maps) of forest vegetation composition and structure. Nearest-neighbor (NN) imputation maps were developed for 539,000 ha in the...
Scott, Gregory G; Margulies, Susan S; Coats, Brittany
2016-10-01
Traumatic brain injury (TBI) is a leading cause of death and disability in the USA. To help understand and better predict TBI, researchers have developed complex finite element (FE) models of the head which incorporate many biological structures such as scalp, skull, meninges, brain (with gray/white matter differentiation), and vasculature. However, most models drastically simplify the membranes and substructures between the pia and arachnoid membranes. We hypothesize that substructures in the pia-arachnoid complex (PAC) contribute substantially to brain deformation following head rotation, and that including them in FE models improves the accuracy of extra-axial hemorrhage prediction. To test these hypotheses, microscale FE models of the PAC were developed to span the variability of PAC substructure anatomy and regional density. The constitutive response of these models was then integrated into an existing macroscale FE model of the immature piglet brain to identify changes in cortical stress distribution and predictions of extra-axial hemorrhage (EAH). Incorporating regional variability of PAC substructures substantially altered the distribution of principal stress on the cortical surface of the brain compared to a uniform representation of the PAC. Simulations of 24 non-impact rapid head rotations in an immature piglet animal model resulted in improved accuracy of EAH prediction (to 94% sensitivity, 100% specificity), as well as high accuracy in regional hemorrhage prediction (82-100% sensitivity, 100% specificity). We conclude that including biofidelic PAC substructure variability in FE models of the head is essential for improved predictions of hemorrhage at the brain/skull interface.
NASA Astrophysics Data System (ADS)
Duraipandian, Shiyamala; Zheng, Wei; Ng, Joseph; Low, Jeffrey J. H.; Ilancheran, A.; Huang, Zhiwei
2012-03-01
Raman spectroscopy is a unique analytical probe of molecular vibrations and is capable of providing specific spectroscopic fingerprints of the molecular compositions and structures of biological tissues. The aim of this study was to improve the classification accuracy of cervical precancer by characterizing the variations in normal high-wavenumber (HW, 2800-3700 cm-1) Raman spectra arising from the menopausal status of the cervix. A rapid-acquisition near-infrared (NIR) Raman spectroscopic system was used for in vivo tissue Raman measurements at 785 nm excitation. Individual HW Raman spectra were measured with a 5 s exposure time from both normal and precancer tissue sites of the 15 patients recruited. The acquired Raman spectra were stratified by the menopausal status of the cervix before data analysis. Significant differences were observed in the Raman intensities of the prominent band at 2924 cm-1 (CH3 stretching of proteins) and the broad water Raman band (in the 3100-3700 cm-1 range) peaking at 3390 cm-1 between normal and dysplastic cervical tissue sites. A multivariate diagnostic decision algorithm based on principal component analysis (PCA) and linear discriminant analysis (LDA) was utilized to successfully differentiate the normal and precancer cervical tissue sites. By considering the variations in the Raman spectra of the normal cervix due to the hormonal or menopausal status of women, the diagnostic accuracy was improved from 71% to 91%. By incorporating these variations prior to tissue classification, we can significantly improve the accuracy of cervical precancer detection using HW Raman spectroscopy.
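The PCA-LDA decision algorithm described here maps naturally onto standard tooling. The sketch below runs on random placeholder "spectra" rather than real HW Raman measurements, purely to show the pipeline shape.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data: rows are HW Raman spectra; 0 = normal, 1 = dysplasia.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(60, 300))
labels = rng.integers(0, 2, size=60)

# PCA compresses each spectrum into a few components; LDA separates classes.
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
scores = cross_val_score(model, spectra, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 on pure noise
```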
ECOD: new developments in the evolutionary classification of domains
Schaeffer, R. Dustin; Liao, Yuxing; Cheng, Hua; Grishin, Nick V.
2017-01-01
Evolutionary Classification Of protein Domains (ECOD) (http://prodata.swmed.edu/ecod) comprehensively classifies proteins with known spatial structures maintained by the Protein Data Bank (PDB) into evolutionary groups of protein domains. ECOD relies on a combination of automatic and manual weekly updates to achieve its high accuracy and coverage with a short update cycle. ECOD classifies the approximately 120 000 depositions of the PDB into more than 500 000 domains in ∼3400 homologous groups. We show the performance of the weekly update pipeline since the release of ECOD, describe improvements to the ECOD website and available search options, and discuss novel structures and homologous groups that have been classified in the recent updates. Finally, we discuss the future directions of ECOD and further improvements planned for the hierarchy and update process. PMID:27899594
Phase contrast portal imaging using synchrotron radiation
NASA Astrophysics Data System (ADS)
Umetani, K.; Kondoh, T.
2014-07-01
Microbeam radiation therapy is an experimental form of radiation treatment with great potential to improve the treatment of many types of cancer. We applied a synchrotron radiation phase contrast technique to portal imaging to improve targeting accuracy for microbeam radiation therapy in experiments using small animals. An X-ray imaging detector was installed 6.0 m downstream from an object to produce a high-contrast edge enhancement effect in propagation-based phase contrast imaging. Images of a mouse head sample were obtained using therapeutic white synchrotron radiation with a mean beam energy of 130 keV. Compared to conventional portal images, remarkably clear images of bones surrounding the cerebrum were acquired in an air environment for positioning brain lesions with respect to the skull structure without confusion with overlapping surface structures.
Overhead throwing injuries of the shoulder and elbow.
Anderson, Mark W; Alford, Bennett A
2010-11-01
Injuries to the shoulder and elbow are common in athletes involved in sporting activities that require overhead motion of the arm. An understanding of the forces involved in the throwing motion, the anatomic structures most at risk, and the magnetic resonance imaging appearances of the most common associated injuries can help to improve diagnostic accuracy when interpreting imaging studies in these patients. Copyright © 2010 Elsevier Inc. All rights reserved.
Advancing density functional theory to finite temperatures: methods and applications in steel design
NASA Astrophysics Data System (ADS)
Hickel, T.; Grabowski, B.; Körmann, F.; Neugebauer, J.
2012-02-01
The performance of materials such as steels, their high strength and formability, is based on an impressive variety of competing mechanisms on the microscopic/atomic scale (e.g. dislocation gliding, solid solution hardening, mechanical twinning or structural phase transformations). Whereas many of the currently available concepts to describe these mechanisms are based on empirical and experimental data, it becomes more and more apparent that further improvement of materials needs to be based on a more fundamental level. Recent progress for methods based on density functional theory (DFT) now makes the exploration of chemical trends, the determination of parameters for phenomenological models and the identification of new routes for the optimization of steel properties feasible. A major challenge in applying these methods to a true materials design is, however, the inclusion of temperature-driven effects on the desired properties. Therefore, a large range of computational tools has been developed in order to improve the capability and accuracy of first-principles methods in determining free energies. These combine electronic, vibrational and magnetic effects as well as structural defects in an integrated approach. Based on these simulation tools, one is now able to successfully predict mechanical and thermodynamic properties of metals with a hitherto not achievable accuracy.
Shioya, Nobutaka; Shimoaka, Takafumi; Murdey, Richard; Hasegawa, Takeshi
2017-06-01
Infrared (IR) p-polarized multiple-angle incidence resolution spectrometry (pMAIRS) is a powerful tool for analyzing molecular orientation in an organic thin film. In particular, pMAIRS works well for thin films with highly rough surfaces, irrespective of the degree of crystallinity. Recently, the optimal experimental conditions have been comprehensively established, which has largely improved the accuracy of the analytical results. Nevertheless, some unresolved matters remain. A structurally isotropic sample, for example, yields different peak intensities in the in-plane and out-of-plane spectra. In the present study, this effect is shown to be due to the refractive index of the sample film, and a correction factor has been developed using rigorous theoretical methods. As a result, with the use of the correction factor, organic materials having atypical refractive indices, such as perfluoroalkyl compounds (n = 1.35) and fullerene (n = 1.83), can be analyzed with an accuracy comparable to that for a compound having a normal refractive index of approximately 1.55. With this improved technique, we are also ready to discriminate an isotropic structure from an oriented sample at the magic angle of 54.7°.
NASA Astrophysics Data System (ADS)
Flügge, Jens; Köning, Rainer; Schötka, Eugen; Weichert, Christoph; Köchert, Paul; Bosse, Harald; Kunzmann, Horst
2014-12-01
The paper describes recent improvements of Physikalisch-Technische Bundesanstalt's (PTB) reference measuring instrument for length graduations, the so-called nanometer comparator, intended to achieve a measurement uncertainty of the order of 1 nm for lengths up to 300 mm. The improvements are based on the design and realization of a new sample carriage, integrated into the existing structure, and the optimization of the coupling of this new device to the vacuum interferometer, which provides a length measuring range of approximately 540 mm with sub-nm resolution. First measurement results of the enhanced nanometer comparator are presented and discussed; they show the improved measuring capabilities and verify the step toward the sub-nm accuracy level.
Accuracy analysis of point cloud modeling for evaluating concrete specimens
NASA Astrophysics Data System (ADS)
D'Amico, Nicolas; Yu, Tzuyang
2017-04-01
Photogrammetric methods such as structure from motion (SFM) can acquire accurate information about geometric features, surface cracks, and mechanical properties of specimens and structures in civil engineering. Conventional approaches to verifying the accuracy of photogrammetric models usually require other optical techniques such as LiDAR. In this paper, the geometric accuracy of photogrammetric modeling is investigated by studying the effects of the number of photos, the radius of curvature, and the point cloud density (PCD) on estimated lengths, areas, volumes, and different stress states of concrete cylinders and panels. Four plain concrete cylinders and two plain mortar panels were used for the study. A commercially available mobile phone camera was used to collect all photographs, and Agisoft PhotoScan software was applied for photogrammetric modeling of all concrete specimens. From our results, it was found that increasing the number of photos does not necessarily improve the geometric accuracy of point cloud models (PCM). It was also found that the effect of radius of curvature is not significant compared with those of the number of photos and PCD. A PCD threshold of 15.7194 pts/cm3 is proposed for constructing reliable and accurate PCMs for condition assessment; at this threshold, all errors in estimating lengths, areas, and volumes were less than 5%. Finally, from the study of the mechanical properties of a plain concrete cylinder, we found that an increase in stress level inside the cylinder can be captured by the increase of radial strain in its PCM.
NASA Astrophysics Data System (ADS)
Guo, Pengbin; Sun, Jian; Hu, Shuling; Xue, Ju
2018-02-01
Pulsar navigation is a promising navigation method for high-altitude orbit space tasks or deep space exploration. At present, a major factor restricting the development of pulsar navigation is that navigation accuracy is limited by the slow update of the measurements. In order to improve the accuracy of pulsar navigation, an asynchronous observation model that can improve the update rate of the measurements is proposed on the basis of a satellite constellation, an approach with broad scope for development because of its visibility and reliability. The simulation results show that the asynchronous observation model improves the positioning accuracy by 31.48% and the velocity accuracy by 24.75% relative to the synchronous observation model. With the new Doppler effect compensation method in the asynchronous observation model proposed in this paper, the positioning accuracy is improved by 32.27% and the velocity accuracy by 34.07% relative to the traditional method. The simulations also show that neglecting the clock error results in filter divergence.
Quantitative mouse brain phenotyping based on single and multispectral MR protocols
Badea, Alexandra; Gewalt, Sally; Avants, Brian B.; Cook, James J.; Johnson, G. Allan
2013-01-01
Sophisticated image analysis methods have been developed for the human brain, but such tools still need to be adapted and optimized for quantitative small animal imaging. We propose a framework for quantitative anatomical phenotyping in mouse models of neurological and psychiatric conditions. The framework encompasses an atlas space, image acquisition protocols, and software tools to register images into this space. We show that a suite of segmentation tools (Avants, Epstein et al., 2008) designed for human neuroimaging can be incorporated into a pipeline for segmenting mouse brain images acquired with multispectral magnetic resonance imaging (MR) protocols. We present a flexible approach for segmenting such hyperimages, optimizing registration, and identifying optimal combinations of image channels for particular structures. Brain imaging with T1, T2* and T2 contrasts yielded accuracy in the range of 83% for the hippocampus and caudate putamen (Hc and CPu), but only 54% in white matter tracts and 44% for the ventricles. The addition of diffusion tensor parameter images improved accuracy for large gray matter structures (by >5%), white matter (10%), and ventricles (15%). The use of Markov random field segmentation further improved overall accuracy in the C57BL/6 strain by 6%, so that Dice coefficients reached 93% for Hc and CPu, 79% for white matter, 68% for ventricles, and 80% for substantia nigra. We demonstrate the segmentation pipeline for the widely used C57BL/6 strain and two test strains (BXD29, APP/TTA). This approach appears promising for characterizing temporal changes in mouse models of human neurological and psychiatric conditions, and may provide anatomical constraints for other preclinical imaging, e.g. fMRI and molecular imaging. This is the first demonstration that multiple MR imaging modalities combined with multivariate segmentation methods lead to significant improvements in anatomical segmentation in the mouse brain. PMID:22836174
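The Dice coefficients quoted above are a standard overlap measure; a minimal computation, on toy volumes rather than the study's MR label maps, looks like this:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary label volumes: 2|A∩B|/(|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy volumes standing in for automated vs. reference hippocampus labels.
auto = np.zeros((10, 10, 10), bool); auto[2:7, 2:7, 2:7] = True
ref = np.zeros((10, 10, 10), bool);  ref[3:8, 3:8, 3:8] = True
print(f"Dice = {dice(auto, ref):.2f}")
```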
NASA Astrophysics Data System (ADS)
Fukuda, Y.; Nogi, Y.; Matsuzaki, K.
2012-12-01
Syowa is the Japanese Antarctic wintering station in Lützow-Holm Bay, East Antarctica. The area around the station is considered key for investigating the formation of Gondwana, because reconstruction models suggest that a junction of the continents is located in the area. It is also important from a glaciological point of view, because the Shirase Glacier, one of the major glaciers in Antarctica, is located near the station. The Japanese Antarctic Research Expedition (JARE) has therefore been conducting in-situ gravity measurements in the area for a long period. The accumulated data sets comprise land gravity data since 1967, surface ship data since 1985, and airborne gravity data from 2006. However, these in-situ gravity data suffer from instrumental drift and a lack of reference points, so their accuracy decreases at wavelengths longer than several tens of kilometers. In Antarctica in particular, where very few gravity reference points are available, the long-wavelength accuracy and/or consistency among the data sets is quite limited. The GOCE (Gravity field and steady-state Ocean Circulation Explorer) satellite, launched in March 2009 by ESA (European Space Agency), aims at improving static gravity fields, in particular at short wavelengths. In addition to its low-altitude orbit (250 km), its sensitive gravity gradiometer is expected to reveal 1 mgal gravity anomalies at a spatial resolution of 100 km (half wavelength). Indeed, recently released GOCE EGMs (Earth Gravity Models) have improved the accuracy of the static gravity field tremendously. These EGMs are expected to serve as the long-wavelength references for the in-situ gravity data. Thus, we first aim at determining an improved gravity field around Syowa by combining the JARE gravity data with the recent EGMs. Then, using the gravity anomalies, we determine the subsurface density structures. We also evaluate the impacts of the EGMs on estimating the density structures.
Non-Rigid Structure Estimation in Trajectory Space from Monocular Vision
Wang, Yaming; Tong, Lingling; Jiang, Mingfeng; Zheng, Junbao
2015-01-01
In this paper, the problem of non-rigid structure estimation in trajectory space from monocular vision is investigated. As in the Point Trajectory Approach (PTA), the structure matrix is calculated with a factorization method, based on characteristic point trajectories described by a predefined Discrete Cosine Transform (DCT) basis. To further optimize the non-rigid structure estimation, a rank minimization problem on the structure matrix is formulated by introducing a basic low-rank condition. Moreover, an Accelerated Proximal Gradient (APG) algorithm is proposed to solve the rank minimization problem, optimizing the initial structure matrix calculated by the PTA method. The APG algorithm converges quickly to efficient solutions and noticeably reduces the reconstruction error. The reconstruction results on real image sequences indicate that the proposed approach runs reliably and effectively improves the accuracy of non-rigid structure estimation from monocular vision. PMID:26473863
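The core proximal step behind APG-style rank minimization is singular value soft-thresholding. The sketch below shows just that building block on a synthetic noisy low-rank matrix; it is not the paper's full trajectory-space solver.

```python
import numpy as np

def svt(matrix: np.ndarray, tau: float) -> np.ndarray:
    """Singular value soft-thresholding: the proximal operator of the
    nuclear norm used inside APG-type rank-minimization iterations."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt  # shrink singular values

# A noisy rank-2 "structure matrix": SVT recovers a low-rank estimate.
rng = np.random.default_rng(3)
low_rank = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 40))
noisy = low_rank + 0.05 * rng.normal(size=(30, 40))
print(np.linalg.matrix_rank(svt(noisy, tau=1.0), tol=1e-6))  # -> 2
```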
Neumann, Marcus A.
2017-01-01
Motional averaging has been proven to be significant in predicting the chemical shifts in ab initio solid-state NMR calculations, and the applicability of motional averaging with molecular dynamics has been shown to depend on the accuracy of the molecular mechanical force field. The performance of a fully automatically generated tailor-made force field (TMFF) for the dynamic aspects of NMR crystallography is evaluated and compared with existing benchmarks, including static dispersion-corrected density functional theory calculations and the COMPASS force field. The crystal structure of free base cocaine is used as an example. The results reveal that, even though the TMFF outperforms the COMPASS force field in representing the energies and conformations of predicted structures, it does not give significant improvement in the accuracy of the NMR calculations. Further studies should direct more attention to anisotropic chemical shifts and to the development of solid-state NMR calculation methods. PMID:28250956
Efficient Wide Baseline Structure from Motion
NASA Astrophysics Data System (ADS)
Michelini, Mario; Mayer, Helmut
2016-06-01
This paper presents a Structure from Motion approach for complex unorganized image sets. To achieve high accuracy and robustness, image triplets are employed and (an approximate) camera calibration is assumed to be known. The focus lies on a complete linking of images even in the case of large image distortions, e.g., caused by wide baselines, as well as weak baselines. A method for embedding image descriptors into Hamming space is proposed for fast image similarity ranking. The latter is employed to limit the number of pairs to be matched by a wide baseline method. An iterative graph-based approach is proposed that formulates image linking as the search for a terminal Steiner minimum tree in a line graph. Finally, additional links are determined and employed to improve the accuracy of the pose estimation. By this means, loops in long image sequences are implicitly closed. The potential of the proposed approach is demonstrated by results for several complex image sets, also in comparison with VisualSFM.
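One generic way to embed descriptors into Hamming space is sign-quantized random projection. The sketch below uses that textbook scheme for similarity ranking; it is not claimed to be the paper's specific embedding.

```python
import numpy as np

rng = np.random.default_rng(7)

def to_hamming(descriptors: np.ndarray, planes: np.ndarray) -> np.ndarray:
    """Binarize descriptors by the sign of projections onto random
    hyperplanes, yielding compact codes comparable by Hamming distance."""
    return (descriptors @ planes.T > 0).astype(np.uint8)

descs = rng.normal(size=(100, 128))   # one global descriptor per image
planes = rng.normal(size=(64, 128))   # 64-bit codes
codes = to_hamming(descs, planes)

query = codes[0]
dist = (codes != query).sum(axis=1)   # Hamming distance to every image
print(np.argsort(dist)[:5])           # most similar images first
```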
Assessment of existing Sierra/Fuego capabilities related to grid-to-rod-fretting (GTRF).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, Daniel Zack; Rodriguez, Salvador B.
2011-06-01
The following report presents an assessment of existing capabilities in Sierra/Fuego applied to modeling several aspects of grid-to-rod-fretting (GTRF) including: fluid dynamics, heat transfer, and fluid-structure interaction. We compare the results of a number of Fuego simulations with relevant sources in the literature to evaluate the accuracy, efficiency, and robustness of using Fuego to model the aforementioned aspects. Comparisons between flow domains that include the full fuel rod length vs. a subsection of the domain near the spacer show that tremendous efficiency gains can be obtained by truncating the domain without loss of accuracy. Thermal analysis reveals the extent to which heat transfer from the fuel rods to the coolant is improved by the swirling flow created by the mixing vanes. Lastly, coupled fluid-structure interaction analysis shows that the vibrational modes of the fuel rods filter out high frequency turbulent pressure fluctuations. In general, these results allude to interesting phenomena for which further investigation could be quite fruitful.
Dimensional measurement of micro parts with high aspect ratio in HIT-UOI
NASA Astrophysics Data System (ADS)
Dang, Hong; Cui, Jiwen; Feng, Kunpeng; Li, Junying; Zhao, Shiyuan; Zhang, Haoran; Tan, Jiubin
2016-11-01
Micro parts with high aspect ratios have been widely used in different fields, including the aerospace and defense industries, while the dimensional measurement of these micro parts remains a challenge in the field of precision measurement and instrumentation. To address this challenge, several probes for the precision measurement of micro parts have been proposed by researchers at the Center of Ultra-precision Optoelectronic Instrument (UOI), Harbin Institute of Technology (HIT). In this paper, optical fiber probes with structures based on spherical coupling (SC) with double optical fibers, micro focal-length collimation (MFL-collimation) and fiber Bragg gratings (FBG) are described in detail. After introducing the sensing principles, the advantages and disadvantages of these probes are analyzed. Several approaches are proposed to improve the performance of these probes: a two-dimensional orthogonal path arrangement is proposed to enhance the dimensional measurement ability of MFL-collimation probes, while a high-resolution, fast-response interrogation method based on a differential scheme is used to improve the accuracy and dynamic characteristics of the FBG probes. Experiments on these specially structured fiber probes are presented with a focus on their characteristics, and engineering applications are described to demonstrate their practical applicability. Several integration techniques are used to improve the accuracy and real-time performance of these engineering applications. The effectiveness of the fiber probes was thereby verified through both analysis and experiments.
Park, Ji Eun; Park, Bumwoo; Kim, Sang Joon; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Chai; Oh, Joo Young; Lee, Jae-Hong; Roh, Jee Hoon; Shim, Woo Hyun
2017-01-01
To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity, and to validate this model's diagnostic accuracy in a validation set. Data from 98 subjects were retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network were extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions was more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Combining functional information with the CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease.
A fast RCS accuracy assessment method for passive radar calibrators
NASA Astrophysics Data System (ADS)
Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI
2016-10-01
In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is usually deformed during transportation and installation, or deformed by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation is proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm is selected and its simulation accuracy assessed. In the second step, the 3-D measuring instrument is selected and its measuring accuracy evaluated. Once the accuracies of the selected RCS simulation algorithm and 3-D measuring instrument are sufficient for the RCS accuracy assessment, the 3-D structure of the corner reflector is obtained by the 3-D measuring instrument, and the RCSs of the measured 3-D structure and the corresponding ideal structure are calculated with the selected RCS simulation algorithm. The final RCS accuracy is the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can easily be applied outdoors, avoiding the correlation among plate edge length error, plate orthogonality error, and plate curvature error; its accuracy is also higher than that of methods using a distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.
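In code form, the final accuracy figure is a one-line comparison once a simulation back end exists. In the sketch below, `simulate_rcs` is a placeholder for whichever validated RCS algorithm is selected, and the geometries and values are illustrative only.

```python
def rcs_accuracy(measured_geometry, ideal_geometry, simulate_rcs) -> float:
    """RCS accuracy as defined above: the absolute difference between the
    RCS simulated for the measured (deformed) 3-D structure and for the
    ideal corner-reflector structure."""
    return abs(simulate_rcs(measured_geometry) - simulate_rcs(ideal_geometry))

# Illustrative stand-in solver returning RCS values in dBsm.
fake_solver = {"ideal": 31.0, "deformed": 30.4}.__getitem__
print(f"{rcs_accuracy('deformed', 'ideal', fake_solver):.1f} dB")  # 0.6 dB
```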
Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth
2010-01-01
Background: A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods: This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results: Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions: Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
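The correction itself is a simple offset applied before calibration. The sketch below shows the arithmetic with the 4 nA figure from the abstract; the sensitivity value and readings are invented for illustration.

```python
def glucose_estimate(i_sensor_na: float, sensitivity_na_per_mgdl: float,
                     i_background_na: float = 4.0) -> float:
    """Convert sensor current to glucose after subtracting an assumed
    fixed background current (4 nA, the correction reported above)."""
    return (i_sensor_na - i_background_na) / sensitivity_na_per_mgdl

# One-point calibration: a reference reading of 100 mg/dl at 24 nA gives
# a sensitivity of (24 - 4) / 100 = 0.2 nA per mg/dl.
sensitivity = (24.0 - 4.0) / 100.0
# With the correction the estimate is 50 mg/dl; assuming zero background
# would instead calibrate at 0.24 nA/(mg/dl) and report ~58 mg/dl.
print(glucose_estimate(14.0, sensitivity))
```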
Improving critical thinking and clinical reasoning with a continuing education course.
Cruz, Dina Monteiro; Pimenta, Cibele Mattos; Lunney, Margaret
2009-03-01
Continuing education courses related to critical thinking and clinical reasoning are needed to improve the accuracy of diagnosis. This study evaluated a 4-day, 16-hour continuing education course conducted in Brazil. Thirty-nine nurses completed a pretest and a posttest consisting of two written case studies designed to measure the accuracy of nurses' diagnoses. There were significant differences in accuracy from pretest to posttest for case 1 (p = .008), case 2 (p = .042), and overall (p = .001). Continuing education courses should be implemented to improve the accuracy of nurses' diagnoses.
Schneider, Sebastian; Provasi, Davide; Filizola, Marta
2015-01-01
Major advances in G Protein-Coupled Receptor (GPCR) structural biology over the past few years have yielded a significant number of high-resolution crystal structures for several different receptor subtypes. This dramatic increase in GPCR structural information has underscored the use of automated docking algorithms for the discovery of novel ligands that can eventually be developed into improved therapeutics. However, these algorithms are often unable to discriminate between different, yet energetically similar, poses because of their relatively simple scoring functions. Here, we describe a metadynamics-based approach to study the dynamic process of ligand binding to/unbinding from GPCRs with a higher level of accuracy and yet satisfactory efficiency. PMID:26260607
Position calibration of a 3-DOF hand-controller with hybrid structure
NASA Astrophysics Data System (ADS)
Zhu, Chengcheng; Song, Aiguo
2017-09-01
A hand-controller is a human-robot interactive device which measures the 3-DOF (degree of freedom) position of the human hand and sends it as a command to control robot movement. The device also receives 3-DOF force feedback from the robot and applies it to the human hand. Thus, the precision of 3-DOF position measurement is a key performance factor for hand-controllers. However, when using a hybrid-type 3-DOF hand-controller, various errors occur, which are considered to originate from machining and assembly variations within the device. This paper presents a calibration method to improve the position tracking accuracy of hybrid-type hand-controllers by determining the actual size of the hand-controller parts. By re-measuring and re-calibrating this kind of hand-controller, the actual sizes of the key parts that cause errors are determined. Modifying the formula parameters with the actual sizes obtained in the calibration process improves the end-position tracking accuracy of the device.
An efficient semi-supervised community detection framework in social networks.
Li, Zhen; Gong, Yong; Pan, Zhisong; Hu, Guyu
2017-01-01
Community detection is an important task across a number of research fields, including social science, biology, and physics. In the real world, topology information alone is often inadequate to accurately recover community structure because of its sparsity and noise. Potentially useful prior information, such as pairwise must-link and cannot-link constraints, can be obtained from domain knowledge in many applications. Combining network topology with such prior information to improve community detection accuracy is therefore promising. Previous methods mainly utilize the must-link constraints while failing to make full use of the cannot-link constraints. In this paper, we propose a semi-supervised community detection framework which can effectively incorporate both types of pairwise constraints into the detection process. In particular, must-link and cannot-link constraints are represented as positive and negative links, and we encode them by adding different graph regularization terms that penalize the closeness of the constrained nodes. Experiments on multiple real-world datasets show that the proposed framework significantly improves the accuracy of community detection.
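A generic way to realize the positive/negative link idea is to adjust edge weights before running any standard detector. The sketch below shows that encoding on a toy adjacency matrix, in the spirit of the framework rather than its exact regularization terms.

```python
import numpy as np

def constrained_adjacency(adj, must_link, cannot_link, w_ml=1.0, w_cl=1.0):
    """Boost edges for must-link pairs (positive links) and penalize
    cannot-link pairs (negative links) in a symmetric adjacency matrix."""
    out = np.asarray(adj, dtype=float).copy()
    for i, j in must_link:
        out[i, j] = out[j, i] = out[i, j] + w_ml
    for i, j in cannot_link:
        out[i, j] = out[j, i] = out[i, j] - w_cl
    return out

adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
print(constrained_adjacency(adj, must_link=[(0, 1)], cannot_link=[(1, 2)]))
```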
CFRP composite mirrors for space telescopes and their micro-dimensional stability
NASA Astrophysics Data System (ADS)
Utsunomiya, Shin; Kamiya, Tomohiro; Shimizu, Ryuzo
2010-07-01
Ultra-lightweight, high-accuracy CFRP (carbon fiber reinforced plastic) mirrors for space telescopes were fabricated to demonstrate their feasibility for optical-wavelength applications. The CTE (coefficient of thermal expansion) of the all-CFRP sandwich panels was tailored to be smaller than 1×10-7/K. The surface accuracy of mirrors 150 mm in diameter was 1.8 μm RMS as fabricated, and the surface smoothness was improved to 20 nm RMS by using a replica technique. Moisture expansion was considered the largest source of unpredictable surface figure errors; it caused not only homologous shape change but also out-of-plane distortion, especially in unsymmetrical layups. Dimensional stability with respect to moisture expansion was compared with a structural mathematical model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reid, M. J.; Brunthaler, A.; Menten, K. M.
The BeSSeL Survey is mapping the spiral structure of the Milky Way by measuring trigonometric parallaxes of hundreds of maser sources associated with high-mass star formation. While parallax techniques for water masers at high frequency (22 GHz) have been well documented, recent observations of methanol masers at lower frequency (6.7 GHz) have revealed astrometric issues associated with signal propagation through the ionosphere that could significantly limit parallax accuracy. These problems appeared as a “parallax gradient” on the sky when measured against different background quasars. We present an analysis method in which we generate position data relative to an “artificial quasar” at the target maser position at each epoch. Fitting parallax to these data can significantly mitigate the problems and improve parallax accuracy.
Problem of unity of measurements in ensuring safety of hydraulic structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kheifits, V.Z.; Markov, A.I.; Braitsev, V.V.
1994-07-01
Ensuring the safety of hydraulic structures (HSs) is not only an industry concern but also a national and global one, since failure of large water-impounding structures can entail large losses of life and enormous material losses related to destruction downstream. The main information on the degree of safety of a structure is obtained by comparing information about its actual state, obtained on the basis of measurements in key zones of the structure, with the state predicted by the design model used when designing the structure for given conditions of external actions. Numerous string-type transducers, from hundreds to thousands, are placed in large HSs. This system of transducers monitors the stress-strain state, seepage, and thermal regimes. These measurements are supported by the State Standards Committee, which certifies the accuracy of the checking methods. To improve the instrumental monitoring of HSs, the authors recommend: calibration of methods and means of reliable diagnosis for each measuring channel in the HS, improvements to reduce measurement error, support for the system software programs, and development of appropriate standards for the design and examination of HSs.
CONFOLD2: improved contact-driven ab initio protein structure modeling.
Adhikari, Badri; Cheng, Jianlin
2018-01-25
Contact-guided protein structure prediction methods are becoming more and more successful because of the latest advances in residue-residue contact prediction. To support contact-driven structure prediction, effective tools that can quickly build tertiary structural models of good quality from predicted contacts need to be developed. We develop an improved contact-driven protein modelling method, CONFOLD2, and study how it may be effectively used for ab initio protein structure prediction with predicted contacts as input. It builds models using various subsets of the input contacts to explore the fold space under the guidance of a soft square energy function, and then clusters the models to obtain the top five models. CONFOLD2 obtains an average reconstruction accuracy of 0.57 TM-score for the 150 proteins in the PSICOV contact prediction dataset. When benchmarked on the CASP11 contacts predicted using CONSIP2 and the CASP12 contacts predicted using Raptor-X, CONFOLD2 achieves a mean TM-score of 0.41 on both datasets. CONFOLD2 allows the top five structural models for a protein sequence to be generated quickly when its secondary structure and contact predictions are at hand. The source code of CONFOLD2 is publicly available at https://github.com/multicom-toolbox/CONFOLD2/ .
NASA Astrophysics Data System (ADS)
Huang, Xiaokun; Zhang, You; Wang, Jing
2018-02-01
Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture target motion trajectories, reduce motion artifacts, and reduce imaging dose and time. However, the limited number of projections in each phase after phase-sorting decreases CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details, whose quality is degraded by the insufficient number of projections, which consequently degrades the reconstructed image quality in the corresponding regions. In this study, we developed a new 4D-CBCT reconstruction algorithm by introducing biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with small, fine structures. The biomechanical modeling uses tetrahedral meshes to model organs of interest and solves internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of the solved motion in the fine-structure regions of the organ. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons between SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model's accuracy in regions containing small, fine details, which consequently enhances the accuracy and quality of the reconstructed 4D-CBCT images.
Predicting Gene Structures from Multiple RT-PCR Tests
NASA Astrophysics Data System (ADS)
Kováč, Jakub; Vinař, Tomáš; Brejová, Broňa
It has been demonstrated that the use of additional information such as ESTs and protein homology can significantly improve accuracy of gene prediction. However, many sources of external information are still being omitted from consideration. Here, we investigate the use of product lengths from RT-PCR experiments in gene finding. We present hardness results and practical algorithms for several variants of the problem and apply our methods to a real RT-PCR data set in the Drosophila genome. We conclude that the use of RT-PCR data can improve the sensitivity of gene prediction and locate novel splicing variants.
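The basic consistency check behind such tests is easy to state: map the primer pair through a candidate exon chain and compare the implied product length with the measured one. The sketch below is a minimal illustration with invented coordinates, not the algorithms of the paper.

```python
def product_length(exons, fwd, rev):
    """RT-PCR product length implied by a candidate exon chain (half-open
    genomic intervals) for primers at genomic positions fwd and rev."""
    return sum(max(0, min(end, rev) - max(start, fwd)) for start, end in exons)

# A two-exon candidate; the primers flank the intron.
exons = [(100, 200), (300, 380)]
observed = 150                                       # measured product length
predicted = product_length(exons, fwd=130, rev=350)  # 70 + 50 = 120
print(predicted, "consistent" if predicted == observed else "inconsistent")
```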
[Research progress of three-dimensional digital model for repair and reconstruction of knee joint].
Tong, Lu; Li, Yanlin; Hu, Meng
2013-01-01
To review recent advances in the application and research of three-dimensional digital knee models, recent original articles on three-dimensional digital knee models were extensively reviewed and analyzed. The digital three-dimensional knee model can simulate the complex anatomical structure of the knee very well. On this basis, new software and techniques have been developed, and good clinical results have been achieved. With the development of computer techniques and software, knee repair and reconstruction procedures have been improved; operations will become simpler and their accuracy will be further improved.
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures
NASA Technical Reports Server (NTRS)
Moore, Ashley
2005-01-01
The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target from camera images; videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for the measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using PhotoModeler software. The accuracy of the PhotoModeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect system accuracy, to help design a setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.
GRID: a high-resolution protein structure refinement algorithm.
Chitsaz, Mohsen; Mayo, Stephen L
2013-03-05
The energy-based refinement of protein structures generated by fold prediction algorithms to atomic-level accuracy remains a major challenge in structural biology. Energy-based refinement is mainly dependent on two components: (1) sufficiently accurate force fields, and (2) efficient conformational space search algorithms. Focusing on the latter, we developed a high-resolution refinement algorithm called GRID. It takes a three-dimensional protein structure as input and, using an all-atom force field, attempts to improve the energy of the structure by systematically perturbing backbone dihedrals and side-chain rotamer conformations. We compare GRID to Backrub, a stochastic algorithm that has been shown to predict a significant fraction of the conformational changes that occur with point mutations. We applied GRID and Backrub to 10 high-resolution (≤ 2.8 Å) crystal structures from the Protein Data Bank and measured the energy improvements obtained and the computation times required to achieve them. GRID resulted in energy improvements that were significantly better than those attained by Backrub while expending about the same amount of computational resources. GRID resulted in relaxed structures that had slightly higher backbone RMSDs compared to Backrub relative to the starting crystal structures. The average RMSD was 0.25 ± 0.02 Å for GRID versus 0.14 ± 0.04 Å for Backrub. These relatively minor deviations indicate that both algorithms generate structures that retain their original topologies, as expected given the nature of the algorithms. Copyright © 2012 Wiley Periodicals, Inc.
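The systematic search itself can be summarized in a few lines. The skeleton below is a schematic reading of that loop — greedy acceptance over a fixed set of dihedral perturbations — with toy stand-ins for the all-atom force field and the perturbation routine, not the actual GRID code.

```python
import itertools

def grid_refine(conf, energy, perturb, deltas=(-10.0, -5.0, 5.0, 10.0)):
    """Try each (dihedral, delta) perturbation in turn and keep any move
    that lowers the energy; `energy` and `perturb` are placeholders."""
    best, best_e = conf, energy(conf)
    for i, d in itertools.product(range(len(conf)), deltas):
        trial = perturb(best, i, d)
        if energy(trial) < best_e:
            best, best_e = trial, energy(trial)
    return best, best_e

# Toy model: a "conformation" is a tuple of dihedral angles, and the
# energy is minimized when every angle is zero.
energy = lambda c: sum(a * a for a in c)
perturb = lambda c, i, d: c[:i] + (c[i] + d,) + c[i + 1:]
print(grid_refine((30.0, -20.0, 5.0), energy, perturb))
```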
Muscle categorization using PDF estimation and Naive Bayes classification.
Adel, Tameem M; Smith, Benn E; Stashuk, Daniel W
2012-01-01
The structure of motor unit potentials (MUPs) and their times of occurrence provide information about the motor units (MUs) that created them. As such, electromyographic (EMG) data can be used to categorize muscles as normal or suffering from a neuromuscular disease. Using pattern discovery (PD) allows clinicians to understand the rationale underlying a certain muscle characterization; i.e., it is transparent. PD requires discretization, however, which leads to some loss in accuracy. In this work, characterization techniques based on estimating probability density functions (PDFs) for each muscle category are implemented. Characterization probabilities of each motor unit potential train (MUPT) are obtained from these PDFs, and Bayes rule is then used to aggregate the MUPT characterization probabilities into muscle-level probabilities. Even though this technique is not as transparent as PD, its accuracy is higher than that of discrete PD. Ultimately, the goal is to use a technique that is based on both PDFs and PD and make it as transparent and as efficient as possible, but first it was necessary to thoroughly assess how accurate a fully continuous approach can be. Gaussian PDF estimation achieved improvements in muscle categorization accuracy over PD, and further improvements resulted from using feature value histograms to choose more representative PDFs; for instance, using a log-normal distribution to represent skewed histograms.
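The aggregation step can be sketched as follows, assuming a single feature with Gaussian class-conditional PDFs and equal priors; all parameter values are hypothetical. Per-MUPT likelihoods are accumulated in the log domain and normalized with Bayes rule to give muscle-level probabilities.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical Gaussian PDFs of one MUPT feature per muscle category.
params = {"normal": (10.0, 2.0), "disordered": (14.0, 3.0)}
prior = {"normal": 0.5, "disordered": 0.5}

def muscle_posteriors(mupt_features):
    # Naive Bayes: multiply per-MUPT likelihoods (sum of log-PDFs).
    log_post = {c: np.log(prior[c]) for c in params}
    for x in mupt_features:
        for c, (mu, sd) in params.items():
            log_post[c] += norm.logpdf(x, mu, sd)
    z = np.logaddexp(*log_post.values())   # normalizer over the 2 classes
    return {c: float(np.exp(v - z)) for c, v in log_post.items()}

print(muscle_posteriors([11.2, 9.8, 13.5, 10.4]))
```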
Cicero, Mark Xavier; Whitfill, Travis; Overly, Frank; Baird, Janette; Walsh, Barbara; Yarzebski, Jorge; Riera, Antonio; Adelgais, Kathleen; Meckler, Garth D; Baum, Carl; Cone, David Christopher; Auerbach, Marc
2017-01-01
Paramedics and emergency medical technicians (EMTs) triage pediatric disaster victims infrequently. The objective of this study was to measure the effect of a multiple-patient, multiple-simulation curriculum on the accuracy of pediatric disaster triage (PDT). Paramedics, paramedic students, and EMTs from three sites were enrolled. Triage accuracy was measured three times (Time 0, Time 1 [two weeks later], and Time 2 [6 months later]) during a disaster simulation in which high- and low-fidelity manikins and actors portrayed 10 victims. Accuracy was determined by the concordance of participant triage decisions with the predetermined expected triage level (RED [Immediate], YELLOW [Delayed], GREEN [Ambulatory], BLACK [Deceased]) for each victim. Between Time 0 and Time 1, participants completed an interactive online module, and after each simulation there was an individual debriefing. Associations between participant level of training, years of experience, and enrollment site were determined, as were instances of the most dangerous mistriage, in which RED and YELLOW victims were triaged BLACK. The study enrolled 331 participants; the analysis included the 261 (78.9%) participants who completed the study: 123 from the Connecticut site, 83 from Rhode Island, and 55 from Massachusetts. Triage accuracy improved significantly from Time 0 to Time 1, after the educational interventions (first simulation with debriefing, and an interactive online module), with a median 10% overall improvement (p < 0.001). Subgroup analyses showed that between Time 0 and Time 1, paramedics and paramedic students improved more than EMTs (p = 0.002). Accuracy improved most for YELLOW triage patients (Time 0 50% accurate, Time 1 100%), followed by RED patients (Time 0 80%, Time 1 100%). There was no significant difference in accuracy between Time 1 and Time 2 (p = 0.073). This study shows that the multiple-victim, multiple-simulation curriculum yields a durable 10% improvement in simulated triage accuracy. Future iterations of the curriculum can target greater improvements in EMT triage accuracy.
Edla, Damodar Reddy; Kuppili, Venkatanareshbabu; Dharavath, Ramesh; Beechu, Nareshkumar Reddy
2017-01-01
Low-power wearable devices for disease diagnosis can be used at any time and anywhere. They are non-invasive and pain-free, improving quality of life. However, these devices are resource constrained in terms of memory and processing capability: the memory constraint limits the number of patterns the devices can store, and the processing constraint delays their response. Designing a robust, highly accurate classification system under these constraints is challenging. In this Letter, to resolve this problem, a novel architecture for weightless neural networks (WNNs) is proposed. It uses variable-sized random access memories to optimise memory usage and a modified binary TRIE data structure to reduce test time. In addition, a bio-inspired genetic algorithm is employed to improve accuracy. The proposed architecture is evaluated on various disease datasets using its software and hardware realisations. The experimental results show that the proposed architecture achieves better performance in terms of accuracy, memory saving and test time than standard WNNs. It also outperforms conventional neural network-based classifiers in terms of accuracy. The proposed architecture is well suited to low-power wearable devices, addressing their memory, accuracy and response-time constraints. PMID:28868148
Accurate prediction of RNA-binding protein residues with two discriminative structural descriptors.
Sun, Meijian; Wang, Xia; Zou, Chuanxin; He, Zenghui; Liu, Wei; Li, Honglin
2016-06-07
RNA-binding proteins participate in many important biological processes concerning RNA-mediated gene regulation, and several computational methods have recently been developed to predict the protein-RNA interactions of RNA-binding proteins. Newly developed discriminative descriptors will help to improve the prediction accuracy of these methods and provide further meaningful information for researchers. In this work, we designed two structural features (residue electrostatic surface potential and triplet interface propensity), and statistical and structural analysis of protein-RNA complexes showed that the two features are powerful for identifying RNA-binding protein residues. Using these two features together with other established structure- and sequence-based features, a random forest classifier was constructed to predict RNA-binding residues. The area under the receiver operating characteristic curve (AUC) of five-fold cross-validation for our method on the training set RBP195 was 0.900; when applied to the test set RBP68, the prediction accuracy (ACC) was 0.868 and the F-score was 0.631. The good prediction performance of our method revealed that the two newly designed descriptors are discriminative for inferring protein residues interacting with RNAs. To facilitate the use of our method, a web server called RNAProSite, which implements the proposed method, was constructed and is freely available at http://lilab.ecust.edu.cn/NABind .
Gradient field of undersea sound speed structure extracted from the GNSS-A oceanography
NASA Astrophysics Data System (ADS)
Yokota, Yusuke; Ishikawa, Tadashi; Watanabe, Shun-ichi
2018-06-01
Since the start of the twenty-first century, the Global Navigation Satellite System-Acoustic ranging (GNSS-A) technique has detected geodetic events such as co- and postseismic effects following the 2011 Tohoku-oki earthquake and slip-deficit rate distributions along the Nankai Trough subduction zone. Although these are extremely important discoveries in geodesy and seismology, more accurate observations that can capture temporal and spatial changes are required for future earthquake disaster prevention. To upgrade the accuracy of the GNSS-A technique, it is necessary to understand disturbances in undersea sound speed structures, which are major error sources. In particular, detailed temporal and spatial variations are difficult to observe accurately, and their effect was not sufficiently extracted in previous studies. In the present paper, we reconstruct an inversion scheme for extracting this effect from GNSS-A data and experimentally apply the scheme to seafloor sites around the Kuroshio. The extracted gradient effects are believed to represent not only a broad sound speed structure but also a more detailed structure generated by unsteady disturbance. The accuracy of the seafloor positioning was also improved by this new method. The obtained results demonstrate the feasibility of using the GNSS-A technique to detect seafloor crustal deformation for oceanography research.
Improving consensus contact prediction via server correlation reduction.
Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming
2009-05-06
Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find that even for new fold targets, the models generated by threading programs can contain many true contacts; the challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method, which assumes that all the individual servers are equally important and independent, the newly developed method evaluates their correlation using maximum likelihood estimation and extracts independent latent servers from them using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server so as to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated, where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06, which demonstrate average accuracies of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers thus show a significant improvement over traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction.
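A rough sketch of the decorrelation idea is given below: server votes are decomposed into principal components that play the role of independent latent servers. The paper assigns latent-server weights with integer linear programming; here the weights are simply proportional to explained variance, which is only a stand-in, and the vote matrix is toy data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_servers, n_pairs = 8, 200
# S: votes of each server for each candidate residue pair (toy data).
S = (rng.random((n_servers, n_pairs)) < 0.3).astype(float)

# Principal components of the centered vote matrix act as latent servers.
Sc = S - S.mean(axis=1, keepdims=True)
U, sig, Vt = np.linalg.svd(Sc, full_matrices=False)
latent = Vt                       # latent-server scores per candidate pair

w = sig / sig.sum()               # stand-in for the ILP-derived weights
consensus = w @ latent            # weighted consensus score

top_L5 = np.argsort(consensus)[::-1][:n_pairs // 5]  # top L/5 candidates
print(top_L5[:10])
```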
Performing label-fusion-based segmentation using multiple automatically generated templates.
Chakravarty, M Mallar; Steadman, Patrick; van Eede, Matthijs C; Calcott, Rebecca D; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D Louis; Lerch, Jason P
2013-10-01
Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited by atlas biases, misregistration, and resampling error. Multi-atlas approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct, by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively). Copyright © 2012 Wiley Periodicals, Inc.
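The label-fusion step itself is a simple voxel-by-voxel vote, sketched here under the assumption that all candidate segmentations are already resampled into the subject space; array sizes are illustrative.

```python
import numpy as np

def label_fusion(segmentations):
    # Voxel-by-voxel majority vote over candidate labelings.
    # segmentations: (n_templates, X, Y, Z) integer label volumes.
    segs = np.asarray(segmentations)
    labels = np.unique(segs)
    votes = np.stack([(segs == lab).sum(axis=0) for lab in labels])
    return labels[np.argmax(votes, axis=0)]

# Toy example: three tiny 2x2x1 "template" segmentations.
s = np.array([[[[0], [1]], [[1], [1]]],
              [[[0], [1]], [[0], [1]]],
              [[[1], [1]], [[1], [0]]]])
print(label_fusion(s)[..., 0])   # fused 2x2 label slice
```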
Schmaal, Lianne; Marquand, Andre F; Rhebergen, Didi; van Tol, Marie-José; Ruhé, Henricus G; van der Wee, Nic J A; Veltman, Dick J; Penninx, Brenda W J H
2015-08-15
A chronic course of major depressive disorder (MDD) is associated with profound alterations in brain volumes and in emotional and cognitive processing. However, no neurobiological markers have been identified that prospectively predict MDD course trajectories. This study evaluated the prognostic value of different neuroimaging modalities, clinical characteristics, and their combination for classifying MDD course trajectories. One hundred eighteen MDD patients underwent structural and functional magnetic resonance imaging (MRI) (emotional facial expressions and executive functioning) and were clinically followed up for 2 years. Three MDD trajectories (chronic, n = 23; gradually improving, n = 36; and fast remission, n = 59) were identified based on the Life Chart Interview, which records the presence of symptoms each month. Gaussian process classifiers were employed to evaluate the prognostic value of neuroimaging data and clinical characteristics (including baseline severity, duration, and comorbidity). Chronic patients could be discriminated from patients with more favorable trajectories based on neural responses to various emotional faces (up to 73% accuracy) but not based on structural MRI or functional MRI related to executive functioning. Chronic patients could also be discriminated from remitted patients based on clinical characteristics (69% accuracy) but not when age differences between the groups were taken into account. Combining different task contrasts or data sources increased prediction accuracies in some but not all cases. Our findings provide evidence that the prediction of the naturalistic course of depression over 2 years is improved by considering neuroimaging data, especially neural responses to emotional facial expressions. Neural responses to emotionally salient faces predicted outcome more accurately than clinical data. Copyright © 2015 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Devine, Emily Beth; Van Eaton, Erik; Zadworny, Megan E; Symons, Rebecca; Devlin, Allison; Yanez, David; Yetisgen, Meliha; Keyloun, Katelyn R; Capurro, Daniel; Alfonso-Cristancho, Rafael; Flum, David R; Tarczy-Hornoch, Peter
2018-05-22
The availability of high-fidelity electronic health record (EHR) data is a hallmark of the learning health care system. Washington State's Surgical Care Outcomes and Assessment Program (SCOAP) is a network of hospitals participating in quality improvement (QI) registries wherein data are manually abstracted from EHRs. To create the Comparative Effectiveness Research and Translation Network (CERTAIN), we semi-automated SCOAP data abstraction using a centralized federated data model, created a central data repository (CDR), and assessed whether these data could be used as real-world evidence for QI and research. We describe the validation processes, the complexities involved, and the lessons learned. Investigators installed a commercial CDR to retrieve and store data from disparate EHRs. Manual and automated abstraction systems were run in parallel (10/2012-7/2013) and validated in three phases, using the EHR as the gold standard: 1) ingestion, 2) standardization, and 3) concordance of automated versus manually abstracted cases. Information retrieval statistics were calculated. Four unaffiliated health systems provided data. Between 6 and 15 percent of data elements were abstracted: 51 to 86 percent from structured data, the remainder using natural language processing (NLP). In phase 1, data ingestion from 12 out of 20 feeds reached 95 percent accuracy. In phase 2, 55 percent of structured data elements performed with 96 to 100 percent accuracy; NLP performed with 89 to 91 percent accuracy. In phase 3, concordance ranged from 69 to 89 percent. Information retrieval statistics were consistently above 90 percent. Semi-automated data abstraction may be useful, although raw data collected as a byproduct of health care delivery are not immediately available for use as real-world evidence. New approaches to gathering and analyzing extant data are required.
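The information retrieval statistics referred to above reduce to precision, recall (sensitivity) and their harmonic mean; a minimal sketch with hypothetical counts:

```python
def retrieval_stats(tp, fp, fn):
    # tp: automated value agrees with the gold standard (manual review)
    # fp: automated system extracted a value the gold standard refutes
    # fn: gold-standard value the automated system missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(retrieval_stats(tp=450, fp=20, fn=30))   # illustrative counts only
```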
Cadastral Database Positional Accuracy Improvement
NASA Astrophysics Data System (ADS)
Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.
2017-10-01
Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions, both the absolute position in a specific coordinate system and the position relative to neighbouring features. With the growth of spatial technologies, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), PAI campaigns are inevitable, especially for legacy cadastral databases. Integrating a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely merging the two datasets distorts the relative geometry; the improved dataset must be further treated to minimize inherent errors and to fit it to the new, more accurate dataset. The main focus of this study is to describe an angle-based Least Squares Adjustment (LSA) method for the PAI of legacy datasets. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is well suited to the positional accuracy improvement of legacy spatial datasets.
Reis, Henning; Pütter, Carolin; Megger, Dominik A; Bracht, Thilo; Weber, Frank; Hoffmann, Andreas-C; Bertram, Stefanie; Wohlschläger, Jeremias; Hagemann, Sascha; Eisenacher, Martin; Scherag, André; Schlaak, Jörg F; Canbay, Ali; Meyer, Helmut E; Sitek, Barbara; Baba, Hideo A
2015-06-01
Hepatocellular carcinoma (HCC) is a major lethal cancer worldwide. Despite sophisticated diagnostic algorithms, the differential diagnosis of small liver nodules remains difficult. While imaging techniques have advanced, adjuvant protein biomarkers such as glypican-3 (GPC3), glutamine synthetase (GS) and heat-shock protein 70 (HSP70) have enhanced diagnostic accuracy. The aim was to detect further useful protein biomarkers of HCC with a structured, systematic approach using differential proteome techniques, bring the results to practical application, and compare the diagnostic accuracy of the candidates with the established biomarkers. After label-free and gel-based proteomics (n=18 HCC/corresponding non-tumorous liver tissue (NTLT)), biomarker candidates were tested for diagnostic accuracy in immunohistochemical analyses (n=14 HCC/NTLT). Suitable candidates were further tested for consistency in comparison to known protein biomarkers in HCC (n=78), hepatocellular adenoma (n=25; HCA), focal nodular hyperplasia (n=28; FNH) and cirrhosis (n=28). Of all protein biomarkers, 14-3-3Sigma (14-3-3S) exhibited the most pronounced up-regulation (58.8×) in proteomics and superior diagnostic accuracy (73.0%) in the differentiation of HCC from non-tumorous hepatocytes, also compared to established biomarkers such as GPC3 (64.7%) and GS (45.4%). 14-3-3S was part of the best diagnostic three-biomarker panel (GPC3, HSP70, 14-3-3S) for the differentiation of HCC and HCA, which is of particular diagnostic importance. Exclusion of GS and inclusion of 14-3-3S in the panel (>1 marker positive) resulted in a profound increase in specificity (+44.0%) and accuracy (+11.0%) while sensitivity remained stable (96.0%). 14-3-3S is an interesting protein biomarker with the potential to further improve the accuracy of the differential diagnosis of hepatocellular tumors. This article is part of a Special Issue entitled: Medical Proteomics. Copyright © 2014 Elsevier B.V. All rights reserved.
DiLibero, Justin; O'Donoghue, Sharon C; DeSanto-Madeya, Susan; Felix, Janice; Ninobla, Annalyn; Woods, Allison
2016-01-01
Delirium occurs in up to 80% of intensive care unit (ICU) patients. Despite its prevalence in this population, delirium assessments continue to be inaccurate. In the absence of accurate assessments, delirium in critically ill ICU patients will remain unrecognized and will lead to negative clinical and organizational outcomes. The goal of this quality improvement project was to facilitate sustained improvement in the accuracy of delirium assessments among all ICU patients, including those who were sedated or agitated. A pretest-posttest design was used to evaluate the effectiveness of a program to improve the accuracy of delirium screenings among patients admitted to a medical ICU or coronary care unit. Two hundred thirty-six delirium assessment audits were completed during the baseline period and 535 during the postintervention period. Compliance with performing at least 1 delirium assessment every shift was 85% at baseline and improved to 99% during the postintervention period. Baseline assessment accuracy was 70.31% among all patients and 53.49% among sedated and agitated patients. Postintervention assessment accuracy improved to 95.51% for all patients and 89.23% among sedated and agitated patients. These results suggest the effectiveness of the program in improving assessment accuracy among difficult-to-assess patients. Further research is needed to demonstrate the effectiveness of this model across other critical care units, patient populations, and organizations.
Benge, James; Beach, Thomas; Gladding, Connie; Maestas, Gail
2008-01-01
The Military Health System (MHS) deployed its electronic health record (EHR), AHLTA, to Military Treatment Facilities (MTFs) around the world. This paper focuses on the approach and barriers to using structured text in AHLTA to document care encounters and illustrates the direct correlation between the use of structured text and the achievement of expected benefits. AHLTA uses commercially available products, a health data dictionary, and standardized medical terminology, enabling the capture of structured, computable data. With structured text stored in the AHLTA Clinical Data Repository (CDR), the MHS has seen a return on its EHR investment through improvements in the accuracy and completeness of coding and in the documentation of care provided. Determining the aspects of documentation where structured text is most beneficial, as well as the degree of structure needed, has been a significant challenge. This paper describes how an economic value framework aligns the enterprise strategic objectives with the EHR investment features, performance metrics, and expected benefits. The framework analyses focus on return-on-investment calculations, baseline assessment, and post-implementation benefits validation. Cost avoidance, revenue enhancements, and operational improvements, such as evidence-based medicine and medical surveillance, can be directly attributed to the use of structured text.
Concept Mapping Improves Metacomprehension Accuracy among 7th Graders
ERIC Educational Resources Information Center
Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.
2012-01-01
Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…
Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polly, B.
2011-09-01
This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.
SU-F-T-441: Dose Calculation Accuracy in CT Images Reconstructed with Artifact Reduction Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, C; Chan, S; Lee, F
Purpose: The accuracy of radiotherapy dose calculation in patients with surgical implants is complicated by two factors: the accuracy of the CT numbers and the accuracy of the dose calculation itself. We compared measured doses with doses calculated on CT images reconstructed with FBP and with an artifact reduction algorithm (OMAR, Philips) for a phantom with high-density inserts. Dose calculations were done with Varian AAA and AcurosXB. Methods: A phantom was constructed from solid water in which two titanium or stainless steel rods could be inserted. The phantom was scanned with the Philips Brilliance Big Bore CT, and images were reconstructed with FBP and OMAR. Two 6 MV single-field photon plans were constructed for each phantom, with radiochromic films placed at different locations to measure the dose deposited. One plan had normal incidence on the titanium/steel rods; in the second plan, the beam was at almost glancing incidence on the metal rods. Measurements were then compared with doses calculated with AAA and AcurosXB. Results: The use of OMAR images slightly improved the dose calculation accuracy. The agreement between measured and calculated dose was best with AcurosXB on images reconstructed with OMAR. Doses calculated on the titanium phantom agreed better with measurements. Large discrepancies were seen at points directly above and below the high-density inserts: both AAA and AcurosXB underestimated the dose directly above the metal surface and overestimated the dose below it. Doses measured downstream of the metal were all within 3% of calculated values. Conclusion: When planning treatment for patients with metal implants, care must be taken to acquire correct CT images to improve dose calculation accuracy. Moreover, because of the large discrepancies between measured and calculated doses at the metal/tissue interface, care must be taken in estimating the dose in critical structures that come into contact with metal.
NASA Astrophysics Data System (ADS)
Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.
2014-12-01
Recent progress in large-scale computing, using waveform modeling techniques and high-performance computing facilities, has demonstrated the possibility of performing full-waveform inversion of three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain the 3D structure beneath the Japanese Islands. First, we implemented the Spectral-Element Method on the K-computer in Kobe, Japan, optimizing SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) with OpenMP so that the code fits the hybrid architecture of the K-computer. We can now use 82,134 nodes of the K-computer (657,072 cores) to compute synthetic waveforms with about 1 s accuracy for a realistic 3D Earth model, at a performance of 1.2 PFLOPS. Using this optimized SPECFEM3D_GLOBE code, we take one chunk of the global mesh around the Japanese Islands and compute synthetic seismograms with an accuracy of about 10 s. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as the initial 3D model, together with as many broadband seismic stations available in this region as possible, to perform the inversion. We then use time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The waveform misfits between observed and theoretical seismograms improve as the iteration proceeds. We are now preparing to use much shorter periods in our synthetic waveform computation, to obtain seismic structure at basin scale, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.
Accuracy improvement of interferometric Rayleigh scattering diagnostic
NASA Astrophysics Data System (ADS)
Yan, Bo; Chen, Li; Yin, Kewei; Chen, Shuang; Yang, Furong; Tu, Xiaobo
2017-10-01
A cavity structure is used to increase the interferometric Rayleigh scattering signal intensity. Using ZEMAX ray tracing, we simulate a cavity model comprising two spherical reflectors of different focal lengths and diameters. The simulations suggest that a parallel beam can reflect repeatedly in the resonant cavity and concentrate on the focus; for feasible configurations, the number of reflections can reach about 50 and the ray width about 2.1 cm.
Orr, Christopher Henry; Luff, Craig Janson; Dockray, Thomas; Macarthur, Duncan Whittemore; Bounds, John Alan; Allander, Krag
2002-01-01
The apparatus and method provide techniques through which both alpha and beta emission determinations can be made simultaneously using a simple detector structure. The technique uses a beta detector covered in an electrically conducting material; this material discharges ions generated by alpha emissions and, as a consequence, provides a measure of those alpha emissions. The technique also offers improved mountings for alpha detectors and other detectors against vibration and the consequential effects of vibration on measurement accuracy.
Collaborative Research: Catalog Completeness and Accuracy
2013-01-01
catalogue for the region stretching from Saudi Arabia to western China for 1995 to the present. We have used all available data sources, which include … in Kyrgyzstan 9/97 to 8/00 (PASSCAL experiment); 4 stations in China 6/98 to 8/00; 11 stations in China 6/99 to 8/00; 18 stations in Kyrgyzstan 7/99 … automated, high-precision repicking to improve delineation of microseismic structures at the Soultz geothermal reservoir, Pure Appl. Geophys., 159, 563.
Liang, Wei; Murakawa, Hidekazu
2014-01-01
Welding-induced deformation not only degrades dimensional accuracy but also impairs the performance of the product. If welding deformation can be accurately predicted beforehand, the predictions will be helpful for finding effective methods to improve manufacturing accuracy. To date, two kinds of finite element method (FEM) can be used to simulate welding deformation: the thermal elastic-plastic FEM and the elastic FEM based on inherent strain theory. The former can only be used to calculate welding deformation for small or medium scale welded structures because of computing-speed limitations. The latter is an effective method to estimate the total welding distortion for large and complex welded structures, even though it neglects the detailed welding process. When the elastic FEM is used to calculate the welding-induced deformation of a large structure, the inherent deformations of each typical joint should be obtained beforehand. In this paper, a new method based on inverse analysis is proposed to obtain the inherent deformations of weld joints. By introducing the inherent deformations obtained with the proposed method into the elastic FEM based on inherent strain theory, we predicted the welding deformation of a panel structure with two longitudinal stiffeners. In addition, experiments were carried out to verify the simulation results. PMID:25276856
Cheng, Han-miao; Li, Hong-bin
2015-08-01
Existing electronic transformer calibration systems that employ data acquisition cards cannot satisfy some practical applications, because they exhibit phase measurement errors when operating in the mode of receiving external synchronization signals. This paper proposes an improved calibration system with phase correction to improve the phase measurement accuracy. We employ an NI PCI-4474 card to build the calibration system, which can receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification was carried out at the China Electric Power Research Institute, and the results demonstrate that the system surpasses accuracy class 0.05. Furthermore, the system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers; an existing calibration system was used in the same process, and a comparison of the test results is presented. The improved system is suitable for the intended applications.
Understanding the delayed-keyword effect on metacomprehension accuracy.
Thiede, Keith W; Dunlosky, John; Griffin, Thomas D; Wiley, Jennifer
2005-11-01
The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M. Anderson, 2003; K. W. Thiede, M. C. M. Anderson, & D. Therriault, 2003). The delayed and immediate conditions in these studies confounded the delay between reading and generation tasks with other task lags, including the lag between multiple generation tasks and the lag between generation tasks and judgments. The first 2 experiments disentangle these confounded manipulations and provide clear evidence that the delay between reading and keyword generation is the only lag critical to improving metacomprehension accuracy. The 3rd and 4th experiments show that not all delayed tasks produce improvements and suggest that delayed generative tasks provide necessary diagnostic cues about comprehension for improving metacomprehension accuracy.
Techniques for improving the accuracy of cryogenic temperature measurement in ground test programs
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Fabik, Richard H.
1993-01-01
The performance of a sensor is often evaluated by determining to what degree of accuracy a measurement can be made using this sensor. The absolute accuracy of a sensor is an important parameter considered when choosing the type of sensor to use in research experiments. Tests were performed to improve the accuracy of cryogenic temperature measurements by calibration of the temperature sensors when installed in their experimental operating environment. The calibration information was then used to correct for temperature sensor measurement errors by adjusting the data acquisition system software. This paper describes a method to improve the accuracy of cryogenic temperature measurements using corrections in the data acquisition system software such that the uncertainty of an individual temperature sensor is improved from plus or minus 0.90 deg R to plus or minus 0.20 deg R over a specified range.
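A minimal sketch of the software correction: fit a low-order curve through in-situ calibration points (sensor readout versus reference standard) and apply it to raw readings in the data acquisition software. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical in-situ calibration points, both in deg R.
readout   = np.array([40.1, 60.4, 80.9, 101.5, 121.8])  # sensor output
reference = np.array([40.0, 60.0, 80.0, 100.0, 120.0])  # standard

coeffs = np.polyfit(readout, reference, deg=2)  # fitted correction curve

def corrected(raw):
    # Applied to every raw reading by the DAQ software.
    return np.polyval(coeffs, raw)

print(corrected(90.7))
```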
Do Convolutional Neural Networks Learn Class Hierarchy?
Bilal, Alsallakh; Jourabloo, Amin; Ye, Mao; Liu, Xiaoming; Ren, Liu
2018-01-01
Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation to CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it also dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the later layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvements in accuracy, achieved by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.
DOT National Transportation Integrated Search
2015-07-01
Implementing the recommendations of this study is expected to significantly improve the accuracy of camber measurements and predictions and to ultimately help reduce construction delays, improve bridge serviceability, and decrease costs.
Huang, Junhui; Xue, Qi; Wang, Zhao; Gao, Jianmin
2016-09-03
While color-coding methods have improved the measuring efficiency of structured light three-dimensional (3D) measurement systems, they significantly decrease measuring accuracy because of lateral chromatic aberration (LCA). In this study, the LCA in a structured light measurement system is analyzed, and a method is proposed to compensate for the error caused by the LCA. First, based on the projective transformation, a 3D error map of LCA is constructed in projector image coordinates by using a flat board and comparing the image coordinates of red, green and blue circles with those of white circles at preselected sample points within the measurement volume. The 3D map consists of the equivalent errors caused by the LCA of the camera and projector together. During measurements, the LCA error values are calculated from the 3D error map by tri-linear interpolation and compensated to correct the projector image coordinates. Finally, 3D coordinates with higher accuracy are re-calculated from the compensated image coordinates. The effectiveness of the proposed method is verified experimentally. PMID:27598174
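A sketch of the compensation lookup is shown below, using SciPy's regular-grid interpolator (whose 'linear' mode on a 3D grid is exactly tri-linear interpolation); the error-map values and grid are stand-in data, not a real calibration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical 3D error map of the LCA-induced u-offset (pixels),
# sampled over projector-image x, y and scene depth z.
x = np.linspace(0.0, 1024.0, 9)
y = np.linspace(0.0, 768.0, 7)
z = np.linspace(400.0, 800.0, 5)
err_u = np.random.default_rng(1).normal(0.0, 0.5, (9, 7, 5))

interp_u = RegularGridInterpolator((x, y, z), err_u, method="linear")

def compensate_u(u, v, depth):
    # Correct a projector image coordinate with the interpolated error.
    return u - interp_u([[u, v, depth]])[0]

print(compensate_u(512.3, 300.8, 615.0))
```

In practice, one such map would be built per color channel and per coordinate axis, following the red/green/blue-versus-white comparison described above.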
On estimating the accuracy of monitoring methods using Bayesian error propagation technique
NASA Astrophysics Data System (ADS)
Zonta, Daniele; Bruschetta, Federico; Cappello, Carlo; Zandonini, R.; Pozzi, Matteo; Wang, Ming; Glisic, B.; Inaudi, D.; Posenato, D.; Zhao, Y.
2014-04-01
This paper illustrates an application of Bayesian logic to monitoring data analysis and structural condition state inference. The case study is a 260 m long cable-stayed bridge spanning the Adige River 10 km north of the town of Trento, Italy. This is a statically indeterminate structure, having a composite steel-concrete deck, supported by 12 stay cables. Structural redundancy, possible relaxation losses and an as-built condition differing from design, suggest that long-term load redistribution between cables can be expected. To monitor load redistribution, the owner decided to install a monitoring system which combines built-on-site elasto-magnetic and fiber-optic sensors. In this note, we discuss a rational way to improve the accuracy of the load estimate from the EM sensors taking advantage of the FOS information. More specifically, we use a multi-sensor Bayesian data fusion approach which combines the information from the two sensing systems with the prior knowledge, including design information and the outcomes of laboratory calibration. Using the data acquired to date, we demonstrate that combining the two measurements allows a more accurate estimate of the cable load, to better than 50 kN.
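For two Gaussian measurements and a Gaussian prior, this kind of fusion reduces to a precision-weighted average; the sketch below uses hypothetical numbers, not the bridge's data.

```python
def fuse(estimates):
    # estimates: list of (mean_kN, std_kN) Gaussian beliefs about the
    # cable load; a product of Gaussians is a precision-weighted average.
    w = [1.0 / s**2 for _, s in estimates]
    mean = sum(wi * m for wi, (m, _) in zip(w, estimates)) / sum(w)
    std = (1.0 / sum(w)) ** 0.5
    return mean, std

prior = (2400.0, 200.0)   # design information + lab calibration
em    = (2510.0, 120.0)   # elasto-magnetic sensor estimate
fos   = (2450.0,  80.0)   # fiber-optic strain-derived estimate
print(fuse([prior, em, fos]))  # posterior is tighter than any input
```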
Feature Selection Using Information Gain for Improved Structural-Based Alert Correlation
Siraj, Maheyzah Md; Zainal, Anazida; Elshoush, Huwaida Tagelsir; Elhaj, Fatin
2016-01-01
Grouping and clustering alerts for intrusion detection based on the similarity of features is referred to as structural-based alert correlation, and it can discover a list of attack steps. Previous researchers selected different features and data sources manually based on their knowledge and experience, which leads to less accurate identification of attack steps and inconsistent clustering accuracy. Furthermore, existing alert correlation systems deal with a huge amount of data containing null values, incomplete information, and irrelevant features, making the analysis of the alerts tedious, time-consuming and error-prone. Therefore, this paper focuses on selecting accurate and significant alert features that are appropriate to represent the attack steps, thus enhancing the structural-based alert correlation model. A two-tier feature selection method is proposed to obtain the significant features. The first tier ranks the subset of features by information gain, in decreasing order. The second tier extends this with additional features that have a better discriminative ability than the initially ranked features. Performance analysis shows the significance of the selected features in terms of clustering accuracy using the 2000 DARPA intrusion detection scenario-specific dataset. PMID:27893821
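The first tier can be sketched directly from the definition of information gain; the alert features and labels below are toy stand-ins for the discretized DARPA data.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    # IG = H(labels) - sum_v p(feature=v) * H(labels | feature=v)
    gain = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        gain -= mask.mean() * entropy(labels[mask])
    return gain

y = np.array([0, 0, 1, 1, 1, 0, 1, 0])       # attack-step labels (toy)
X = np.array([[1, 1, 0, 0, 0, 1, 0, 1],      # informative feature
              [0, 1, 0, 1, 0, 1, 0, 1]])     # uninformative feature
ranked = sorted(range(len(X)), key=lambda i: -information_gain(X[i], y))
print(ranked)   # feature indices in decreasing order of information gain
```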
NASA Astrophysics Data System (ADS)
Oda, A.; Yamaotsu, N.; Hirono, S.; Takano, Y.; Fukuyoshi, S.; Nakagaki, R.; Takahashi, O.
2013-08-01
CAMDAS is a conformational search program that carries out high-temperature molecular dynamics (MD) calculations. In this study, the conformational search ability of CAMDAS was evaluated using 281 protein-ligand complexes of known structure as a test set. The influences of initial settings and initial conformations on the search results were validated. Using the CAMDAS program, reasonable conformations, whose root mean square deviations (RMSDs) from the crystal structures were less than 2.0 Å, could be obtained for 96% of the test set even with the worst initial settings. The success rate was comparable to that of OMEGA, and the errors of CAMDAS were smaller than those of OMEGA: the worst RMSD obtained with CAMDAS was around 2.5 Å, whereas the worst value obtained with OMEGA was around 4.0 Å. The results indicate that CAMDAS is a robust and versatile conformational search method that can be used for a wide variety of small molecules. In addition, the search accuracy was further improved by longer MD calculations and multiple MD simulations.
Heffernan, Rhys; Yang, Yuedong; Paliwal, Kuldip; Zhou, Yaoqi
2017-09-15
The accuracy of predicting protein local and global structural properties, such as secondary structure and solvent accessible surface area, has been stagnant for many years because of the challenge of accounting for non-local interactions between amino acid residues that are close in three-dimensional structural space but far from each other in their sequence positions. All existing machine-learning techniques have relied on a sliding window of 10-20 amino acid residues to capture some 'short to intermediate' non-local interactions. Here, we employed Long Short-Term Memory (LSTM) Bidirectional Recurrent Neural Networks (BRNNs), which are capable of capturing long-range interactions without using a window. We show that applying LSTM-BRNNs to the prediction of protein structural properties yields the most significant improvement over a previous window-based, deep-learning method, SPIDER2, for residues with the most long-range contacts (|i-j| > 19). Capturing long-range interactions allows the accuracy of three-state secondary structure prediction to reach 84% and the correlation coefficient between predicted and actual solvent accessible surface areas to reach 0.80, plus a reduction of 5%, 10%, 5% and 10% in the mean absolute error for backbone ϕ, ψ, θ and τ angles, respectively, from SPIDER2. More significantly, 27% of 182,724 40-residue models directly constructed from predicted Cα atom-based θ and τ have structures similar to their corresponding native structures (6 Å RMSD or less), which is 3% better than models built from ϕ and ψ angles. We expect the method to be useful for assisting protein structure and function prediction. The method is available as a SPIDER3 server and standalone package at http://sparks-lab.org . Contact: yaoqi.zhou@griffith.edu.au or yuedong.yang@griffith.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
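The core architectural idea, recurrence over the whole sequence instead of a fixed window, can be sketched in a few lines of PyTorch; layer sizes and feature counts below are illustrative, not the actual SPIDER3 configuration.

```python
import torch
import torch.nn as nn

class ResidueBRNN(nn.Module):
    # Bidirectional LSTM over the full sequence: each residue's output
    # can depend on arbitrarily distant positions, unlike a sliding window.
    def __init__(self, n_feat=57, hidden=256, n_out=3):  # 3-state SS
        super().__init__()
        self.rnn = nn.LSTM(n_feat, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_out)

    def forward(self, x):        # x: (batch, seq_len, n_feat)
        h, _ = self.rnn(x)       # h: (batch, seq_len, 2 * hidden)
        return self.head(h)      # per-residue class logits

model = ResidueBRNN()
logits = model(torch.randn(1, 120, 57))  # one 120-residue protein
print(logits.shape)                      # torch.Size([1, 120, 3])
```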
Ligand Binding Site Detection by Local Structure Alignment and Its Performance Complementarity
Lee, Hui Sun; Im, Wonpil
2013-01-01
Accurate determination of potential ligand binding sites (BS) is a key step for protein function characterization and structure-based drug design. Despite promising results from template-based BS prediction methods using global structure alignment (GSA), there is room to improve performance by properly incorporating local structure alignment (LSA), because binding sites are local structures and are often similar in proteins with dissimilar global folds. We present a template-based ligand BS prediction method using G-LoSA, our LSA tool. A large benchmark validation shows that G-LoSA predicts drug-like ligands' positions in single-chain protein targets more precisely than TM-align, a GSA-based method, while the overall success rate of TM-align is better. G-LoSA is particularly efficient at accurately detecting local structures conserved across proteins with diverse global topologies. Recognizing that G-LoSA's performance is complementary to TM-align and to a non-template geometry-based method, fpocket, we developed a robust consensus scoring method, CMCS-BSP (Complementary Methods and Consensus Scoring for ligand Binding Site Prediction), which improves prediction accuracy. The G-LoSA source code is freely available at http://im.bioinformatics.ku.edu/GLoSA. PMID:23957286
NASA Astrophysics Data System (ADS)
Xia, Liang; Liu, Weiguo; Lv, Xiaojiang; Gu, Xianguang
2018-04-01
The structural crashworthiness design of vehicles has become an important research direction for ensuring occupant safety. To effectively improve the structural safety of a vehicle in a frontal crash, a systematic methodology is presented in this study. The online support vector regression (Online-SVR) surrogate model is adopted to approximate crashworthiness criteria, and different kernel functions are selected to enhance model accuracy. The Online-SVR model is shown to have the advantages of solving highly nonlinear problems and saving training costs, and it can be effectively applied to vehicle structural crashworthiness design. By combining the non-dominated sorting genetic algorithm II and Monte Carlo simulation, both deterministic optimization and reliability-based design optimization (RBDO) are conducted. The optimization solutions are further validated by finite element analysis, which shows the effectiveness of the RBDO solution in the structural crashworthiness design process. The results demonstrate the advantages of RBDO, which yields not only increased energy absorption and decreased structural weight relative to the baseline design, but also a significant improvement in the reliability of the design.
Brender, Jeffrey R.; Zhang, Yang
2015-01-01
The formation of protein-protein complexes is essential for proteins to perform their physiological functions in the cell. Mutations that prevent the proper formation of the correct complexes can have serious consequences for the associated cellular processes. Since experimental determination of protein-protein binding affinity remains difficult when performed on a large scale, computational methods for predicting the consequences of mutations on binding affinity are highly desirable. We show that a scoring function based on interface structure profiles, collected from analogous protein-protein interactions in the PDB, is a powerful predictor of protein binding affinity changes upon mutation. As a standalone feature, the difference between the interface profile scores of the mutant and wild-type proteins has an accuracy equivalent to the best all-atom potentials, despite being two orders of magnitude faster once the profile has been constructed. Because of its unique sensitivity in collecting the evolutionary profiles of analogous binding interactions and its high speed of calculation, the interface profile score has additional advantages as a complementary feature to combine with physics-based potentials for improving the accuracy of composite scoring approaches. By incorporating the sequence-derived and residue-level coarse-grained potentials with the interface structure profile score, a composite model was constructed through random forest training, which generates a Pearson correlation coefficient >0.8 between the predicted and observed binding free-energy changes upon mutation. This accuracy is comparable to, or outperforms in most cases, the current best methods, but does not require high-resolution full-atomic models of the mutant structures. The binding interface profiling approach should find useful application in human-disease mutation recognition and protein interface design studies. PMID:26506533
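A sketch of the composite model on synthetic data: a random forest maps the interface-profile score difference plus a few physics-based terms to the binding free-energy change. All features, coefficients and data here are fabricated for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 500
d_profile = rng.normal(0.0, 1.0, n)        # profile-score change (toy)
d_physics = rng.normal(0.0, 1.0, (n, 4))   # coarse-grained terms (toy)
X = np.column_stack([d_profile, d_physics])
# Synthetic ddG labels with a known dependence on the features.
ddg = 1.5 * d_profile + d_physics @ rng.normal(0.5, 0.2, 4) \
      + rng.normal(0.0, 0.3, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ddg)
r = np.corrcoef(model.predict(X), ddg)[0, 1]
print(round(r, 3))   # in-sample correlation; real use needs held-out data
```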
NASA Astrophysics Data System (ADS)
Shokravi, H.; Bakhary, NH
2017-11-01
Subspace System Identification (SSI) is considered one of the most reliable tools for the identification of system parameters. The performance of an SSI scheme is considerably affected by the structure of the associated identification algorithm. The weight matrix is a variable in SSI used to reduce the dimensionality of the state-space equation; generally, one of the weight matrices of Principal Component (PC), Unweighted Principal Component (UPC) or Canonical Variate Analysis (CVA) is used in the structure of an SSI algorithm. An increasing number of studies in the field of structural health monitoring use SSI for damage identification. However, studies that evaluate the performance of the weight matrices with respect to accuracy, noise resistance, and time complexity are very limited. In this study, the accuracy, noise-robustness, and time-efficiency of the weight matrices are compared using qualitative and quantitative metrics. Three evaluation metrics, pole analysis, fit values and elapsed time, are used in the assessment. A numerical model of a mass-spring-dashpot system and operational data are used. It is observed that the principal components obtained using the PC algorithm are more robust against noise uncertainty and give more stable pole distributions, while higher estimation accuracy is achieved using the UPC algorithm. CVA had the worst performance in both the pole analysis and the time-efficiency analysis. The superior time performance of the UPC algorithm is attributed to its use of unit weight matrices. The results demonstrate that the dimensionality reduction in CVA and PC does not enhance time efficiency but, for PC, yields improved modal identification.
A theoretical and experimental benchmark study of core-excited states in nitrogen
NASA Astrophysics Data System (ADS)
Myhre, Rolf H.; Wolf, Thomas J. A.; Cheng, Lan; Nandi, Saikat; Coriani, Sonia; Gühr, Markus; Koch, Henrik
2018-02-01
The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. The computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure.
Application of the PM6 method to modeling the solid state
2008-01-01
The applicability of the recently developed PM6 method for modeling various properties of a wide range of organic and inorganic crystalline solids has been investigated. Although the geometries of most systems examined were reproduced with good accuracy, severe errors were found in the predicted structures of a small number of solids. The origin of these errors was investigated, and a strategy for improving the method is proposed. [Figure: detail of the structure of dihydrogen phosphate in KH2PO4 (upper pair) and in (CH3)4NH2PO4; X-ray structures on the left, PM6 structures on the right.] Electronic supplementary material: The online version of this article (doi:10.1007/s00894-008-0299-7) contains supplementary material, which is available to authorized users. PMID:18449579
Nanosensitive optical coherence tomography for the study of changes in static and dynamic structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, S; Subhash, H; Leahy, M
2014-07-31
We briefly discuss the principle of image formation in Fourier domain optical coherence tomography (OCT). The theory of a new approach to dramatically improve the sensitivity of conventional OCT is described. The approach is based on spectral encoding of spatial frequency. Information about the spatial structure is directly translated from the Fourier domain to the image domain as different wavelengths, without compromising the accuracy. Axial spatial period profiles of the structure are reconstructed for any volume of interest within the 3D OCT image with nanoscale sensitivity. An example application of nanoscale OCT to probe the internal structure of medico-biological objects, the anterior chamber of an ex vivo rat eye, is demonstrated. (laser biophotonics)
Wang, Juan; Nishikawa, Robert M; Yang, Yongyi
2016-01-01
In computer-aided detection of microcalcifications (MCs), the detection accuracy is often compromised by frequent occurrence of false positives (FPs), which can be attributed to a number of factors, including imaging noise, inhomogeneity in tissue background, linear structures, and artifacts in mammograms. In this study, the authors investigated a unified classification approach for combating the adverse effects of these heterogeneous factors for accurate MC detection. To accommodate FPs caused by different factors in a mammogram image, the authors developed a classification model to which the input features were adapted according to the image context at a detection location. For this purpose, the input features were defined in two groups, of which one group was derived from the image intensity pattern in a local neighborhood of a detection location, and the other group was used to characterize how a MC is different from its structural background. Owing to the distinctive effect of linear structures in the detector response, the authors introduced a dummy variable into the unified classifier model, which allowed the input features to be adapted according to the image context at a detection location (i.e., presence or absence of linear structures). To suppress the effect of inhomogeneity in tissue background, the input features were extracted from different domains aimed for enhancing MCs in a mammogram image. To demonstrate the flexibility of the proposed approach, the authors implemented the unified classifier model by two widely used machine learning algorithms, namely, a support vector machine (SVM) classifier and an Adaboost classifier. In the experiment, the proposed approach was tested for two representative MC detectors in the literature [difference-of-Gaussians (DoG) detector and SVM detector]. The detection performance was assessed using free-response receiver operating characteristic (FROC) analysis on a set of 141 screen-film mammogram (SFM) images (66 cases) and a set of 188 full-field digital mammogram (FFDM) images (95 cases). The FROC analysis results show that the proposed unified classification approach can significantly improve the detection accuracy of two MC detectors on both SFM and FFDM images. Despite the difference in performance between the two detectors, the unified classifiers can reduce their FP rate to a similar level in the output of the two detectors. In particular, with true-positive rate at 85%, the FP rate on SFM images for the DoG detector was reduced from 1.16 to 0.33 clusters/image (unified SVM) and 0.36 clusters/image (unified Adaboost), respectively; similarly, for the SVM detector, the FP rate was reduced from 0.45 clusters/image to 0.30 clusters/image (unified SVM) and 0.25 clusters/image (unified Adaboost), respectively. Similar FP reduction results were also achieved on FFDM images for the two MC detectors. The proposed unified classification approach can be effective for discriminating MCs from FPs caused by different factors (such as MC-like noise patterns and linear structures) in MC detection. The framework is general and can be applicable for further improving the detection accuracy of existing MC detectors.
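The dummy-variable idea can be sketched compactly: concatenate the two feature groups with a 0/1 context flag and train a standard classifier. The feature names, dimensions, and data below are illustrative placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500
local_feats = rng.normal(size=(n, 8))    # intensity pattern near the detection site
context_feats = rng.normal(size=(n, 4))  # how an MC differs from its background
linear_flag = rng.integers(0, 2, size=(n, 1))  # dummy: linear structure present?
X = np.hstack([local_feats, context_feats, linear_flag])
y = rng.integers(0, 2, size=n)           # 1 = true MC, 0 = false positive

# Unified SVM variant; swapping in an AdaBoost classifier gives the other
# variant mentioned in the abstract.
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
```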
Carroll, John A; Smith, Helen E; Scott, Donia; Cassell, Jackie A
2016-01-01
Background Electronic medical records (EMRs) are revolutionizing health-related research. One key issue for study quality is the accurate identification of patients with the condition of interest. Information in EMRs can be entered as structured codes or unstructured free text. The majority of research studies have used only coded parts of EMRs for case-detection, which may bias findings, miss cases, and reduce study quality. This review examines whether incorporating information from text into case-detection algorithms can improve research quality. Methods A systematic search returned 9659 papers, 67 of which reported on the extraction of information from free text of EMRs with the stated purpose of detecting cases of a named clinical condition. Methods for extracting information from text and the technical accuracy of case-detection algorithms were reviewed. Results Studies mainly used US hospital-based EMRs, and extracted information from text for 41 conditions using keyword searches, rule-based algorithms, and machine learning methods. There was no clear difference in case-detection algorithm accuracy between rule-based and machine learning methods of extraction. Inclusion of information from text resulted in a significant improvement in algorithm sensitivity and area under the receiver operating characteristic in comparison to codes alone (median sensitivity 78% (codes + text) vs 62% (codes), P = .03; median area under the receiver operating characteristic 95% (codes + text) vs 88% (codes), P = .025). Conclusions Text in EMRs is accessible, especially with open source information extraction algorithms, and significantly improves case detection when combined with codes. More harmonization of reporting within EMR studies is needed, particularly standardized reporting of algorithm accuracy metrics like positive predictive value (precision) and sensitivity (recall). PMID:26911811
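A toy sketch of the codes-plus-text case-detection logic the review evaluates; the code prefixes and keyword pattern are hypothetical examples, not taken from any reviewed study.

```python
import re

CODE_PREFIXES = ("E10", "E11")  # hypothetical ICD-10 prefixes for the condition
TEXT_PATTERN = re.compile(r"type\s*[12]\s*diabet\w*", re.IGNORECASE)

def is_case(record):
    # Structured arm: any diagnosis code with a matching prefix.
    coded = any(c.startswith(CODE_PREFIXES) for c in record["codes"])
    # Text arm: keyword match in free-text notes; combining both arms is
    # what raised sensitivity in the pooled results above.
    texted = bool(TEXT_PATTERN.search(record["notes"]))
    return coded or texted

print(is_case({"codes": ["I10"], "notes": "hx of type 2 diabetes, on metformin"}))
```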
Berlin, Konstantin; Longhini, Andrew; Dayie, T Kwaku; Fushman, David
2013-12-01
To facilitate rigorous analysis of molecular motions in proteins, DNA, and RNA, we present a new version of ROTDIF, a program for determining the overall rotational diffusion tensor from single- or multiple-field nuclear magnetic resonance relaxation data. We introduce four major features that expand the program's versatility and usability. The first feature is the ability to analyze, separately or together, ¹³C and/or ¹⁵N relaxation data collected at a single field or at multiple fields. A significant improvement in accuracy compared to direct analysis of R2/R1 ratios, especially critical for the analysis of ¹³C relaxation data, is achieved by subtracting high-frequency contributions to relaxation rates. The second new feature is an improved method for computing the rotational diffusion tensor in the presence of biased errors, such as large conformational exchange contributions, that significantly enhances the accuracy of the computation. The third new feature is the integration of the domain alignment and docking module for relaxation-based structure determination of multi-domain systems. Finally, to improve accessibility to all the program features, we introduced a graphical user interface that simplifies and speeds up the analysis of the data. Written in Java, the new ROTDIF can run on virtually any computer platform. In addition, the new ROTDIF achieves an order of magnitude speedup over the previous version by implementing a more efficient deterministic minimization algorithm. We not only demonstrate the improvement in accuracy and speed of the new algorithm for synthetic and experimental ¹³C and ¹⁵N relaxation data for several proteins and nucleic acids, but also show that the careful analysis required for characterizing RNA dynamics allowed us to uncover subtle conformational changes in RNA as a function of temperature that were opaque to previous analysis.
A comprehensive comparison of network similarities for link prediction and spurious link elimination
NASA Astrophysics Data System (ADS)
Zhang, Peng; Qiu, Dan; Zeng, An; Xiao, Jinghua
2018-06-01
Identifying missing interactions in complex networks, known as link prediction, is realized by estimating the likelihood of the existence of a link between two nodes from the observed links and the nodes' attributes. Similar approaches have also been employed to identify and remove spurious links in networks, which is crucial for improving the reliability of network data. In network science, the likelihood of two nodes having a connection strongly depends on their structural similarity. The key to addressing these two problems thus becomes how to objectively measure the similarity between nodes in networks. In the literature, numerous network similarity metrics have been proposed, and their accuracy has been discussed independently in previous works. In this paper, we systematically compare the accuracy of 18 similarity metrics in both link prediction and spurious link elimination when the observed networks are very sparse or contain inaccurate linking information. Interestingly, some methods with high prediction accuracy tend to have low accuracy in identifying spurious interactions. We further find that the methods can be classified into several clusters according to their behaviors. This work is useful for guiding future use of these similarity metrics for different purposes.
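Two of the simplest structural similarity metrics in that family, sketched over an adjacency matrix; unobserved pairs with the highest scores are link predictions, while observed links with the lowest scores are spurious-link candidates. The toy graph is illustrative.

```python
import numpy as np

def common_neighbors(A):
    return A @ A                      # entry (i, j): number of shared neighbors

def jaccard(A):
    cn = A @ A
    deg = A.sum(axis=1)
    union = deg[:, None] + deg[None, :] - cn
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(union > 0, cn / union, 0.0)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
S = jaccard(A)
missing = [(i, j, S[i, j]) for i in range(4) for j in range(i + 1, 4) if A[i, j] == 0]
print(sorted(missing, key=lambda t: -t[2]))  # top-scored pair = predicted link
```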
Biomolecular modeling and simulation: a field coming of age
Schlick, Tamar; Collepardo-Guevara, Rosana; Halvorsen, Leif Arthur; Jung, Segun; Xiao, Xia
2013-01-01
We assess the progress in biomolecular modeling and simulation, focusing on structure prediction and dynamics, by presenting the field's history, metrics for its rise in popularity, early expressed expectations, and current significant applications. The increases in computational power combined with improvements in algorithms and force fields have led to considerable success, especially in protein folding, specificity of ligand/biomolecule interactions, and interpretation of complex experimental phenomena (e.g., NMR relaxation, protein-folding kinetics, and multiple conformational states) through the generation of structural hypotheses and pathway mechanisms. Although far from a general automated tool, structure prediction has produced notable successes for proteins and RNA ahead of experiment, especially via knowledge-based approaches. Thus, despite early unrealistic expectations and the realization that computer technology alone will not quickly bridge the gap between experimental and theoretical time frames, ongoing improvements to enhance the accuracy and scope of modeling and simulation are propelling the field onto a productive trajectory to become a full partner with experiment and a field in its own right. PMID:21226976
A model-updating procedure to simulate piezoelectric transducers accurately.
Piranda, B; Ballandras, S; Steichen, W; Hecart, B
2001-09-01
The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of the piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi (PZT) ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predicting the structure's response accurately. An improvement of the proposed approach, consisting of updating the material coefficients not only on the admittance but also on the impedance data, is finally discussed.
23 CFR 1200.22 - State traffic safety information system improvements grants.
Code of Federal Regulations, 2013 CFR
2013-04-01
... measures to be used to demonstrate quantitative progress in the accuracy, completeness, timeliness... to implement, provides an explanation. (d) Requirement for quantitative improvement. A State shall demonstrate quantitative improvement in the data attributes of accuracy, completeness, timeliness, uniformity...
23 CFR 1200.22 - State traffic safety information system improvements grants.
Code of Federal Regulations, 2014 CFR
2014-04-01
... measures to be used to demonstrate quantitative progress in the accuracy, completeness, timeliness... to implement, provides an explanation. (d) Requirement for quantitative improvement. A State shall demonstrate quantitative improvement in the data attributes of accuracy, completeness, timeliness, uniformity...
Computational wave dynamics for innovative design of coastal structures
GOTOH, Hitoshi; OKAYASU, Akio
2017-01-01
For innovative designs of coastal structures, Numerical Wave Flumes (NWFs), which are solvers of the Navier-Stokes equations for free-surface flows, are key tools. In this article, various methods and techniques for NWFs are overviewed. In the first half, key techniques of NWFs, namely interface capturing (MAC, VOF, C-CUP), and the significance of NWFs in comparison with conventional wave models are described. In the latter part of the article, recent improvements of the particle method, one of the cores of NWFs, are shown. Methods for attenuating unphysical pressure fluctuation and improving accuracy, such as the CMPS method for momentum conservation, Higher-order Source of Poisson Pressure Equation (PPE), Higher-order Laplacian, Error-Compensating Source in PPE, and Gradient Correction for ensuring Taylor-series consistency, are reviewed briefly. Finally, the latest frontier of the accurate particle method, including Dynamic Stabilization, which provides the minimum required artificial repulsive force to improve computational stability, and Space Potential Particles, which describe the exact free-surface boundary condition, is described. PMID:29021506
NASA Astrophysics Data System (ADS)
Poyatos, Rafael; Sus, Oliver; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi
2018-05-01
The ubiquity of missing data in plant trait databases may hinder trait-based analyses of ecological patterns and processes. Spatially explicit datasets with information on intraspecific trait variability are rare but offer great promise in improving our understanding of functional biogeography. At the same time, they offer specific challenges in terms of data imputation. Here we compare statistical imputation approaches, using varying levels of environmental information, for five plant traits (leaf biomass to sapwood area ratio, leaf nitrogen content, maximum tree height, leaf mass per area and wood density) in a spatially explicit plant trait dataset of temperate and Mediterranean tree species (Ecological and Forest Inventory of Catalonia, IEFC, dataset for Catalonia, north-east Iberian Peninsula, 31 900 km2). We simulated gaps at different missingness levels (10-80 %) in a complete trait matrix, and we used overall trait means, species means, k nearest neighbours (kNN), ordinary and regression kriging, and multivariate imputation using chained equations (MICE) to impute missing trait values. We assessed these methods in terms of their accuracy and of their ability to preserve trait distributions, multi-trait correlation structure and bivariate trait relationships. The relatively good performance of mean and species mean imputations in terms of accuracy masked a poor representation of trait distributions and multivariate trait structure. Species identity improved MICE imputations for all traits, whereas forest structure and topography improved imputations for some traits. No method performed best consistently for the five studied traits, but, considering all traits and performance metrics, MICE informed by relevant ecological variables gave the best results. However, at higher missingness (> 30 %), species mean imputations and regression kriging tended to outperform MICE for some traits. MICE informed by relevant ecological variables allowed us to fill the gaps in the IEFC incomplete dataset (5495 plots) and quantify imputation uncertainty. Resulting spatial patterns of the studied traits in Catalan forests were broadly similar when using species means, regression kriging or the best-performing MICE application, but some important discrepancies were observed at the local level. Our results highlight the need to assess imputation quality beyond just imputation accuracy and show that including environmental information in statistical imputation approaches yields more plausible imputations in spatially explicit plant trait datasets.
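In the same spirit, chained-equation imputation can be sketched with scikit-learn's IterativeImputer, a MICE-style imputer; the study's actual implementation is not specified here, and the data below are synthetic stand-ins for traits and ecological covariates.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))      # columns: stand-ins for traits + covariates
X[:, 0] += 0.8 * X[:, 4]           # a trait informed by an ecological variable
mask = rng.random(X.shape) < 0.3   # simulate 30 % missingness, as in the gap test
X_obs = X.copy()
X_obs[mask] = np.nan

imp = IterativeImputer(max_iter=20, sample_posterior=True, random_state=0)
X_hat = imp.fit_transform(X_obs)
print("RMSE on held-out cells:", np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2)))
```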
Protein simulation using coarse-grained two-bead multipole force field with polarizable water models
NASA Astrophysics Data System (ADS)
Li, Min; Zhang, John Z. H.
2017-02-01
A recently developed two-bead multipole force field (TMFF) is employed in coarse-grained (CG) molecular dynamics (MD) simulation of proteins in combination with polarizable CG water models: the Martini polarizable water model and a modified big multipole water model. Significant improvement in the simulated structures and dynamics of proteins is observed, in terms of both the root-mean-square deviations (RMSDs) of the structures and the residue root-mean-square fluctuations (RMSFs) from the native ones, compared with simulation results using Martini's non-polarizable water model. Our results show that TMFF simulation using CG water models gives much more stable secondary structures of proteins without the need to add extra interaction potentials to constrain the secondary structures. Our results also show that when the MD time step is increased from 2 fs to 6 fs, the RMSD and RMSF results remain in excellent agreement with those from all-atom simulations. The current study demonstrates clearly that applying TMFF together with a polarizable CG water model significantly improves the accuracy and efficiency of CG simulation of proteins.
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS). Several improvement strategies are also adopted: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a random linear method; and, finally, the tabu search algorithm is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical rationale for the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and exploits the advantages of each. The method is validated on the standard benchmark sequences: Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms single algorithms in the accuracy of the computed protein sequence energy value, proving it an effective way to predict protein structure.
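A compact skeleton of the hybridization idea, assuming a generic energy function: PSO velocities get an added stochastic disturbance, and the worst particles receive a GA-style mutation. The tabu-search layer of PGATS is omitted for brevity, and all coefficients are illustrative.

```python
import numpy as np

def hybrid_pso(f, dim, n_particles=30, iters=200, seed=0):
    # Toy sketch: PSO with a stochastic disturbance term plus a
    # mutation step applied to the poorest particles.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        disturb = 0.05 * rng.normal(size=x.shape)   # stochastic disturbance factor
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x) + disturb
        x = x + v
        worst = np.argsort(pbest_f)[-3:]            # GA-style mutation of the worst
        x[worst] += rng.normal(scale=0.5, size=(3, dim))
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

best, energy = hybrid_pso(lambda z: np.sum(z ** 2), dim=5)
print(energy)
```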
NASA Astrophysics Data System (ADS)
Gupta, Shaurya; Guha, Daipayan; Jakubovic, Raphael; Yang, Victor X. D.
2017-02-01
Computer-assisted navigation is used by surgeons in spine procedures to guide pedicle screws, improving placement accuracy and, in some cases, better visualizing the patient's underlying anatomy. Intraoperative registration is performed to establish a correlation between the patient's anatomy and the pre/intra-operative image. Current algorithms rely on seeding points obtained directly from the exposed spinal surface to achieve clinically acceptable registration accuracy. Registration of these three-dimensional surface point clouds is prone to various systematic errors. The goal of this study was to evaluate the robustness of surgical navigation systems by examining the relationship between the optical density of an acquired 3D point cloud and the corresponding surgical navigation error. A retrospective review was conducted of 48 registrations performed using an experimental structured-light navigation system developed within our lab. For each registration, the number of points in the acquired point cloud was evaluated relative to whether the registration was acceptable, the corresponding system-reported error, and the target registration error. It was demonstrated that the number of points in the point cloud correlates neither with the acceptance/rejection of a registration nor with the system-reported error. However, a negative correlation was observed between the number of points in the point cloud and the corresponding sagittal angular error. Thus, system-reported total registration points and accuracy are insufficient to gauge the accuracy of a navigation system, and the operating surgeon must verify and validate the registration against anatomical landmarks prior to commencing surgery.
Development of an integrated BEM approach for hot fluid structure interaction
NASA Technical Reports Server (NTRS)
Dargush, G. F.; Banerjee, P. K.; Shi, Y.
1990-01-01
A comprehensive boundary element method is presented for transient thermoelastic analysis of hot section Earth-to-Orbit engine components. This time-domain formulation requires discretization of only the surface of the component, and thus provides an attractive alternative to finite element analysis for this class of problems. In addition, steep thermal gradients, which often occur near the surface, can be captured more readily since with a boundary element approach there are no shape functions to constrain the solution in the direction normal to the surface. For example, the circular disc analysis indicates the high level of accuracy that can be obtained. In fact, on the basis of reduced modeling effort and improved accuracy, it appears that the present boundary element method should be the preferred approach for general problems of transient thermoelasticity.
G3X-K theory: A composite theoretical method for thermochemical kinetics
NASA Astrophysics Data System (ADS)
da Silva, Gabriel
2013-02-01
A composite theoretical method for accurate thermochemical kinetics, G3X-K, is described. This method is accurate to around 0.5 kcal mol-1 for barrier heights and 0.8 kcal mol-1 for enthalpies of formation. G3X-K is a modification of G3SX theory using the M06-2X density functional for structures and zero-point energies and parameterized for a test set of 223 heats of formation and 23 barrier heights. A reduced perturbation-order variant, G3X(MP3)-K, is also developed, providing around 0.7 kcal mol-1 accuracy for barrier heights and 0.9 kcal mol-1 accuracy for enthalpies, at reduced computational cost. Some opportunities to further improve Gn composite methods are identified and briefly discussed.
Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions
NASA Astrophysics Data System (ADS)
Khoury, Mehdi; Liu, Honghai
This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.
Smart drug release systems based on stimuli-responsive polymers.
Qing, Guangyan; Li, Minmin; Deng, Lijing; Lv, Ziyu; Ding, Peng; Sun, Taolei
2013-07-01
Stimuli-responsive polymers can respond to external stimuli, such as temperature, pH, photo-irradiation, electric fields, and biomolecules in solution, which induce reversible transformations in the structures and conformations of the polymers, providing an excellent platform for controllable drug release in which the accuracy of drug delivery is markedly improved. In this review, recent progress in drug release systems based on stimuli-responsive polymers is summarized. In these systems, drugs can be released in an intelligent mode with high accuracy and efficiency, while potential damage to normal cells and tissues can be effectively prevented owing to the unique characteristics of the materials. Moreover, we introduce some smart nanoparticle-polymer conjugates and drug release devices that are especially suitable for long-term sustained drug release.
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, Timothy K.; Chrostowski, Jon D.
1991-01-01
Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results was used to create statistical databases for large truss-type space structures and both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were within ± one-sigma intervals of predicted accuracy for the most part, demonstrating the validity of the methodology and computer code.
Han, Houzeng; Xu, Tianhe; Wang, Jian
2016-01-01
Precise Point Positioning (PPP) makes use of the undifferenced pseudorange and carrier phase measurements with ionospheric-free (IF) combinations to achieve centimeter-level positioning accuracy. Conventionally, the IF ambiguities are estimated as float values. To improve the PPP positioning accuracy and shorten the convergence time, the integer phase clock model with between-satellites single-difference (BSSD) operation is used to recover the integer property. However, the continuity and availability of stand-alone PPP is largely restricted by the observation environment. The positioning performance will be significantly degraded when GPS operates under challenging environments, if less than five satellites are present. A commonly used approach is integrating a low cost inertial sensor to improve the positioning performance and robustness. In this study, a tightly coupled (TC) algorithm is implemented by integrating PPP with inertial navigation system (INS) using an Extended Kalman filter (EKF). The navigation states, inertial sensor errors and GPS error states are estimated together. The troposphere constrained approach, which utilizes external tropospheric delay as virtual observation, is applied to further improve the ambiguity-fixed height positioning accuracy, and an improved adaptive filtering strategy is implemented to improve the covariance modelling considering the realistic noise effect. A field vehicular test with a geodetic GPS receiver and a low cost inertial sensor was conducted to validate the improvement on positioning performance with the proposed approach. The results show that the positioning accuracy has been improved with inertial aiding. Centimeter-level positioning accuracy is achievable during the test, and the PPP/INS TC integration achieves a fast re-convergence after signal outages. For troposphere constrained solutions, a significant improvement for the height component has been obtained. The overall positioning accuracies of the height component are improved by 30.36%, 16.95% and 24.07% for three different convergence times, i.e., 60, 50 and 30 min, respectively. It shows that the ambiguity-fixed horizontal positioning accuracy has been significantly improved. When compared with the conventional PPP solution, it can be seen that position accuracies are improved by 19.51%, 61.11% and 23.53% for the north, east and height components, respectively, after one hour convergence through the troposphere constraint fixed PPP/INS with adaptive covariance model. PMID:27399721
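The troposphere constraint amounts to one extra scalar measurement in the Kalman update. A generic numpy sketch of that update follows; the state layout, delay value, and noise level are chosen purely for illustration and are not from the paper.

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    # Standard Kalman measurement update; the troposphere constraint is
    # just one additional scalar "virtual" observation of the zenith
    # tropospheric delay state, with R set from the external model's
    # uncertainty.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Hypothetical state: [pos(3), vel(3), ztd]; constrain only index 6.
x = np.zeros(7)
P = np.eye(7)
H = np.zeros((1, 7)); H[0, 6] = 1.0
z = np.array([2.31])            # external zenith delay, metres (illustrative)
R = np.array([[0.01 ** 2]])
x, P = ekf_update(x, P, z, H, R)
```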
Spatial correlation of shear-wave velocity within San Francisco Bay Sediments
Thompson, E.M.; Baise, L.G.; Kayen, R.E.
2006-01-01
Sediment properties are spatially variable at all scales, and this variability at smaller scales influences high frequency ground motions. We show that surface shear-wave velocity is highly correlated within San Francisco Bay Area sediments using shear-wave velocity measurements from 210 seismic cone penetration tests. We use this correlation to estimate the surface sediment velocity structure using geostatistics. We find that the variance of the estimated shear-wave velocity is reduced using ordinary kriging, and that including this velocity structure in 2D ground motion simulations of a moderate sized earthquake improves the accuracy of the synthetics. Copyright ASCE 2006.
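A minimal ordinary-kriging sketch in the same vein; the exponential variogram and its parameters are placeholders rather than values fitted to the Bay Area data, and the sites and velocities are synthetic.

```python
import numpy as np

def ordinary_krige(coords, vals, targets, sill=1.0, vrange=5.0, nugget=0.1):
    def gamma(h):                        # exponential variogram (placeholder)
        return nugget + sill * (1.0 - np.exp(-h / vrange))
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    np.fill_diagonal(A[:n, :n], 0.0)     # gamma(0) = 0 by definition
    A[n, n] = 0.0                        # Lagrange-multiplier row/column
    preds = []
    for t in targets:
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(coords - t, axis=1))
        w = np.linalg.solve(A, b)
        preds.append(w[:n] @ vals)       # kriged estimate at the target
    return np.array(preds)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, (50, 2))         # site locations, km (synthetic)
vals = 200 + 50 * np.sin(coords[:, 0])       # shear-wave velocities, m/s (synthetic)
print(ordinary_krige(coords, vals, np.array([[5.0, 5.0]])))
```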
Mapping Winter Wheat with Multi-Temporal SAR and Optical Images in an Urban Agricultural Region
Zhou, Tao; Pan, Jianjun; Zhang, Peiyu; Wei, Shanbao; Han, Tao
2017-01-01
Winter wheat is the second largest food crop in China. It is important to obtain reliable winter wheat acreage to guarantee food security for the most populous country in the world. This paper focuses on assessing the feasibility of in-season winter wheat mapping and investigating the potential classification improvement from using SAR (Synthetic Aperture Radar) images, optical images, and the integration of both types of data in urban agricultural regions with complex planting structures in Southern China. Both SAR (Sentinel-1A) and optical (Landsat-8) data were acquired, and classification using different combinations of Sentinel-1A-derived information and optical images was performed using a support vector machine (SVM) and a random forest (RF) method. The interferometric coherence and texture images were obtained and used to assess the effect of adding them to the backscatter intensity images on the classification accuracy. The results showed that the use of four Sentinel-1A images acquired before the jointing period of winter wheat can provide satisfactory winter wheat classification accuracy, with an F1 measure of 87.89%. The combination of SAR and optical images for winter wheat mapping achieved the best F1 measure, up to 98.06%. The SVM was superior to RF in terms of the overall accuracy and the kappa coefficient, and was faster than RF, while the RF classifier was slightly better than SVM in terms of the F1 measure. In addition, the classification accuracy can be effectively improved by adding the texture and coherence images to the backscatter intensity data. PMID:28587066
Enabling image fusion for a CT guided needle placement robot
NASA Astrophysics Data System (ADS)
Seifabadi, Reza; Xu, Sheng; Aalamifar, Fereshteh; Velusamy, Gnanasekar; Puhazhendi, Kaliyappan; Wood, Bradford J.
2017-03-01
Purpose: This study presents the development and integration of hardware and software that enables ultrasound (US) and computed tomography (CT) fusion for an FDA-approved CT-guided needle placement robot. Having a real-time US image registered to an a priori-taken intraoperative CT image provides more anatomic information during needle insertion, in order to target hard-to-see lesions or avoid critical structures invisible to CT, track target motion, and better monitor the ablation treatment zone in relation to the tumor location. Method: A passive encoded mechanical arm was developed for the robot in order to hold and track an abdominal US transducer. This 4 degrees of freedom (DOF) arm is designed to attach to the robot end-effector. The arm is locked by default and is released by a press of a button. The arm is designed such that the needle is always in plane with the US image. The articulated arm was calibrated to improve its accuracy. Custom-designed software (OncoNav, NIH) was developed to fuse the real-time US image to the a priori-taken CT. Results: The accuracy of the end effector before and after passive arm calibration was 7.07 ± 4.14 mm and 1.74 ± 1.60 mm, respectively. The accuracy of the US image to arm calibration was 5 mm. The feasibility of US-CT fusion using the proposed hardware and software was demonstrated in a commercial abdominal phantom. Conclusions: Calibration significantly improved the accuracy of the arm in US image tracking. Fusion of US to CT using the proposed hardware and software was feasible.
Park, Ji Eun; Park, Bumwoo; Kim, Ho Sung; Choi, Choong Gon; Jung, Seung Chai; Oh, Joo Young; Lee, Jae-Hong; Roh, Jee Hoon; Shim, Woo Hyun
2017-01-01
Objective To identify potential imaging biomarkers of Alzheimer's disease by combining brain cortical thickness (CThk) and functional connectivity and to validate this model's diagnostic accuracy in a validation set. Materials and Methods Data from 98 subjects was retrospectively reviewed, including a study set (n = 63) and a validation set from the Alzheimer's Disease Neuroimaging Initiative (n = 35). From each subject, data for CThk and functional connectivity of the default mode network was extracted from structural T1-weighted and resting-state functional magnetic resonance imaging. Cortical regions with significant differences between patients and healthy controls in the correlation of CThk and functional connectivity were identified in the study set. The diagnostic accuracy of functional connectivity measures combined with CThk in the identified regions was evaluated against that in the medial temporal lobes using the validation set and application of a support vector machine. Results Group-wise differences in the correlation of CThk and default mode network functional connectivity were identified in the superior temporal (p < 0.001) and supramarginal gyrus (p = 0.007) of the left cerebral hemisphere. Default mode network functional connectivity combined with the CThk of those two regions were more accurate than that combined with the CThk of both medial temporal lobes (91.7% vs. 75%). Conclusion Combining functional information with CThk of the superior temporal and supramarginal gyri in the left cerebral hemisphere improves diagnostic accuracy, making it a potential imaging biomarker for Alzheimer's disease. PMID:29089831
Identifying and reducing error in cluster-expansion approximations of protein energies.
Hahn, Seungsoo; Ashenberg, Orr; Grigoryan, Gevorg; Keating, Amy E
2010-12-01
Protein design involves searching a vast space for sequences that are compatible with a defined structure. This can pose significant computational challenges. Cluster expansion is a technique that can accelerate the evaluation of protein energies by generating a simple functional relationship between sequence and energy. The method consists of several steps. First, for a given protein structure, a training set of sequences with known energies is generated. Next, this training set is used to expand energy as a function of clusters consisting of single residues, residue pairs, and higher order terms, if required. The accuracy of the sequence-based expansion is monitored and improved using cross-validation testing and iterative inclusion of additional clusters. As a trade-off for evaluation speed, the cluster-expansion approximation causes prediction errors, which can be reduced by including more training sequences, including higher order terms in the expansion, and/or reducing the sequence space described by the cluster expansion. This article analyzes the sources of error and introduces a method whereby accuracy can be improved by judiciously reducing the described sequence space. The method is applied to describe the sequence-stability relationship for several protein structures: coiled-coil dimers and trimers, a PDZ domain, and T4 lysozyme as examples with computationally derived energies, and SH3 domains in amphiphysin-1 and endophilin-1 as examples where the expanded pseudo-energies are obtained from experiments. Our open-source software package Cluster Expansion Version 1.0 allows users to expand their own energy function of interest and thereby apply cluster expansion to custom problems in protein design. © 2010 Wiley Periodicals, Inc.
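A toy point-cluster expansion, assuming computationally derived training energies: sequences are one-hot encoded per position and a regularized linear fit recovers per-residue contributions. Real applications add pair clusters and the cross-validation-driven truncation described above; everything below is synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"
L = 6                                    # toy sequence length
rng = np.random.default_rng(0)
true_w = rng.normal(size=(L, len(AA)))   # hidden point-cluster energies

def one_hot(seq):
    # Point-cluster features: indicator of residue identity at each position.
    x = np.zeros((L, len(AA)))
    for i, aa in enumerate(seq):
        x[i, AA.index(aa)] = 1.0
    return x.ravel()

train_seqs = ["".join(rng.choice(list(AA), L)) for _ in range(300)]
X = np.array([one_hot(s) for s in train_seqs])
y = X @ true_w.ravel() + rng.normal(scale=0.05, size=len(X))  # training energies

model = Ridge(alpha=1e-3).fit(X, y)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())
```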
Mapping Migratory Bird Prevalence Using Remote Sensing Data Fusion
Swatantran, Anu; Dubayah, Ralph; Goetz, Scott; Hofton, Michelle; Betts, Matthew G.; Sun, Mindy; Simard, Marc; Holmes, Richard
2012-01-01
Background Improved maps of species distributions are important for effective management of wildlife under increasing anthropogenic pressures. Recent advances in lidar and radar remote sensing have shown considerable potential for mapping forest structure and habitat characteristics across landscapes. However, their relative efficacies and integrated use in habitat mapping remain largely unexplored. We evaluated the use of lidar, radar and multispectral remote sensing data in predicting multi-year bird detections or prevalence for 8 migratory songbird species in the unfragmented temperate deciduous forests of New Hampshire, USA. Methodology and Principal Findings A set of 104 predictor variables describing vegetation vertical structure and variability from lidar, phenology from multispectral data and backscatter properties from radar data were derived. We tested the accuracies of these variables in predicting prevalence using Random Forests regression models. All data sets showed more than 30% predictive power with radar models having the lowest and multi-sensor synergy (“fusion”) models having highest accuracies. Fusion explained between 54% and 75% variance in prevalence for all the birds considered. Stem density from discrete return lidar and phenology from multispectral data were among the best predictors. Further analysis revealed different relationships between the remote sensing metrics and bird prevalence. Spatial maps of prevalence were consistent with known habitat preferences for the bird species. Conclusion and Significance Our results highlight the potential of integrating multiple remote sensing data sets using machine-learning methods to improve habitat mapping. Multi-dimensional habitat structure maps such as those generated from this study can significantly advance forest management and ecological research by facilitating fine-scale studies at both stand and landscape level. PMID:22235254
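The sensor-group comparison can be sketched with out-of-bag scores from scikit-learn's RandomForestRegressor; the predictor groups and the prevalence response below are synthetic stand-ins, not the study's 104 variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 400
lidar = rng.normal(size=(n, 3))    # e.g., stem density, canopy height, variability
optical = rng.normal(size=(n, 2))  # e.g., phenology metrics
radar = rng.normal(size=(n, 2))    # backscatter properties
prevalence = 0.5 * lidar[:, 0] + 0.3 * optical[:, 0] + rng.normal(scale=0.3, size=n)

# Fit one model per sensor group plus a fused model, mirroring the comparison.
for name, X in [("lidar", lidar), ("optical", optical), ("radar", radar),
                ("fusion", np.hstack([lidar, optical, radar]))]:
    rf = RandomForestRegressor(n_estimators=200, oob_score=True,
                               random_state=0).fit(X, prevalence)
    print(name, f"OOB R^2 = {rf.oob_score_:.2f}")
```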
Efficient 3D porous microstructure reconstruction via Gaussian random field and hybrid optimization.
Jiang, Z; Chen, W; Burkhart, C
2013-11-01
Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing the material properties based on finite element analysis. Whereas directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in the literature to generate (reconstruct) a 3D microstructure from its 2D images: one characterizes the microstructure based on certain statistical descriptors, typically the two-point correlation function and the cluster correlation function, and then performs an optimization process to build a 3D structure that matches those statistical descriptors; the other models the microstructure using stochastic models such as a Gaussian random field and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but computationally the optimization process can be very intensive, especially for problems with large image size; the latter generates a 3D microstructure quickly but sacrifices accuracy due to issues in numerical implementation. A hybrid optimization approach to modelling the 3D porous microstructure of random isotropic two-phase materials is proposed in this paper, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified for 3D reconstructions based on silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
Multi-scale hippocampal parcellation improves atlas-based segmentation accuracy
NASA Astrophysics Data System (ADS)
Plassard, Andrew J.; McHugo, Maureen; Heckers, Stephan; Landman, Bennett A.
2017-02-01
Known for its distinct role in memory, the hippocampus is one of the most studied regions of the brain. Recent advances in magnetic resonance imaging have allowed for high-contrast, reproducible imaging of the hippocampus. Typically, a trained rater takes 45 minutes to manually trace the hippocampus and delineate the anterior from the posterior segment at millimeter resolution. As a result, there has been significant demand for automated and robust segmentation of the hippocampus. In this work we use a population of 195 atlases based on T1-weighted MR images with the left and right hippocampus delineated into head and body. We initialize the multi-atlas segmentation to a region directly around each lateralized hippocampus to both speed up and improve the accuracy of registration. This initialization allows for the incorporation of nearly 200 atlases, an accomplishment that would typically involve hundreds of hours of computation per target image. The proposed segmentation results in a Dice similarity coefficient over 0.9 for the full hippocampus. This result outperforms a multi-atlas segmentation using the BrainCOLOR atlases (Dice 0.85) and FreeSurfer (Dice 0.75). Furthermore, the head and body delineation resulted in a Dice coefficient over 0.87 for both structures. The head and body volume measurements also show high reproducibility on the Kirby 21 reproducibility population (R2 greater than 0.95, p < 0.05 for all structures). This work signifies the first result in ongoing work to develop a robust tool for measurement of the hippocampus and other temporal lobe structures.
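The evaluation metric used throughout is easy to state in code; a minimal Dice implementation for binary label volumes, with synthetic masks standing in for automatic and manual segmentations.

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary label volumes.
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64, 64), dtype=bool)
auto[20:40, 20:40, 20:40] = True        # automatic segmentation (synthetic)
manual = np.zeros_like(auto)
manual[22:40, 20:38, 20:40] = True      # manual trace (synthetic)
print(f"Dice = {dice(auto, manual):.3f}")
```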
Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao
2015-09-01
The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem, and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the attitude transforms between adjacent star image frames, is proposed to improve upon existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement in attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noise and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.
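The claimed square-root scaling is the familiar averaging of independent noise; a quick Monte Carlo check under that idealized assumption (the single-frame noise level is illustrative, not a measured value).

```python
import numpy as np

# Combining N attitude-correlated frames should reduce random attitude
# error roughly as 1/sqrt(N), assuming independent per-frame noise.
rng = np.random.default_rng(0)
sigma_single = 10.0                      # arcsec, single-frame noise (illustrative)
for N in (1, 4, 9, 16):
    frames = rng.normal(scale=sigma_single, size=(100_000, N))
    combined = frames.mean(axis=1)       # gyro-aligned frames averaged
    print(N, f"{combined.std():.2f} arcsec"
             f"  (predicted {sigma_single / np.sqrt(N):.2f})")
```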
Improving the recommender algorithms with the detected communities in bipartite networks
NASA Astrophysics Data System (ADS)
Zhang, Peng; Wang, Duo; Xiao, Jinghua
2017-04-01
Recommender systems offer a powerful tool for addressing the information overload problem and have thus gained wide attention from scholars and engineers. A key challenge is how to make recommendations more accurate and personalized. We notice that community structures widely exist in many real networks, which can significantly affect recommendation results. By incorporating information on detected communities into the recommendation algorithms, an improved recommendation approach for networks with communities is proposed. The approach is examined in both artificial and real networks; the results show that the improvements in accuracy and diversity can reach 20% and 7%, respectively. This reveals that it is beneficial to classify the nodes based on their inherent properties in recommender systems.
NASA Astrophysics Data System (ADS)
Zeng, Jing; Huang, Handong; Li, Huijie; Miao, Yuxin; Wen, Junxiang; Zhou, Fei
2017-12-01
The main emphasis of exploration and development is shifting from simple structural reservoirs to complex reservoirs, which are characterized by complex structure, thin reservoir thickness, and large burial depth. Faced with these complex geological features, hydrocarbon detection technology provides a direct indication of changes in hydrocarbon reservoirs and a good approach for delimiting the distribution of underground reservoirs. It is common to utilize the time-frequency (TF) features of seismic data in detecting hydrocarbon reservoirs. We therefore investigate the complex-domain matching pursuit (CDMP) method and propose some improvements. The first is the introduction of a scale parameter, which corrects the defect that atomic waveforms change only with the frequency parameter. Its introduction not only decomposes the seismic signal with high accuracy and efficiency but also reduces the number of iterations. We also integrate a jumping search with an ergodic search to improve computational efficiency while maintaining reasonable accuracy. We then combine the improved CDMP with the Wigner-Ville distribution to obtain a high-resolution TF spectrum. A one-dimensional modeling experiment proves the validity of our method. Based on the low-frequency-domain reflection coefficient in fluid-saturated porous media, we finally derive an approximation formula for the mobility attributes of the reservoir fluid. This approximation formula is used as a hydrocarbon identification factor to predict the deep-water gas-bearing sands of the M oil field in the South China Sea. The results are consistent with actual well test results, and our method can help inform future exploration of deep-water gas reservoirs.
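The core of any matching-pursuit variant is the greedy project-and-subtract loop. A minimal real-valued sketch over unit-norm atoms follows; CDMP itself uses complex atoms with the added scale parameter and the jumping/ergodic search described above, none of which is reproduced here.

```python
import numpy as np

def matching_pursuit(signal, atoms, n_iter=8):
    # Greedy matching pursuit over a dictionary of unit-norm atoms (rows):
    # at each step, pick the atom most correlated with the residual and
    # peel off its contribution.
    residual = signal.astype(float).copy()
    picks = []
    for _ in range(n_iter):
        corr = atoms @ residual               # projections onto every atom
        k = int(np.argmax(np.abs(corr)))
        picks.append((k, corr[k]))
        residual -= corr[k] * atoms[k]
    return picks, residual

t = np.linspace(0, 1, 256)
atoms = np.array([np.cos(2 * np.pi * f * t) for f in range(1, 40)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
sig = 3 * atoms[9] + 1.5 * atoms[24]          # two-component test signal
picks, res = matching_pursuit(sig, atoms)
print(picks[:2], np.linalg.norm(res))
```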
Zhu, Yanan; Ouyang, Qi; Mao, Youdong
2017-07-21
Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs thus can be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits an improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid the selection of unwanted particles and non-particles even when true particles contain fewer features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template. It demonstrates an improved performance, objectivity and accuracy. Application of this novel method is expected to free the labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.
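A toy PyTorch classifier for candidate particle patches, illustrating the learned, template-free scoring; this is not the DeepEM eight-layer architecture, just a minimal stand-in with arbitrary patch size and channel counts.

```python
import torch
import torch.nn as nn

class TinyParticleCNN(nn.Module):
    # Toy binary particle/non-particle classifier for micrograph patches.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, x):
        return self.net(x)

model = TinyParticleCNN()
patches = torch.randn(8, 1, 64, 64)   # 64x64 candidate windows from a micrograph
logits = model(patches)               # scores for non-particle vs particle
print(logits.shape)
```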
Graham, Emily B.; Knelman, Joseph E.; Schindlbacher, Andreas; Siciliano, Steven; Breulmann, Marc; Yannarell, Anthony; Beman, J. M.; Abell, Guy; Philippot, Laurent; Prosser, James; Foulquier, Arnaud; Yuste, Jorge C.; Glanville, Helen C.; Jones, Davey L.; Angel, Roey; Salminen, Janne; Newton, Ryan J.; Bürgmann, Helmut; Ingram, Lachlan J.; Hamer, Ute; Siljanen, Henri M. P.; Peltoniemi, Krista; Potthast, Karin; Bañeras, Lluís; Hartmann, Martin; Banerjee, Samiran; Yu, Ri-Qing; Nogaro, Geraldine; Richter, Andreas; Koranda, Marianne; Castle, Sarah C.; Goberna, Marta; Song, Bongkeun; Chatterjee, Amitava; Nunes, Olga C.; Lopes, Ana R.; Cao, Yiping; Kaisermann, Aurore; Hallin, Sara; Strickland, Michael S.; Garcia-Pausas, Jordi; Barba, Josep; Kang, Hojeong; Isobe, Kazuo; Papaspyrou, Sokratis; Pastorelli, Roberta; Lagomarsino, Alessandra; Lindström, Eva S.; Basiliko, Nathan; Nemergut, Diana R.
2016-01-01
Microorganisms are vital in mediating the earth’s biogeochemical cycles; yet, despite our rapidly increasing ability to explore complex environmental microbial communities, the relationship between microbial community structure and ecosystem processes remains poorly understood. Here, we address a fundamental and unanswered question in microbial ecology: ‘When do we need to understand microbial community structure to accurately predict function?’ We present a statistical analysis investigating the value of environmental data and microbial community structure independently and in combination for explaining rates of carbon and nitrogen cycling processes within 82 global datasets. Environmental variables were the strongest predictors of process rates but left 44% of variation unexplained on average, suggesting the potential for microbial data to increase model accuracy. Although only 29% of our datasets were significantly improved by adding information on microbial community structure, we observed improvement in models of processes mediated by narrow phylogenetic guilds via functional gene data, and conversely, improvement in models of facultative microbial processes via community diversity metrics. Our results also suggest that microbial diversity can strengthen predictions of respiration rates beyond microbial biomass parameters, as 53% of models were improved by incorporating both sets of predictors compared to 35% by microbial biomass alone. Our analysis represents the first comprehensive analysis of research examining links between microbial community structure and ecosystem function. Taken together, our results indicate that a greater understanding of microbial communities informed by ecological principles may enhance our ability to predict ecosystem process rates relative to assessments based on environmental variables and microbial physiology. PMID:26941732
Rayarao, Geetha; Biederman, Robert W W; Williams, Ronald B; Yamrozik, June A; Lombardi, Richard; Doyle, Mark
2018-01-01
To establish the clinical validity and accuracy of automatic thresholding and manual trimming (ATMT) by comparing the method with the conventional contouring method for in vivo cardiac volume measurements. Cardiac magnetic resonance (CMR) imaging was performed on 40 subjects (30 patients and 10 controls) using steady-state free precession cine sequences with slices oriented in the short axis and acquired contiguously from base to apex. Left ventricular (LV) volumes, end-diastolic volume, end-systolic volume, and stroke volume (SV) were obtained with ATMT and with the conventional contouring method. Additionally, SV was measured independently using CMR phase velocity mapping (PVM) of the aorta for validation. The three methods of calculating SV were compared by applying Bland-Altman analysis. The Bland-Altman standard deviation of variation (SD) and offset bias of LV SV for the three pairs of methods were: ATMT-PVM (7.65, [Formula: see text]), ATMT-contours (7.85, [Formula: see text]), and contour-PVM (11.01, 4.97), respectively. Equating the observed range to the error contribution of each approach, the error magnitudes of ATMT:PVM:contours were in the ratio 1:2.4:2.5. Use of ATMT for measuring ventricular volumes accommodates trabeculae and papillary structures more intuitively than contemporary contouring methods. This results in lower variation when analyzing cardiac structure and function and, consequently, improved accuracy in assessing chamber volumes.
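The agreement statistics quoted above are standard Bland-Altman quantities. As a rough illustration, the Python sketch below (with made-up stroke volumes, not the study's data) computes the offset bias and SD of the paired differences for one method pair, which is how each (SD, bias) tuple above would be obtained.

```python
# Minimal Bland-Altman sketch; the stroke volumes are hypothetical.
import numpy as np

def bland_altman(a, b):
    """Return (bias, sd) of the paired differences a - b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    return diff.mean(), diff.std(ddof=1)

# Hypothetical stroke volumes (mL) from two methods for a few subjects.
sv_atmt = [72.0, 85.5, 64.2, 90.1, 78.3]
sv_pvm = [70.4, 88.0, 61.9, 92.5, 77.0]

bias, sd = bland_altman(sv_atmt, sv_pvm)
print(f"bias = {bias:.2f} mL, SD = {sd:.2f} mL")
print(f"95% limits of agreement: {bias - 1.96*sd:.2f} to {bias + 1.96*sd:.2f} mL")
```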
Accuracy Improvement of Neutron Nuclear Data on Minor Actinides
NASA Astrophysics Data System (ADS)
Harada, Hideo; Iwamoto, Osamu; Iwamoto, Nobuyuki; Kimura, Atsushi; Terada, Kazushi; Nakao, Taro; Nakamura, Shoji; Mizuyama, Kazuhito; Igashira, Masayuki; Katabuchi, Tatsuya; Sano, Tadafumi; Takahashi, Yoshiyuki; Takamiya, Koichi; Pyeon, Cheol Ho; Fukutani, Satoshi; Fujii, Toshiyuki; Hori, Jun-ichi; Yagi, Takahiro; Yashima, Hiroshi
2015-05-01
Improving the accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems that transmute these nuclei. To meet this requirement, the project entitled "Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)" was started as part of the "Innovative Nuclear Research and Development Program" in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating the forefront knowledge and techniques of these fields, the team aims to improve the accuracy of the data. The background and research plan of the AIMAC project are presented.
Nguyen, Hai; Pérez, Alberto; Bermeo, Sherry; Simmerling, Carlos
2016-01-01
The Generalized Born (GB) implicit solvent model has undergone significant improvements in accuracy for modeling of proteins and small molecules. However, GB remains a less widely explored option for nucleic acid simulations, in part because fast GB models are often unable to maintain stable nucleic acid structures, or they introduce structural bias in proteins, making it difficult to apply GB models in simulations of protein-nucleic acid complexes. Recently, GB-neck2 was developed to improve the behavior of protein simulations. In an effort to create a more accurate model for nucleic acids, a similar procedure to the development of GB-neck2 is described here for nucleic acids. The resulting parameter set significantly reduces absolute and relative energy errors relative to Poisson-Boltzmann for both nucleic acids and nucleic acid-protein complexes, when compared to its predecessor GB-neck model. This improvement in solvation energy calculation translates to increased structural stability for simulations of DNA and RNA duplexes, quadruplexes, and protein-nucleic acid complexes. The GB-neck2 model also enables successful folding of small DNA and RNA hairpins to near-native structures as determined from comparison with experiment. The functional form and all required parameters are provided here and are also implemented in the AMBER software. PMID:26574454
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Han-miao, E-mail: chenghanmiao@hust.edu.cn; Li, Hong-bin, E-mail: lihongbin@hust.edu.cn; State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan 430074
Existing electronic transformer calibration systems that employ data acquisition cards cannot satisfy some practical applications, because they exhibit phase measurement errors when operating in the mode of receiving external synchronization signals. This paper proposes an improved calibration system scheme with phase correction to improve the phase measurement accuracy. We employ an NI PCI-4474 card to design a calibration system, and the system can receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification has been carried out at the China Electric Power Research Institute, and the results demonstrate that the system surpasses accuracy class 0.05. Furthermore, this system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers. In the same process, we used an existing calibration system, and a comparison of the test results is presented. The improved system is suitable for the intended applications.
Overcoming Sequence Misalignments with Weighted Structural Superposition
Khazanov, Nickolay A.; Damm-Ganamet, Kelly L.; Quang, Daniel X.; Carlson, Heather A.
2012-01-01
An appropriate structural superposition identifies similarities and differences between homologous proteins that are not evident from sequence alignments alone. We have coupled our Gaussian-weighted RMSD (wRMSD) tool with a sequence aligner and seed extension (SE) algorithm to create a robust technique for overlaying structures and aligning sequences of homologous proteins (HwRMSD). HwRMSD overcomes errors in the initial sequence alignment that would normally propagate into a standard RMSD overlay. SE can generate a corrected sequence alignment from the improved structural superposition obtained by wRMSD. HwRMSD's robust performance and its superiority over standard RMSD are demonstrated over a range of homologous proteins. Its improved overlays yield corrected sequence alignments in good agreement with HOMSTRAD. Finally, HwRMSD is compared to established structural alignment methods: FATCAT, SSM, CE, and Dalilite. Most methods are comparable at placing residue pairs within 2 Å, but HwRMSD places many more residue pairs within 1 Å, providing a clear advantage. Such high accuracy is essential in drug design, where small distances can have a large impact on computational predictions. This level of accuracy is also needed to correct sequence alignments in an automated fashion, especially for omics-scale analysis. HwRMSD can align homologs with low sequence identity and large conformational differences, cases where both sequence-based and structure-based methods may fail. The HwRMSD pipeline overcomes the dependency of structural overlays on initial sequence pairing and removes the need to determine the best sequence-alignment method, substitution matrix, and gap parameters for each unique pair of homologs. PMID:22733542
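To make the Gaussian-weighting idea concrete, here is a minimal Python sketch, an assumption-laden illustration rather than the published HwRMSD code: a weighted Kabsch superposition alternated with Gaussian re-weighting, so that divergent regions are progressively down-weighted. The scale parameter c and the synthetic coordinates are placeholders.

```python
import numpy as np

def weighted_superpose(P, Q, w):
    """Weighted Kabsch: rotate/translate P onto Q minimizing sum w_i |p_i - q_i|^2."""
    w = w / w.sum()
    Pc = P - (w[:, None] * P).sum(0)
    Qc = Q - (w[:, None] * Q).sum(0)
    U, S, Vt = np.linalg.svd((w[:, None] * Pc).T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    P_fit = Pc @ R.T
    wrmsd = np.sqrt((w * ((P_fit - Qc) ** 2).sum(1)).sum())
    return P_fit, Qc, wrmsd

# Iterate: Gaussian weights down-weight poorly matching residue pairs.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
Q = P + 0.1 * rng.normal(size=(50, 3))
Q[:5] += 3.0                                        # a divergent loop region
w = np.ones(50)
for _ in range(10):
    P_fit, Qc, wrmsd = weighted_superpose(P, Q, w)
    w = np.exp(-((P_fit - Qc) ** 2).sum(1) / 4.0)   # Gaussian with c = 2, say
print(f"converged wRMSD = {wrmsd:.3f}")
```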
Ryabov, Yaroslav; Fushman, David
2008-01-01
We present a simple and robust approach that uses the overall rotational diffusion tensor as a structural constraint for domain positioning in multidomain proteins and protein-protein complexes. This method offers the possibility to use NMR relaxation data for detailed structure characterization of such systems provided the structures of individual domains are available. The proposed approach extends the concept of using long-range information contained in the overall rotational diffusion tensor. In contrast to the existing approaches, we use both the principal axes and principal values of protein’s rotational diffusion tensor to determine not only the orientation but also the relative positioning of the individual domains in a protein. This is achieved by finding the domain arrangement in a molecule that provides the best possible agreement with all components of the overall rotational diffusion tensor derived from experimental data. The accuracy of the proposed approach is demonstrated for two protein systems with known domain arrangement and parameters of the overall tumbling: the HIV-1 protease homodimer and Maltose Binding Protein. The accuracy of the method and its sensitivity to domain positioning is also tested using computer-generated data for three protein complexes, for which the experimental diffusion tensors are not available. In addition, the proposed method is applied here to determine, for the first time, the structure of both open and closed conformations of Lys48-linked di-ubiquitin chain, where domain motions render impossible accurate structure determination by other methods. The proposed method opens new avenues for improving structure characterization of proteins in solution. PMID:17550252
NASA Astrophysics Data System (ADS)
Hou, Zeyu; Lu, Wenxi
2018-05-01
Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disasters, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes support vector regression (SVR) and kernel extreme learning machine (KELM) models as additional surrogate-model choices. The surrogate model is key because it replaces the simulation model, reducing the huge computational burden of the iterations required by the simulation-optimization technique to solve GCSI problems, especially for aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, the influence of parameter optimization and of the structure of the training sample dataset on the approximation accuracy of the surrogate model is analyzed. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses under given operating conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process while maintaining high computational accuracy.
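As a schematic of the surrogate-modelling step (not the paper's code), the Python sketch below fits a kernel surrogate to synthetic input-output pairs standing in for simulator runs; SVR is used because it is one of the surrogates compared, and a grid search stands in for the parameter optimization whose importance the study reports. All data and parameter ranges are placeholders.

```python
# Illustrative surrogate-model sketch: learn (source parameters -> simulated
# response) so the trained model can replace the expensive simulator inside
# the optimization loop. Synthetic data only.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 4))   # e.g. source location / strength
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

# Parameter tuning matters (the paper reports large gains for the tuned
# KELM); here the SVR analogue is tuned with a small grid search.
search = GridSearchCV(SVR(kernel="rbf"),
                      {"C": [1, 10, 100], "gamma": [0.1, 1, 10]},
                      cv=5)
search.fit(X, y)
print("best params:", search.best_params_)
print("surrogate prediction:", search.best_estimator_.predict(X[:3]))
```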
NASA Astrophysics Data System (ADS)
Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald
2005-10-01
Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by airborne and spaceborne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model-based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength miscalibration. FLAASH relies on the MODTRAN radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high-fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.
Zhang, Jiawen; He, Shaohui; Wang, Dahai; Liu, Yangpeng; Yao, Wenbo; Liu, Xiabing
2018-01-01
Based on the operating Chegongzhuang heat-supplying tunnel in Beijing, the reliability of its lining structure under the action of large thrust and thermal effects is studied. According to the service characteristics of a heat-supplying tunnel, a three-dimensional numerical analysis model was established based on mechanical tests of in-situ specimens. The stress and strain of the tunnel structure were obtained before and after the operation. Comparison with field monitoring data verified the rationality of the model. After the internal forces of the lining structure were extracted to define the performance function, an improved subset simulation method was proposed to calculate the reliability of the main control section of the tunnel. In contrast to the traditional calculation method, an analytic relationship between the numbers of samples required by the subset simulation method and by the Monte Carlo method was given. The results indicate that the lining structure is greatly influenced by coupling within six meters of the fixed brackets, especially at the tunnel floor. The improved subset simulation method can greatly save computation time and improve computational efficiency while preserving the accuracy of the calculation. It is suitable for reliability calculations in tunnel engineering, because "the lower the probability, the more efficient the calculation." PMID:29401691
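The efficiency claim rests on the subset decomposition of a small failure probability into a product of larger conditional probabilities. The Python sketch below is an illustrative toy only: a real subset simulation draws the conditional samples with MCMC and adaptive thresholds, whereas here the identity P(F) = P(F1)·P(F2|F1)·P(F3|F2) is simply verified by filtering one large standard-normal sample; the limit state and levels are placeholders.

```python
# Crude Monte Carlo vs. the subset decomposition for a toy limit state
# g(x) = 3.5 - x, failure when g(x) < 0 with x ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)
print("crude MC estimate:", np.mean(3.5 - x < 0))

# Subset decomposition with two fixed intermediate levels (nested events x > b).
levels = [2.0, 3.0, 3.5]
p, lo = 1.0, -np.inf
for b in levels:
    cond = x[x > lo]          # samples conditioned on the previous event
    p *= np.mean(cond > b)    # conditional probability P(x > b | x > lo)
    lo = b
print("subset product estimate:", p)
```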
Baxter, Suzanne Domel; Smith, Albert F; Hardin, James W; Nichols, Michele D
2007-04-01
Validation study data are used to illustrate that conclusions about children's reporting accuracy for energy and macronutrients over multiple interviews (ie, time) depend on the analytic approach for comparing reported and reference information: conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Children were observed eating school meals on 1 day (n=12), or 2 (n=13) or 3 (n=79) nonconsecutive days separated by ≥25 days, and interviewed in the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (ie, protein, carbohydrate, and fat), and compared. The main outcome measures, for energy and each macronutrient, were report rates (reported/reference), correspondence rates (genuine accuracy measures), and inflation ratios (error measures); mixed-model analyses were used. Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (all four P values >0.61). Using the reporting-error-sensitive approach for analyzing energy and macronutrients, correspondence rates increased over interviews (all four P values <0.04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. When analyzed using the reporting-error-sensitive approach, children's dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients.
Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.
2008-01-01
Objective Validation-study data are used to illustrate that conclusions about children’s reporting accuracy for energy and macronutrients over multiple interviews (ie, time) depend on the analytic approach for comparing reported and reference information—conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Subjects and design Children were observed eating school meals on one day (n = 12), or two (n = 13) or three (n = 79) nonconsecutive days separated by ≥25 days, and interviewed in the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (protein, carbohydrate, fat), and compared. Main outcome measures For energy and each macronutrient: report rates (reported/reference), correspondence rates (genuine accuracy measures), inflation ratios (error measures). Statistical analyses Mixed-model analyses. Results Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (Ps > .61). Using the reporting-error-sensitive approach for analyzing energy and macronutrients, correspondence rates increased over interviews (Ps < .04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. Conclusions When analyzed using the reporting-error-sensitive approach, children’s dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. Applications The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients. PMID:17383265
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
NASA Astrophysics Data System (ADS)
Gu, Yongzhen; Duan, Baoyan; Du, Jingli
2018-05-01
The electrostatically controlled deployable membrane antenna (ECDMA) is a promising space structure owing to its low weight, large aperture, and high precision. However, accurately describing the coupled field between the electrostatic domain and the membrane structure is an extreme challenge. A direct coupled method is applied to solve this coupled problem in this paper. First, the membrane structure and the electrostatic field are uniformly described in terms of energy, since the coupled problem is an energy-conserving phenomenon. Then the governing equilibrium equations of the directly coupled electrostatic-structural field are obtained by an energy variation approach. Numerical results show that the direct coupled method improves computing efficiency by 36% compared with the traditional indirect coupled method at the same level of accuracy. Finally, a prototype has been manufactured and tested, and the ECDMA finite element simulations show good agreement with the experimental results, with a maximum surface error difference of 6%.
The role of blood vessels in high-resolution volume conductor head modeling of EEG.
Fiederer, L D J; Vorwerk, J; Lucka, F; Dannhauer, M; Yang, S; Dümpelmann, M; Schulze-Bonhage, A; Aertsen, A; Speck, O; Wolters, C H; Ball, T
2016-03-01
Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we for the first time quantify the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10⁶ nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which allowed reducing computation times substantially, and quantified the importance of the blood vessel compartment by computing forward and inverse errors resulting from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approximately 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as the insula or the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach magnitudes similar to those previously reported for neglecting white matter anisotropy, the CSF or the dura - structures which are generally considered important components of realistic EEG head models. Our findings thus imply that including a realistic blood vessel compartment in EEG head models will help improve the accuracy of EEG source analyses, particularly when high accuracy is required in brain areas with dense vasculature.
NASA Technical Reports Server (NTRS)
Kahn, W. D.
1984-01-01
The spaceborne gravity gradiometer is a potential sensor for mapping the fine structure of the Earth's gravity field. Error analyses were performed to investigate the accuracy of the determination of the Earth's gravity field from a gravity field satellite mission. The orbital height of the spacecraft is the dominating parameter as far as gravity field resolution and accuracies are concerned.
A theoretical and experimental benchmark study of core-excited states in nitrogen
Myhre, Rolf H.; Wolf, Thomas J. A.; Cheng, Lan; ...
2018-02-14
The high resolution near edge X-ray absorption fine structure spectrum of nitrogen displays the vibrational structure of the core-excited states. This makes nitrogen well suited for assessing the accuracy of different electronic structure methods for core excitations. We report high resolution experimental measurements performed at the SOLEIL synchrotron facility. These are compared with theoretical spectra calculated using coupled cluster theory and algebraic diagrammatic construction theory. The coupled cluster singles and doubles with perturbative triples model known as CC3 is shown to accurately reproduce the experimental excitation energies as well as the spacing of the vibrational transitions. In conclusion, the computational results are also shown to be systematically improved within the coupled cluster hierarchy, with the coupled cluster singles, doubles, triples, and quadruples method faithfully reproducing the experimental vibrational structure.
El-Kadi, A. I.; Torikai, J.D.
2001-01-01
The objective of this paper is to identify water-flow patterns in part of an active landslide, through the use of numerical simulations and data obtained during a field study. The approaches adopted include measuring rainfall events and pore-pressure responses in both saturated and unsaturated soils at the site. To account for soil variability, the Richards equation is solved within deterministic and stochastic frameworks. The deterministic simulations considered average water-retention data, adjusted retention data to account for stones or cobbles, retention functions for a heterogeneous pore structure, and continuous retention functions for preferential flow. The stochastic simulations applied the Monte Carlo approach which considers statistical distribution and autocorrelation of the saturated conductivity and its cross correlation with the retention function. Although none of the models is capable of accurately predicting field measurements, appreciable improvement in accuracy was attained using stochastic, preferential flow, and heterogeneous pore-structure models. For the current study, continuum-flow models provide reasonable accuracy for practical purposes, although they are expected to be less accurate than multi-domain preferential flow models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Koushik; Jawulski, Konrad; Pastorczak, Ewa
A perfect-pairing generalized valence bond (GVB) approximation is known to be one of the simplest approximations, which allows one to capture the essence of static correlation in molecular systems. In spite of its attractive feature of being relatively computationally efficient, this approximation misses a large portion of dynamic correlation and does not offer sufficient accuracy to be generally useful for studying electronic structure of molecules. We propose to correct the GVB model and alleviate some of its deficiencies by amending it with the correlation energy correction derived from the recently formulated extended random phase approximation (ERPA). On the examples of systems of diverse electronic structures, we show that the resulting ERPA-GVB method greatly improves upon the GVB model. ERPA-GVB recovers most of the electron correlation and it yields energy barrier heights of excellent accuracy. Thanks to a balanced treatment of static and dynamic correlation, ERPA-GVB stays reliable when one moves from systems dominated by dynamic electron correlation to those for which the static correlation comes into play.
Integrated Computational Solution for Predicting Skin Sensitization Potential of Molecules
Desai, Aarti; Singh, Vivek K.; Jere, Abhay
2016-01-01
Introduction: Skin sensitization forms a major toxicological endpoint for dermatology and cosmetic products. The recent ban on animal testing for cosmetics demands alternative methods. We developed an integrated computational solution (SkinSense) that offers a robust solution and addresses the limitations of existing computational tools, i.e., high false-positive rates and/or limited coverage. Results: The key components of our solution include: QSAR models selected from a combinatorial set, similarity information and literature-derived sub-structure patterns of known skin protein reactive groups. Its prediction performance on a challenge set of molecules showed accuracy = 75.32%, CCR = 74.36%, sensitivity = 70.00% and specificity = 78.72%, which is better than several existing tools, including VEGA (accuracy = 45.00% and CCR = 54.17% with 'High' reliability scoring), DEREK (accuracy = 72.73% and CCR = 71.44%) and TOPKAT (accuracy = 60.00% and CCR = 61.67%). Although TIMES-SS showed higher predictive power (accuracy = 90.00% and CCR = 92.86%), its coverage was very low (only 10 out of 77 molecules were predicted reliably). Conclusions: Owing to improved prediction performance and coverage, our solution can serve as a useful expert system towards Integrated Approaches to Testing and Assessment for skin sensitization. It would be invaluable to the cosmetics/dermatology industry for pre-screening molecules, and for reducing time, cost and animal testing. PMID:27271321
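For reference, the reported performance measures follow from a 2×2 confusion matrix. In the Python sketch below, CCR is computed as balanced accuracy (the mean of sensitivity and specificity), and the counts are hypothetical values chosen because they reproduce the reported percentages for a 77-molecule set; they are not taken from the paper.

```python
# Standard binary-classification metrics from confusion-matrix counts.
def metrics(tp, tn, fp, fn):
    sens = tp / (tp + fn)                    # sensitivity (recall on positives)
    spec = tn / (tn + fp)                    # specificity (recall on negatives)
    acc = (tp + tn) / (tp + tn + fp + fn)    # overall accuracy
    ccr = 0.5 * (sens + spec)                # balanced accuracy
    return acc, ccr, sens, spec

# Hypothetical counts consistent with the reported percentages.
acc, ccr, sens, spec = metrics(tp=21, tn=37, fp=10, fn=9)
print(f"accuracy={acc:.2%} CCR={ccr:.2%} "
      f"sensitivity={sens:.2%} specificity={spec:.2%}")
```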
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook
2015-03-07
We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set manifests that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward a complete basis set limit by simply decreasing the scaling factor regardless of systems.
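For intuition, a one-dimensional Lagrange-sinc basis function on a uniform grid with scaling factor h can be sketched as below (Python; the grid and scaling factor are placeholders, and the production code would use three-dimensional products of such functions). The cardinal property, equal to 1 at its own grid point and 0 at all others, is what ties the basis to the uniform grid.

```python
# One-dimensional Lagrange-sinc basis function: phi_i(x) = sinc((x - x_i)/h).
import numpy as np

def lagrange_sinc(i, x, h, x0=0.0):
    xi = x0 + i * h
    return np.sinc((x - xi) / h)      # np.sinc(t) = sin(pi t) / (pi t)

h = 0.2                               # analogous to the ~0.226 bohr scaling factor
grid = np.arange(-5, 6) * h
print(lagrange_sinc(0, grid, h))      # 1 at x_0, exactly 0 at other grid points
```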
STRUM: structure-based prediction of protein stability changes upon single-point mutation.
Quan, Lijun; Lv, Qiang; Zhang, Yang
2016-10-01
Mutations in the human genome arise mainly through single nucleotide polymorphism, some of which can affect the stability and function of proteins, causing human diseases. Several methods have been proposed to predict the effect of mutations on protein stability; but most require features from an experimental structure. Given the fast progress in protein structure prediction, this work explores the possibility of improving mutation-induced stability change prediction using low-resolution structure modeling. We developed a new method (STRUM) for predicting stability change caused by single-point mutations. Starting from wild-type sequences, 3D models are constructed by the iterative threading assembly refinement (I-TASSER) simulations, where physics- and knowledge-based energy functions are derived on the I-TASSER models and used to train STRUM models through gradient boosting regression. STRUM was assessed by 5-fold cross validation on 3421 experimentally determined mutations from 150 proteins. The Pearson correlation coefficient (PCC) between predicted and measured changes of the Gibbs free-energy gap, ΔΔG, upon mutation reaches 0.79 with a root-mean-square error of 1.2 kcal/mol in the mutation-based cross-validations. The PCC is reduced if training and test mutations are separated by non-homologous proteins, which reflects inherent correlations in the current mutation sample. Nevertheless, the results significantly outperform other state-of-the-art methods, including those built on experimental protein structures. Detailed analyses show that the most sensitive features in STRUM are the physics-based energy terms on I-TASSER models and the conservation scores from multiple-threading template alignments. However, the ΔΔG prediction accuracy has only a marginal dependence on the accuracy of protein structure models as long as the global fold is correct. These data demonstrate the feasibility of using low-resolution structure modeling for high-accuracy stability change prediction upon point mutations. Availability: http://zhanglab.ccmb.med.umich.edu/STRUM/. Contact: qiang@suda.edu.cn and zhng@umich.edu. Supplementary data are available at Bioinformatics online.
STRUM: structure-based prediction of protein stability changes upon single-point mutation
Quan, Lijun; Lv, Qiang; Zhang, Yang
2016-01-01
Motivation: Mutations in human genome are mainly through single nucleotide polymorphism, some of which can affect stability and function of proteins, causing human diseases. Several methods have been proposed to predict the effect of mutations on protein stability; but most require features from experimental structure. Given the fast progress in protein structure prediction, this work explores the possibility to improve the mutation-induced stability change prediction using low-resolution structure modeling. Results: We developed a new method (STRUM) for predicting stability change caused by single-point mutations. Starting from wild-type sequences, 3D models are constructed by the iterative threading assembly refinement (I-TASSER) simulations, where physics- and knowledge-based energy functions are derived on the I-TASSER models and used to train STRUM models through gradient boosting regression. STRUM was assessed by 5-fold cross validation on 3421 experimentally determined mutations from 150 proteins. The Pearson correlation coefficient (PCC) between predicted and measured changes of Gibbs free-energy gap, ΔΔG, upon mutation reaches 0.79 with a root-mean-square error 1.2 kcal/mol in the mutation-based cross-validations. The PCC reduces if separating training and test mutations from non-homologous proteins, which reflects inherent correlations in the current mutation sample. Nevertheless, the results significantly outperform other state-of-the-art methods, including those built on experimental protein structures. Detailed analyses show that the most sensitive features in STRUM are the physics-based energy terms on I-TASSER models and the conservation scores from multiple-threading template alignments. However, the ΔΔG prediction accuracy has only a marginal dependence on the accuracy of protein structure models as long as the global fold is correct. These data demonstrate the feasibility to use low-resolution structure modeling for high-accuracy stability change prediction upon point mutations. Availability and Implementation: http://zhanglab.ccmb.med.umich.edu/STRUM/ Contact: qiang@suda.edu.cn and zhng@umich.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27318206
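As an illustration of the training step described above, the following Python sketch fits a gradient boosting regressor to synthetic feature vectors standing in for the I-TASSER-derived energy terms and conservation scores. It is a schematic of the approach under those assumptions, not the STRUM pipeline, and the hyperparameters are placeholders.

```python
# Gradient boosting regression from structural features to ddG (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))        # stand-ins for energy terms, conservation scores
ddG = 0.8 * X[:, 0] + 0.3 * X[:, 3] ** 2 + 0.2 * rng.normal(size=500)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3)
scores = cross_val_score(model, X, ddG, cv=5, scoring="r2")
print("5-fold CV R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```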
Geometrical accuracy improvement in flexible roll forming lines
NASA Astrophysics Data System (ADS)
Larrañaga, J.; Berner, S.; Galdos, L.; Groche, P.
2011-01-01
The general interest in producing profiles with variable cross section in a cost-effective way has increased in the last few years. The flexible roll forming process allows profiles with a lengthwise-variable cross section to be produced in a continuous way. Until now, only a few flexible roll forming lines have been developed and built. Apart from the flange wrinkling along the transition zone of u-profiles with variable cross section, the process limits have not been investigated, and solutions for shape deviations are unknown. During the PROFOM project, a flexible roll forming machine was developed with the objective of producing high-technology components for automotive body structures. In order to investigate the limits of the process, different profile geometries and steel grades, including high strength steels, were applied. During the first experimental tests, several errors were identified as a result of the complex stress states generated during the forming process. In order to improve the accuracy of the target profiles and to meet the tolerance demands of the automotive industry, a thermo-mechanical solution has been proposed. Additional mechanical devices supporting the flexible roll forming process have been implemented in the roll forming line together with local heating techniques. The combination of both methods shows a significant increase in accuracy. In the present investigation, the experimental results of the validation process are presented.
Bonacina, Silvia; Cancer, Alice; Lanzi, Pier Luca; Lorusso, Maria Luisa; Antonietti, Alessandro
2015-01-01
The core deficit underlying developmental dyslexia (DD) has been identified as a difficulty in processing dynamic and rapidly changing auditory information, which contributes to the development of impaired phonological representations for words. It has been argued that enhancing basic musical rhythm perception skills in children with DD may have a positive effect on reading abilities, because music and language share common mechanisms and transfer effects from the former to the latter are thus expected to occur. A computer-assisted training, called Rhythmic Reading Training (RRT), was designed in which reading exercises are combined with a rhythmic background. Fourteen junior high school students with DD took part in 9 biweekly individual sessions of 30 min in which RRT was implemented. Reading improvements after the intervention period were compared with those of a matched control group of 14 students with DD who received no intervention. Results indicated that RRT had a positive effect on both reading speed and accuracy, with significant effects on short pseudo-word reading speed, long pseudo-word reading speed, high-frequency long word reading accuracy, and text reading accuracy. No difference in rhythm perception between the intervention and control groups was found. These findings suggest that rhythm facilitates the development of reading skills because of the temporal structure it imposes on word decoding. PMID:26500581
Improving Machining Accuracy of CNC Machines with Innovative Design Methods
NASA Astrophysics Data System (ADS)
Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.
2018-03-01
The article considers achieving the machining accuracy of CNC machines by applying innovative methods to the modelling and design of machining systems, drives and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods, and it offers the visual clarity inherent in both topological models and structural matrices, as well as the resilience of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and exploitation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min⁻¹, and improve machining accuracy.
Application of Numerical Integration and Data Fusion in Unit Vector Method
NASA Astrophysics Data System (ADS)
Zhang, J.
2012-01-01
The Unit Vector Method (UVM) is a family of orbit determination methods designed by Purple Mountain Observatory (PMO) that has been applied extensively. It forms the conditional equations for different kinds of data by projecting the basic equation onto different unit vectors, and it lends itself to weighting different kinds of data, so that high-precision data can play a major role in the orbit determination and its accuracy improves appreciably. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement, unified the two dynamically, and further improved precision and efficiency. In this thesis, further research has been carried out on the basis of the UVM. First, as observation methods and techniques improve, the types and accuracy of observational data are improving substantially, which raises the demands on orbit determination; analytical perturbation theory can no longer meet these requirements. Numerical integration of the perturbations has therefore been introduced into the UVM, so that the accuracy of the dynamical model matches the accuracy of the real data, and the condition equations of the UVM have been modified accordingly; the accuracy of orbit determination improves further. Second, a data fusion method has been introduced into the UVM. The convergence mechanism and the defects of the weighting strategy in the original UVM have been clarified; the new method resolves these problems, simplifies the calculation of the approximate state transition matrix, and improves the weighting strategy for data of different dimensions and different precision. Orbit determination results for simulated and real data show that this work is effective: (1) with numerical integration introduced into the UVM, the accuracy of orbit determination improves markedly and suits the high-accuracy data of available observation apparatus, while the calculation is also clearly faster than the classical differential improvement with numerical integration; (2) with data fusion introduced into the UVM, the weighting distribution accords rationally with the accuracy of the different kinds of data, all data are fully used, and the method also shows good numerical stability.
Accuracy improvement of multimodal measurement of speed of sound based on image processing
NASA Astrophysics Data System (ADS)
Nitta, Naotaka; Kaya, Akio; Misawa, Masaki; Hyodo, Koji; Numano, Tomokazu
2017-07-01
Since the speed of sound (SOS) reflects tissue characteristics and is expected to serve as an evaluation index of elasticity and water content, a noninvasive measurement of SOS is eagerly anticipated. However, it is difficult to measure the SOS using an ultrasound device alone. Therefore, we have presented a noninvasive SOS measurement method using ultrasound (US) and magnetic resonance (MR) images. In this method, we determine the longitudinal SOS from the thickness measured in the MR image and the time of flight (TOF) measured in the US image. The accuracy of the SOS measurement is affected by the accuracy of image registration and by the accuracy of the thickness measurements in the MR and US images. In this study, we address the latter, and present an image-processing-based method for improving the accuracy of the thickness measurement. The method was investigated using in vivo data obtained from a tissue-engineered cartilage implanted in the back of a rat, with an unclear boundary.
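A minimal numeric sketch of the SOS estimate (Python, with hypothetical numbers): assuming the US device measures a round-trip echo time through the tissue layer whose thickness comes from the MR image, the estimate is c = 2d/t; a one-way transmission setup would use c = d/t instead. Whether the TOF is one-way or round-trip is an assumption here, not taken from the abstract.

```python
# Multimodal SOS estimate: MR gives thickness, US gives time of flight.
d = 3.2e-3          # tissue thickness from MR segmentation, m (hypothetical)
t = 4.2e-6          # round-trip TOF from the US echo, s (hypothetical)
c = 2 * d / t       # round-trip assumption; one-way would be c = d / t
print(f"estimated speed of sound: {c:.0f} m/s")   # ~1524 m/s
```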
NASA Astrophysics Data System (ADS)
Hui-Hui, Xia; Rui-Feng, Kan; Jian-Guo, Liu; Zhen-Yu, Xu; Ya-Bai, He
2016-06-01
An improved algebraic reconstruction technique (ART) combined with tunable diode laser absorption spectroscopy (TDLAS) is presented in this paper for determining the two-dimensional (2D) distribution of H2O concentration and temperature in a simulated combustion flame. This work simulates the reconstruction of spectroscopic measurements by a multi-view parallel-beam scanning geometry and analyzes the effect of the number of projection rays on reconstruction accuracy. Reconstruction quality increases dramatically as the number of projection rays grows to about 180 for a 20 × 20 grid; beyond that point, additional rays have little influence on reconstruction accuracy. The temperature reconstruction results are clearly more accurate than the water vapor concentration obtained by the traditional concentration calculation method. The present study also proposes an innovative way to reduce the error of the concentration reconstruction and greatly improve reconstruction quality, and the capability of this new method is evaluated using appropriate assessment parameters. With this new approach, not only is the concentration reconstruction accuracy greatly improved, but a suitable parallel-beam arrangement is also put forward for high reconstruction accuracy and simplicity of experimental validation. Finally, a bimodal structure of the combustion region is assumed to demonstrate the robustness and universality of the proposed method. Numerical investigation indicates that the proposed TDLAS tomographic algorithm is capable of recovering accurate temperature and concentration profiles, and this reconstruction scheme is expected to resolve several key issues in practical combustion devices. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205151), the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2014YQ060537), and the National Basic Research Program, China (Grant No. 2013CB632803).
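As a reference point for the reconstruction step, here is a minimal Python sketch of a classical ART (Kaczmarz) sweep under stated assumptions: a random system matrix stands in for the actual ray-path geometry, and the relaxation factor and sweep count are arbitrary placeholders, not the paper's improved algorithm. With fewer rays than unknowns the iteration converges to a minimum-norm consistent solution rather than the true field, which is one way to see why the ray count studied above matters.

```python
# One ART (Kaczmarz) sweep: each projection ray contributes a row a_i of
# A x = b, and the estimate is corrected ray by ray.
import numpy as np

def art_sweep(A, b, x, relax=0.5):
    for a_i, b_i in zip(A, b):
        denom = a_i @ a_i
        if denom > 0:
            x = x + relax * (b_i - a_i @ x) / denom * a_i
    return x

rng = np.random.default_rng(0)
x_true = rng.random(400)           # 20 x 20 grid of absorbance values
A = rng.random((180, 400))         # 180 projection rays (cf. the text)
b = A @ x_true

x = np.zeros(400)
for _ in range(50):                # 50 full sweeps over all rays
    x = art_sweep(A, b, x)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```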
New Global Calculation of Nuclear Masses and Fission Barriers for Astrophysical Applications
NASA Astrophysics Data System (ADS)
Möller, P.; Sierk, A. J.; Bengtsson, R.; Ichikawa, T.; Iwamoto, A.
2008-05-01
The FRDM(1992) mass model [1] has an accuracy of 0.669 MeV in the region where its parameters were determined. For the 529 masses that have been measured since, its accuracy is 0.46 MeV, which is encouraging for applications far from stability in astrophysics. We are developing an improved mass model, the FRDM(2008). The improvements in the calculations with respect to the FRDM(1992) are in two main areas. (1) The macroscopic model parameters are better optimized. By simulation (adjusting to a limited set of now known nuclei) we can show that this actually makes the results more reliable in new regions of nuclei. (2) The ground-state deformation parameters are more accurately calculated. We minimize the energy in a four-dimensional deformation space (ε2, ε3, ε4, ε6) using a grid interval of 0.01 in all 4 deformation variables. The (non-finalized) FRDM(2008-a) has an accuracy of 0.596 MeV with respect to the 2003 Audi mass evaluation before triaxial shape degrees of freedom are included (in progress). When triaxiality effects are incorporated, preliminary results indicate that the model accuracy will improve further, to about 0.586 MeV. We also discuss very large-scale fission-barrier calculations in the related FRLDM(2002) model, which has been shown to reproduce known fission properties very satisfactorily, for example barrier heights from 70Se to the heaviest elements, multiple fission modes in the Ra region, asymmetry of mass division in fission, and the triple-humped structure found in light actinides. In the superheavy region we find barriers consistent with the observed half-lives. We have completed production calculations and obtain barrier heights for 5254 nuclei heavier than A = 170 for all nuclei between the proton and neutron drip lines. The energy is calculated for 5009325 different shapes for each nucleus, and the optimum barrier between ground state and separated fragments is determined by use of an "immersion" technique.
NASA Astrophysics Data System (ADS)
Chistyy, Y.; Kuzakhmetova, E.; Fazilova, Z.; Tsukanova, O.
2018-03-01
Design issues at the junction of bridges and overhead roads with approach embankments are studied. The reasons for the formation of deformations in the road structure are indicated. Measures to ensure the stability and to accelerate the settlement of a weak subgrade beneath the approach embankment are listed. The necessity of taking into account the man-made impact of the approach embankment on subgrade behavior is proved. Modern stabilizing agents to improve the properties of the soils used in the embankment and the subgrade are suggested. A clarified methodology for determining the active zone of compression in the subgrade under the load from the weight of the embankment is described. As an additional condition to the existing methodology for establishing the lower bound of the active zone of compression, it is proposed to take into account the accuracy of the evaluation of soil compressibility and settlement.
Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis
NASA Technical Reports Server (NTRS)
Slojkowski, Steven E.
2014-01-01
Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended missions are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a dynamic multi-plate area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.
Data-driven train set crash dynamics simulation
NASA Astrophysics Data System (ADS)
Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun
2017-02-01
Traditional finite element (FE) methods are arguably expensive in the computation/simulation of train crashes. High computational cost limits their direct application to investigating the dynamic behaviour of an entire train set for crashworthiness design and structural optimisation. On the contrary, multi-body modelling is widely used because of its low computational cost, with a trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of a train set crash without increasing the computational burden. This is achieved with a parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts the force-displacement relation for a given collision condition from a collection of offline FE simulation data covering various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods; the results show that our data-driven method improves on the accuracy of traditional multi-body models in train crash simulation while running at the same level of efficiency.
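To make the data-driven idea concrete, the sketch below (Python with scikit-learn; the force law and sampling are synthetic placeholders, not FE data) trains a random forest to map a collision state to an interface force, which a fast multi-body integrator could then query instead of calling the FE solver.

```python
# Learn (impact velocity, crush displacement) -> interface force from
# offline simulation data, then query the model at run time.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
v = rng.uniform(5, 25, size=2000)            # crash velocity, m/s
s = rng.uniform(0, 0.5, size=2000)           # crush displacement, m
force = (1e5 * s * (1 + 0.02 * v)            # toy force-displacement law
         + 5e3 * rng.normal(size=2000))      # plus measurement-like noise

model = RandomForestRegressor(n_estimators=200, n_jobs=-1)  # parallel forest
model.fit(np.column_stack([v, s]), force)

# The multi-body integrator asks for the force at the current state:
print(model.predict([[15.0, 0.2]]))
```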
Warping an atlas derived from serial histology to 5 high-resolution MRIs.
Tullo, Stephanie; Devenyi, Gabriel A; Patel, Raihaan; Park, Min Tae M; Collins, D Louis; Chakravarty, M Mallar
2018-06-19
Previous work from our group demonstrated the use of multiple input atlases in a modified multi-atlas framework (MAGeT-Brain) to improve subject-based segmentation accuracy. Currently, segmentations of the striatum, globus pallidus and thalamus are generated from a single high-resolution, high-contrast MRI atlas derived from annotated serial histological sections. Here, we warp this atlas to five high-resolution MRI templates to create five de novo atlases. The overall goal of this work is to use these newly warped atlases as input to MAGeT-Brain in an effort to consolidate and improve the workflow presented in previous manuscripts from our group, allowing for simultaneous multi-structure segmentation. The work presented details the methodology used to create the atlases using a previously proposed technique, in which atlas labels are modified to mimic the intensity and contrast profile of MRI to facilitate atlas-to-template nonlinear transformation estimation. Dice's Kappa metric was used to demonstrate high-quality registration and segmentation accuracy of the atlases. The final atlases are available at https://github.com/CobraLab/atlases/tree/master/5-atlas-subcortical.
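For reference, the Dice's Kappa reported above is the standard volume-overlap score 2|A∩B|/(|A|+|B|). A minimal sketch on binary label volumes follows; the masks are synthetic stand-ins for an automated segmentation and a gold standard.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Synthetic example: an atlas-based segmentation vs. a manual gold standard.
auto = np.zeros((64, 64, 64), dtype=bool)
gold = np.zeros((64, 64, 64), dtype=bool)
auto[20:40, 20:40, 20:40] = True
gold[22:42, 20:40, 20:40] = True
print(f"Dice = {dice(auto, gold):.3f}")          # 0.900 for this overlap
```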
High-precision radiometric tracking for planetary approach and encounter in the inner solar system
NASA Technical Reports Server (NTRS)
Christensen, C. S.; Thurman, S. W.; Davidson, J. M.; Finger, M. H.; Folkner, W. M.
1989-01-01
The benefits of improved radiometric tracking data have been studied for planetary approach within the inner Solar System using the Mars Rover Sample Return trajectory as a model. It was found that the benefit of improved data to approach and encounter navigation was highly dependent on the a priori uncertainties assumed for several non-estimated parameters, including those for frame-tie, Earth orientation, troposphere delay, and station locations. With these errors at their current levels, navigational performance was found to be insensitive to enhancements in data accuracy. However, when expected improvements in these errors are modeled, performance with current-accuracy data significantly improves, with substantial further improvements possible with enhancements in data accuracy.
A Parametric Rosetta Energy Function Analysis with LK Peptides on SAM Surfaces.
Lubin, Joseph H; Pacella, Michael S; Gray, Jeffrey J
2018-05-08
Although structures have been determined for many soluble proteins and an increasing number of membrane proteins, experimental structure determination methods are limited for complexes of proteins and solid surfaces. An economical alternative or complement to experimental structure determination is molecular simulation. Rosetta is one software suite that models protein-surface interactions, but Rosetta is normally benchmarked on soluble proteins. For surface interactions, the validity of the energy function is uncertain because it is a combination of independent parameters from energy functions developed separately for solution proteins and mineral surfaces. Here, we assess the performance of the RosettaSurface algorithm and test the accuracy of its energy function by modeling the adsorption of leucine/lysine (LK)-repeat peptides on methyl- and carboxy-terminated self-assembled monolayers (SAMs). We investigated how RosettaSurface predictions for this system compare with the experimental results, which showed that on both surfaces, LK-α peptides folded into helices and LK-β peptides held extended structures. Utilizing this model system, we performed a parametric analysis of Rosetta's Talaris energy function and determined that adjusting solvation parameters offered improved predictive accuracy. Simultaneously increasing lysine carbon hydrophilicity and the hydrophobicity of the surface methyl head groups yielded computational predictions most closely matching the experimental results. De novo models should still be interpreted skeptically unless bolstered by experimental data in an integrative approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Getman, Daniel J
2008-01-01
Many attempts to observe changes in terrestrial systems over time would be significantly enhanced if it were possible to improve the accuracy of classifications of low-resolution historic satellite data. In an effort to examine improving the accuracy of historic satellite image classification by combining satellite and air photo data, two experiments were undertaken in which low-resolution multispectral data and high-resolution panchromatic data were combined and then classified using the ECHO spectral-spatial image classification algorithm and the Maximum Likelihood technique. The multispectral data consisted of 6 multispectral channels (30-meter pixel resolution) from Landsat 7. These data were augmented with panchromatic data (15m pixel resolution) from Landsat 7 in the first experiment, and with a mosaic of digital aerial photography (1m pixel resolution) in the second. The addition of the Landsat 7 panchromatic data provided a significant improvement in the accuracy of classifications made using the ECHO algorithm. Although the inclusion of aerial photography provided an improvement in accuracy, this improvement was only statistically significant at a 40-60% level. These results suggest that once error levels associated with combining aerial photography and multispectral satellite data are reduced, this approach has the potential to significantly enhance the precision and accuracy of classifications made using historic remotely sensed data, as a way to extend the time range of efforts to track temporal changes in terrestrial systems.
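As a hedged illustration of the classification stage (the Maximum Likelihood technique, not the ECHO algorithm, whose spectral-spatial grouping is beyond a short sketch): once the higher-resolution panchromatic band has been resampled onto the multispectral grid and stacked as a seventh channel, a per-class Gaussian maximum-likelihood classifier can be fit to training pixels and applied. All arrays, class counts, and values here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# 6 multispectral bands + 1 resampled panchromatic band per pixel.
n_pixels, n_bands, n_classes = 10_000, 7, 3
X = rng.normal(size=(n_pixels, n_bands))         # stacked pixel vectors
y = rng.integers(0, n_classes, size=n_pixels)    # training labels

# Fit a class-conditional Gaussian to each land-cover class.
means = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
covs = np.array([np.cov(X[y == c].T) for c in range(n_classes)])

def classify(x):
    """Assign pixels to the class with the highest Gaussian log-likelihood."""
    scores = []
    for m, S in zip(means, covs):
        d = x - m
        maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(S), d)
        scores.append(-0.5 * (maha + np.linalg.slogdet(S)[1]))
    return np.argmax(scores, axis=0)

labels = classify(X)                             # per-pixel class map
```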
NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Hu, Hai-xiang; Zhang, Zhi-yu; Zhang, Feng; Zheng, Li-gong; Zhang, Xue-jun
2017-08-01
A feasible way to improve the manufacturing efficiency of large reaction-bonded silicon carbide optics is to increase the processing accuracy in the grinding stage before polishing, which requires high-accuracy metrology. A swing arm profilometer (SAP) has been used to measure large optics during the grinding stage. A method has been developed to improve the measurement accuracy of SAP by using a capacitive probe and implementing calibrations. Compared with an interferometer test, the experimental result shows an accuracy of 0.068 μm root-mean-square (RMS), and maps reconstructed from 37 low-order Zernike terms show an accuracy of 0.048 μm RMS, demonstrating a powerful capability to provide a major input to high-precision grinding.
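To make the "37 low-order Zernike terms" figure concrete, here is a minimal sketch of one assumed reduction step (not the paper's calibration procedure): least-squares fitting of Noll-indexed Zernike polynomials to height samples on the unit disk, with the RMS residual as the accuracy number. Only the first six terms are written out; the test surface is synthetic defocus plus noise.

```python
import numpy as np

def zernike_basis(rho, theta):
    """First six Noll-indexed Zernike terms evaluated at polar samples."""
    return np.column_stack([
        np.ones_like(rho),                        # Z1 piston
        2.0 * rho * np.cos(theta),                # Z2 x tilt
        2.0 * rho * np.sin(theta),                # Z3 y tilt
        np.sqrt(3.0) * (2.0 * rho**2 - 1.0),      # Z4 defocus
        np.sqrt(6.0) * rho**2 * np.sin(2*theta),  # Z5 oblique astigmatism
        np.sqrt(6.0) * rho**2 * np.cos(2*theta),  # Z6 vertical astigmatism
    ])

rng = np.random.default_rng(2)
rho = np.sqrt(rng.uniform(0.0, 1.0, 5000))        # uniform over unit disk
theta = rng.uniform(0.0, 2.0 * np.pi, 5000)
height = 0.1 * (2.0 * rho**2 - 1.0) + rng.normal(0.0, 0.01, 5000)  # microns

A = zernike_basis(rho, theta)
coeffs, *_ = np.linalg.lstsq(A, height, rcond=None)
residual = height - A @ coeffs
print("RMS residual (um):", np.sqrt(np.mean(residual**2)))
```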
STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.
Bossuyt, Patrick M; Reitsma, Johannes B; Bruns, David E; Gatsonis, Constantine A; Glasziou, Paul P; Irwig, Les; Lijmer, Jeroen G; Moher, David; Rennie, Drummond; de Vet, Henrica C W; Kressel, Herbert Y; Rifai, Nader; Golub, Robert M; Altman, Douglas G; Hooft, Lotty; Korevaar, Daniël A; Cohen, Jérémie F
2015-12-01
Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.
Multiscale study of metal nanoparticles
NASA Astrophysics Data System (ADS)
Lee, Byeongchan
Extremely small structures with reduced dimensionality have emerged as a scientific motif for their interesting properties. In particular, metal nanoparticles have been identified as a fundamental material in many catalytic activities; as a consequence, a better understanding of the structure-function relationship of nanoparticles has become crucial. The functional analysis of nanoparticles, reactivity for example, requires an accurate method at the electronic structure level, whereas the structural analysis to find energetically stable local minima is beyond the scope of quantum mechanical methods as the computational cost becomes prohibitively high. The challenge is that the inherent length scale and accuracy associated with any single method hardly covers the broad scale range spanned by both structural and functional analyses. In order to address this, and effectively explore the energetics and reactivity of metal nanoparticles, a hierarchical multiscale modeling scheme is developed, where methodologies of different length scales, i.e. first principles density functional theory, atomistic calculations, and continuum modeling, are utilized in a sequential fashion. This work has focused on identifying the essential information that bridges two different methods so that a successive use of different methods is seamless. The bond characteristics of low coordination systems have been obtained with first principles calculations, and incorporated into the atomistic simulation. This also rectifies the deficiency of conventional interatomic potentials fitted to bulk properties, and improves the accuracy of atomistic calculations for nanoparticles. For the systematic shape selection of nanoparticles, we have improved the Wulff-type construction using a semi-continuum approach, in which atomistic surface energetics and crystallinity of materials are added on to the continuum framework. The developed multiscale modeling scheme is applied to the rational design of platinum nanoparticles in the range of 2.4 nm to 3.1 nm: energetically favorable structures have been determined in terms of semi-continuum binding energy, and the reactivity of the selected nanoparticle has been investigated based on local density of states from first principles calculations. The calculation suggests that the reactivity landscape of particles is more complex than the simple reactivity of clean surfaces, and the reactivity towards a particular reactant can be predicted for a given structure.
ERIC Educational Resources Information Center
de Bruin, Anique B. H.; Thiede, Keith W.; Camp, Gino; Redford, Joshua
2011-01-01
The ability to monitor understanding of texts, usually referred to as metacomprehension accuracy, is typically quite poor in adult learners; however, interventions have recently been developed to improve accuracy. In two experiments, we evaluated whether generating delayed keywords prior to judging comprehension improved metacomprehension accuracy…
Kim, Taeho; Frank, Cornelia; Schack, Thomas
2017-01-01
Action observation training and motor imagery training have independently been studied and considered as an effective training strategy for improving motor skill learning. However, comparative studies of the two training strategies are relatively few. The purpose of this study was to investigate the effects of action observation training and motor imagery training on the development of mental representation structure and golf putting performance as well as the relation between the changes in mental representation structure and skill performance during the early learning stage. Forty novices were randomly assigned to one of four groups: action observation training, motor imagery training, physical practice and no practice. The mental representation structure and putting performance were measured before and after 3 days of training, then after a 2-day retention period. The results showed that mental representation structure and the accuracy of the putting performance were improved over time through the two types of cognitive training (i.e., action observation training and motor imagery training). In addition, we found a significant positive correlation between changes in mental representation structure and skill performance for the action observation training group only. Taken together, these results suggest that both cognitive adaptations and skill improvement occur through the training of the two simulation states of action, and that perceptual-cognitive changes are associated with the change of skill performance for action observation training. PMID:29089881
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-01-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side-chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671
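The core consensus step can be sketched in a few lines, under an obvious assumption about the inputs (a matrix of per-model scores, one column per quality assessment method, all oriented so larger is better; the numbers below are made up). Normalising each method before averaging keeps any one score scale from dominating; this is a generic consensus scheme, not MULTICOM's exact weighting.

```python
import numpy as np

rng = np.random.default_rng(3)
n_models, n_methods = 50, 14
scores = rng.normal(size=(n_models, n_methods))   # one column per QA method

# z-score each method over the model pool, then average across methods.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
consensus = z.mean(axis=1)

ranking = np.argsort(consensus)[::-1]             # best model first
print("top-5 model indices:", ranking[:5])
```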
On a fast calculation of structure factors at a subatomic resolution.
Afonine, P V; Urzhumtsev, A
2004-01-01
In the last decade, the progress of protein crystallography has allowed several protein structures to be solved at a resolution higher than 0.9 Å. Such studies provide researchers with important new information reflecting very fine structural details. The signal from these details is very weak with respect to that corresponding to the whole structure. Its analysis requires high-quality data, which previously were available only for crystals of small molecules, and a high accuracy of calculation. The calculation of structure factors using direct formulae, traditional for 'small-molecule' crystallography, allows a relatively simple accuracy control. For macromolecular crystals, diffraction data sets at subatomic resolution contain hundreds of thousands of reflections, and the number of parameters used to describe the corresponding models may reach the same order. Therefore, the direct way of calculating structure factors becomes very time-consuming when applied to large molecules. These requirements of high accuracy and computational efficiency call for a re-examination of computer tools and algorithms. The calculation of model structure factors through an intermediate generation of an electron density [Sayre (1951). Acta Cryst. 4, 362-367; Ten Eyck (1977). Acta Cryst. A33, 486-492] may be much more computationally efficient, but contains some parameters (grid step, 'effective' atom radii, etc.) whose influence on the accuracy of the calculation is not straightforward. At the same time, choosing parameters within safety margins that largely ensure sufficient accuracy may result in a significant loss of CPU time, making it close to the time for direct-formulae calculations. The impact of the different parameters on the computational efficiency of structure-factor calculation is studied. It is shown that an appropriate choice of these parameters allows the structure factors to be obtained with high accuracy and in a significantly shorter time than that required when using the direct formulae. Practical algorithms for the optimal choice of the parameters are suggested.
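A minimal one-dimensional sketch of the two routes the abstract contrasts. To keep the comparison exact, the toy atoms sit precisely on grid nodes; in real calculations atoms fall between nodes and have finite extent, which is exactly where the grid-step and effective-atom-radius parameters studied here enter.

```python
import numpy as np

N = 64                                    # grid points along the unit cell
x = np.array([5, 17, 40]) / N             # fractional atomic coordinates
f = np.array([6.0, 8.0, 1.0])             # toy scattering factors

# Direct formula: F(h) = sum_j f_j * exp(2*pi*i*h*x_j), one term per atom
# per reflection.
h = np.arange(N)
F_direct = (f[None, :] * np.exp(2j * np.pi * h[:, None] * x[None, :])).sum(axis=1)

# Density route: accumulate atoms onto a grid, then a single FFT.
rho = np.zeros(N)
np.add.at(rho, np.rint(x * N).astype(int), f)
F_fft = np.fft.ifft(rho) * N              # ifft carries the +2*pi*i convention

print(np.allclose(F_direct, F_fft))       # True for on-grid point atoms
```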
Bridges, Daniel J; Pollard, Derek; Winters, Anna M; Winters, Benjamin; Sikaala, Chadwick; Renn, Silvia; Larsen, David A
2018-02-23
Indoor residual spraying (IRS) is a key tool in the fight to control, eliminate and ultimately eradicate malaria. IRS protection is based on a communal effect, such that an individual's protection relies primarily on the community-level coverage of IRS, with limited protection provided by household-level coverage. To ensure a communal effect is achieved through IRS, achieving high and uniform community-level coverage should be the top priority of an IRS campaign. Ensuring high community-level coverage of IRS in malaria-endemic areas is challenging given the lack of information about both the location and the number of households needing IRS in any given area. A process termed 'mSpray' has been developed and implemented; it involves the use of satellite imagery for enumeration in IRS planning and a mobile application to guide IRS implementation. This study assessed (1) the accuracy of the satellite enumeration and (2) how various degrees of spatial aid provided through the mSpray process affected community-level IRS coverage during the 2015 spray campaign in Zambia. A two-stage sampling process was applied to assess the accuracy of satellite enumeration in determining the number and location of sprayable structures. Results indicated an overall sensitivity of 94% for satellite enumeration compared with finding structures on the ground. After adjusting for structure size and roof and wall type, households in Nchelenge District, where all types of satellite-based spatial aids (paper-based maps plus the mobile mSpray application) were used, were more likely to have received IRS than households in Kasama District, where the maps used were not based on satellite enumeration. The probability of a household being sprayed in Nchelenge District, where tablet-based maps were used, did not differ statistically from that of a household in Samfya District, where detailed paper-based spatial aids based on satellite enumeration were provided. IRS coverage in the 2015 spray season benefited from the use of spatial aids based on satellite enumeration. These spatial aids can guide costly IRS planning and implementation, leading to higher spatial coverage and likely improving disease impact.
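For clarity, the 94% sensitivity figure is the usual confusion-matrix quantity, with structures found on the ground as the reference standard. A trivial sketch with made-up counts:

```python
# Hypothetical field-verification tallies (illustrative numbers only).
true_positives = 4700    # ground-verified structures also in the satellite layer
false_negatives = 300    # ground structures missing from the satellite layer

sensitivity = true_positives / (true_positives + false_negatives)
print(f"sensitivity = {sensitivity:.0%}")         # -> 94%
```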